Q: Killing process that has been created with popen2 I'm using the function popen2 (recommended elsewhere on Stack Overflow) to programmatically create a process that has to be killed again after some time. popen2 returns a PID, and I thought that this PID could be used to kill the process. It doesn't work this way, though. In order to kill it, I have to increment the returned PID, which I don't understand (see code below).
Another problem might arise when various threads are doing this in parallel. In that case the PIDs might differ by more than one due to race conditions, I think.
So my question is this: How can I reliably identify the PID of the process created by popen2 in a multi-threaded scenario?
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#define READ 0
#define WRITE 1
pid_t popen2(const char *command, int *infp, int *outfp) {
int p_stdin[2], p_stdout[2];
pid_t pid;
if (pipe(p_stdin) != 0 || pipe(p_stdout) != 0)
return -1;
pid = fork();
if (pid < 0)
return pid;
else if (pid == 0)
{
close(p_stdin[WRITE]);
dup2(p_stdin[READ], READ);
close(p_stdout[READ]);
dup2(p_stdout[WRITE], WRITE);
execl("/bin/sh", "sh", "-c", command, NULL);
perror("execl");
exit(1);
}
if (infp == NULL)
close(p_stdin[WRITE]);
else
*infp = p_stdin[WRITE];
if (outfp == NULL)
close(p_stdout[READ]);
else
*outfp = p_stdout[READ];
return pid;
}
int main() {
pid_t pid;
int in, out;
// Create process
pid = popen2("crafty", &in, &out);
sleep(5);
// Why doesn't kill(pid, SIGKILL) work?
kill(pid + 1, SIGKILL);
while (1);
}
A: I think I got it.
execl("/bin/sh", "sh", "-c", command, NULL);
runs sh and popen2 returns its pid. When you call kill it kills sh, but does not touch sh's child process, command. It is actually a fluke that killing the next pid kills command. This will not always work and depends on race conditions.
If you want to be able to kill your target process, you have to start it directly.
Warning (untested code):
pid_t popen2(const char **command, int *infp, int *outfp) {
int p_stdin[2], p_stdout[2];
pid_t pid;
if (pipe(p_stdin) != 0 || pipe(p_stdout) != 0)
return -1;
pid = fork();
if (pid < 0)
return pid;
else if (pid == 0)
{
close(p_stdin[WRITE]);
dup2(p_stdin[READ], READ);
close(p_stdout[READ]);
dup2(p_stdout[WRITE], WRITE);
execvp(*command, command);
perror("execvp");
exit(1);
}
if (infp == NULL)
close(p_stdin[WRITE]);
else
*infp = p_stdin[WRITE];
if (outfp == NULL)
close(p_stdout[READ]);
else
*outfp = p_stdout[READ];
return pid;
}
and pass command in the form of
char *command[] = {"program", "arg1", "arg2", ..., NULL};
in your particular example:
char *command[] = {"crafty", NULL};
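To verify the point of this answer, here is a minimal, self-contained sketch (using `sleep 60` as a stand-in for `crafty`) that forks, execs the target directly with execvp, and checks that `kill()` on the returned pid really terminates the target. Function names are illustrative:

```c
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork and exec argv[0] directly (no intermediate shell), returning the
 * child's pid. Because there is no shell in between, this pid is the pid
 * of the target program itself. */
static pid_t spawn_direct(char *const argv[]) {
    pid_t pid = fork();
    if (pid == 0) {
        execvp(argv[0], argv);
        perror("execvp");
        _exit(127);
    }
    return pid;
}

/* Demo: spawn "sleep 60", kill it via the returned pid, and report whether
 * the child really died from our SIGTERM. */
static int kill_works(void) {
    char *argv[] = { "sleep", "60", NULL };
    pid_t pid = spawn_direct(argv);
    if (pid < 0)
        return 0;
    usleep(100 * 1000);              /* give the child time to exec */
    if (kill(pid, SIGTERM) != 0)
        return 0;
    int status = 0;
    if (waitpid(pid, &status, 0) != pid)
        return 0;
    return WIFSIGNALED(status) && WTERMSIG(status) == SIGTERM;
}
```

Note the `waitpid` call: without it the killed child would linger as a zombie.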
A: You can use the shell's 'exec' builtin to avoid leaving the intermediate sh process around. Also, popen2 should close the unused ends of the pipes, otherwise they remain open. And if one of the pointers (infp, outfp) is NULL, it is wasteful to create and immediately close the corresponding pipe. Here is the version of popen2 I use in my project:
pid_t popen2(char *command, int *in_fd, int *out_fd) {
int pin[2], pout[2];
pid_t pid;
char cmd[strlen(command)+10];
if (out_fd != NULL) {
if (pipe(pin) != 0) return(-1);
}
if (in_fd != NULL) {
if (pipe(pout) != 0) {
if (out_fd != NULL) {
close(pin[0]);
close(pin[1]);
}
return(-1);
}
}
pid = fork();
if (pid < 0) {
if (out_fd != NULL) {
close(pin[0]);
close(pin[1]);
}
if (in_fd != NULL) {
close(pout[0]);
close(pout[1]);
}
return pid;
}
if (pid==0) {
if (out_fd != NULL) {
close(pin[1]);
dup2(pin[0], 0);
}
if (in_fd != NULL) {
close(pout[0]);
dup2(pout[1], 1);
}
// Using exec makes it possible to kill this process directly
sprintf(cmd, "exec %s", command);
execlp("sh", "sh", "-c", cmd, NULL);
fprintf(stderr, "%s:%d: Exec failed in popen2. ", __FILE__, __LINE__);
perror("Error:");
exit(1);
}
if (in_fd != NULL) {
close(pout[1]);
*in_fd = pout[0];
}
if (out_fd != NULL) {
close(pin[0]);
*out_fd = pin[1];
}
return pid;
}
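The effect of the `exec` prefix can be checked with a small test: because the shell replaces itself with the command, the pid returned by `fork()` is the command's own pid and the signal reaches it directly. A minimal sketch (`sleep 60` is a placeholder command; names are illustrative):

```c
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run `command` through sh -c "exec <command>". The exec builtin replaces
 * the shell with the command, so the pid returned by fork() really is the
 * command's pid. */
static pid_t spawn_exec_sh(const char *command) {
    char cmd[strlen(command) + 6];   /* "exec " + command + '\0' */
    snprintf(cmd, sizeof cmd, "exec %s", command);
    pid_t pid = fork();
    if (pid == 0) {
        execlp("sh", "sh", "-c", cmd, (char *)NULL);
        _exit(127);
    }
    return pid;
}

/* Demo: the returned pid can be signalled directly. */
static int exec_trick_works(void) {
    pid_t pid = spawn_exec_sh("sleep 60");
    if (pid < 0)
        return 0;
    usleep(100 * 1000);              /* let the shell exec the command */
    if (kill(pid, SIGTERM) != 0)
        return 0;
    int status = 0;
    if (waitpid(pid, &status, 0) != pid)
        return 0;
    return WIFSIGNALED(status) && WTERMSIG(status) == SIGTERM;
}
```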
Frontiers of Social Psychology
Five distinguished social psychologists (Barbara Fredrickson, Roy Baumeister, Lisa Feldman Barrett, Philip Zimbardo, Carol Dweck) share thought-provoking insights into open questions and active research issues in social psychology.
Ideas Roadshow is an award-winning initiative offering an expanding series of thought-provoking compilations developed from in-depth conversations between Howard Burton and more than 100 world-leading researchers across the arts and sciences, including 3 Nobel Laureates. Each compilation provides unique perspectives from several top experts on a single theme or topic.
The founder and creator of Ideas Roadshow, Howard Burton, holds a PhD in theoretical physics and an MA in philosophy and was the Founding Director of Perimeter Institute for Theoretical Physics in Canada.
Howard Burton
Barbara Fredrickson, Carol Dweck, Howard Burton, Lisa Feldman Barrett, Philip Zimbardo, Roy Baumeister
Ideas Roadshow
import {
Component,
EventEmitter,
Input,
OnDestroy,
Output,
TemplateRef,
ViewChild,
ViewEncapsulation,
ElementRef,
ChangeDetectionStrategy,
HostBinding,
NgZone
} from '@angular/core';
import { AnimationEvent } from '@angular/animations';
import { coerceBooleanProperty } from '@angular/cdk/coercion';
import { ESCAPE } from '@angular/cdk/keycodes';
import { MdePopoverPositionX, MdePopoverPositionY, MdePopoverTriggerEvent, MdePopoverScrollStrategy } from './popover-types';
import { throwMdePopoverInvalidPositionX, throwMdePopoverInvalidPositionY } from './popover-errors';
import { MdePopoverPanel } from './popover-interfaces';
import { transformPopover } from './popover-animations';
@Component({
selector: 'mde-popover',
templateUrl: './popover.html',
styleUrls: ['./popover.scss'],
changeDetection: ChangeDetectionStrategy.OnPush,
encapsulation: ViewEncapsulation.None,
animations: [
transformPopover
],
exportAs: 'mdePopover'
})
export class MdePopover implements MdePopoverPanel, OnDestroy { // tslint:disable-line:component-class-suffix
@HostBinding('attr.role') role = 'dialog';
/** Settings for popover, view setters and getters for more detail */
private _positionX: MdePopoverPositionX = 'after';
private _positionY: MdePopoverPositionY = 'below';
private _triggerEvent: MdePopoverTriggerEvent = 'hover';
private _scrollStrategy: MdePopoverScrollStrategy = 'reposition';
private _enterDelay = 200;
private _leaveDelay = 200;
private _overlapTrigger = true;
private _disableAnimation = false;
private _targetOffsetX = 0;
private _targetOffsetY = 0;
private _arrowOffsetX = 20;
private _arrowWidth = 8;
private _arrowColor = 'rgba(0, 0, 0, 0.12)';
private _closeOnClick = true;
private _focusTrapEnabled = true;
private _focusTrapAutoCaptureEnabled = true;
/** Config object to be passed into the popover's ngClass */
_classList: {[key: string]: boolean} = {};
// TODO: Write comment description
/** */
public containerPositioning = false;
/** Closing disabled on popover */
public closeDisabled = false;
/** Config object to be passed into the popover's arrow ngStyle */
public popoverPanelStyles: {};
/** Config object to be passed into the popover's arrow ngStyle */
public popoverArrowStyles: {};
/** Config object to be passed into the popover's content ngStyle */
public popoverContentStyles: {};
/** Emits the current animation state whenever it changes. */
_onAnimationStateChange = new EventEmitter<AnimationEvent>();
/** Position of the popover in the X axis. */
@Input('mdePopoverPositionX')
get positionX() { return this._positionX; }
set positionX(value: MdePopoverPositionX) {
if (value !== 'before' && value !== 'after') {
throwMdePopoverInvalidPositionX();
}
this._positionX = value;
this.setPositionClasses();
}
/** Position of the popover in the Y axis. */
@Input('mdePopoverPositionY')
get positionY() { return this._positionY; }
set positionY(value: MdePopoverPositionY) {
if (value !== 'above' && value !== 'below') {
throwMdePopoverInvalidPositionY();
}
this._positionY = value;
this.setPositionClasses();
}
/** Popover trigger event */
@Input('mdePopoverTriggerOn')
get triggerEvent(): MdePopoverTriggerEvent { return this._triggerEvent; }
set triggerEvent(value: MdePopoverTriggerEvent) { this._triggerEvent = value; }
/** Popover scroll strategy */
@Input('mdePopoverScrollStrategy')
get scrollStrategy(): MdePopoverScrollStrategy { return this._scrollStrategy; }
set scrollStrategy(value: MdePopoverScrollStrategy) { this._scrollStrategy = value; }
/** Popover enter delay */
@Input('mdePopoverEnterDelay')
get enterDelay(): number { return this._enterDelay; }
set enterDelay(value: number) { this._enterDelay = value; }
/** Popover leave delay */
@Input('mdePopoverLeaveDelay')
get leaveDelay(): number { return this._leaveDelay; }
set leaveDelay(value: number) { this._leaveDelay = value; }
/** Popover overlap trigger */
@Input('mdePopoverOverlapTrigger')
get overlapTrigger(): boolean { return this._overlapTrigger; }
set overlapTrigger(value: boolean) { this._overlapTrigger = value; }
/** Popover target offset x */
@Input('mdePopoverOffsetX')
get targetOffsetX(): number { return this._targetOffsetX; }
set targetOffsetX(value: number) { this._targetOffsetX = value; }
/** Popover target offset y */
@Input('mdePopoverOffsetY')
get targetOffsetY(): number { return this._targetOffsetY; }
set targetOffsetY(value: number) { this._targetOffsetY = value; }
/** Popover arrow offset x */
@Input('mdePopoverArrowOffsetX')
get arrowOffsetX(): number { return this._arrowOffsetX; }
set arrowOffsetX(value: number) { this._arrowOffsetX = value; }
/** Popover arrow width */
@Input('mdePopoverArrowWidth')
get arrowWidth(): number { return this._arrowWidth; }
set arrowWidth(value: number) { this._arrowWidth = value; }
/** Popover arrow color */
@Input('mdePopoverArrowColor')
get arrowColor(): string { return this._arrowColor; }
set arrowColor(value: string) { this._arrowColor = value; }
/**
* Popover container close on click
* default: true
*/
@Input('mdePopoverCloseOnClick')
get closeOnClick(): boolean { return this._closeOnClick; }
set closeOnClick(value: boolean) { this._closeOnClick = coerceBooleanProperty(value); }
/**
* Disable animations of popover and all child elements
* default: false
*/
@Input('mdePopoverDisableAnimation')
get disableAnimation(): boolean { return this._disableAnimation; }
set disableAnimation(value: boolean) { this._disableAnimation = coerceBooleanProperty(value); }
/**
* Popover focus trap using cdkTrapFocus
* default: true
*/
@Input('mdeFocusTrapEnabled')
get focusTrapEnabled(): boolean { return this._focusTrapEnabled; }
set focusTrapEnabled(value: boolean) { this._focusTrapEnabled = coerceBooleanProperty(value); }
/**
* Popover focus trap auto capture using cdkTrapFocusAutoCapture
* default: true
*/
@Input('mdeFocusTrapAutoCaptureEnabled')
get focusTrapAutoCaptureEnabled(): boolean { return this._focusTrapAutoCaptureEnabled; }
set focusTrapAutoCaptureEnabled(value: boolean) { this._focusTrapAutoCaptureEnabled = coerceBooleanProperty(value); }
/**
* This method takes classes set on the host md-popover element and applies them on the
* popover template that displays in the overlay container. Otherwise, it's difficult
* to style the containing popover from outside the component.
* @param classes list of class names
*/
@Input('class')
set panelClass(classes: string) {
if (classes && classes.length) {
this._classList = classes.split(' ').reduce((obj: any, className: string) => {
obj[className] = true;
return obj;
}, {});
this._elementRef.nativeElement.className = '';
this.setPositionClasses();
}
}
/**
* This method takes classes set on the host md-popover element and applies them on the
* popover template that displays in the overlay container. Otherwise, it's difficult
* to style the containing popover from outside the component.
* @deprecated Use `panelClass` instead.
*/
@Input()
get classList(): string { return this.panelClass; }
set classList(classes: string) { this.panelClass = classes; }
/** Event emitted when the popover is closed. */
@Output() close = new EventEmitter<void>();
@ViewChild(TemplateRef) templateRef: TemplateRef<any>;
constructor(private _elementRef: ElementRef, public zone: NgZone) {
this.setPositionClasses();
}
ngOnDestroy() {
this._emitCloseEvent();
this.close.complete();
}
/** Handle a keyboard event from the popover, delegating to the appropriate action. */
_handleKeydown(event: KeyboardEvent) {
switch (event.keyCode) {
case ESCAPE:
this._emitCloseEvent();
return;
}
}
/**
* This emits a close event to which the trigger is subscribed. When emitted, the
* trigger will close the popover.
*/
_emitCloseEvent(): void {
this.close.emit();
}
/** Close popover on click if closeOnClick is true */
onClick() {
if (this.closeOnClick) {
this._emitCloseEvent();
}
}
/**
* TODO: Refactor when @angular/cdk includes feature I mentioned on github see link below.
* https://github.com/angular/material2/pull/5493#issuecomment-313085323
*/
/** Disables close of popover when leaving trigger element and mouse over the popover */
onMouseOver() {
if (this.triggerEvent === 'hover') {
this.closeDisabled = true;
}
}
/** Enables close of popover when mouse leaving popover element */
onMouseLeave() {
if (this.triggerEvent === 'hover') {
this.closeDisabled = false;
this._emitCloseEvent();
}
}
// TODO: Refactor how styles are set and updated on the component, use best practices.
// TODO: If arrow left and right positioning is requested, see if flex direction can be used to work with order.
/** Sets the current styles for the popover to allow for dynamically changing settings */
setCurrentStyles() {
// TODO: See if arrow position can be calculated automatically and allow override.
// TODO: See if flex order is a better alternative to position arrow top or bottom.
this.popoverArrowStyles = {
'right': this.positionX === 'before' ? (this.arrowOffsetX - this.arrowWidth) + 'px' : '',
'left': this.positionX === 'after' ? (this.arrowOffsetX - this.arrowWidth) + 'px' : '',
'border-top': this.positionY === 'below' ?
this.arrowWidth + 'px solid ' + this.arrowColor : '0px solid transparent',
// The left/right borders are always transparent so the arrow forms a triangle
// (the original `'undefined' === undefined` conditions were always false).
'border-right': this.arrowWidth + 'px solid transparent',
'border-bottom': this.positionY === 'above' ?
this.arrowWidth + 'px solid ' + this.arrowColor :
this.arrowWidth + 'px solid transparent',
'border-left': this.arrowWidth + 'px solid transparent',
};
// TODO: Remove if flex order is added.
this.popoverContentStyles = {
'padding-top': this.overlapTrigger === true ? '0px' : this.arrowWidth + 'px',
'padding-bottom': this.overlapTrigger === true ? '0px' : (this.arrowWidth) + 'px',
'margin-top': this.overlapTrigger === false && this.positionY === 'below' && this.containerPositioning === false ?
-(this.arrowWidth * 2) + 'px' : '0px'
};
}
/**
* It's necessary to set position-based classes to ensure the popover panel animation
* folds out from the correct direction.
*/
setPositionClasses(posX = this.positionX, posY = this.positionY): void {
this._classList['mde-popover-before'] = posX === 'before';
this._classList['mde-popover-after'] = posX === 'after';
this._classList['mde-popover-above'] = posY === 'above';
this._classList['mde-popover-below'] = posY === 'below';
}
}
Q: Java - Serializing a Thread I have a collection of objects that I am trying to serialize. Unfortunately these objects all hold a reference to the controller class that owns them, which also holds the threads of execution. Whenever I try to serialize this collection I get an error that it cannot serialize a thread. Is there any way around this without restructuring my entire setup? I can give more details if that would be helpful.
A: Yes, you make the reference to the controller class transient.
A: You just mark the threads as transient to tell the serialization mechanism that these fields should not be saved along with the rest of that object's state.
So you must mark as transient any field that either cannot be serialized or that you do not want serialized.
A: In each object in the collection the reference to the controller should be
private transient Controller controller = ...
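A minimal sketch of the pattern (class and field names are illustrative, with a bare `Thread` standing in for the controller): the transient field is skipped during serialization and comes back as `null` after deserialization, while the rest of the state survives.

```java
import java.io.*;

// `Worker` stands in for one object in the collection; the transient field
// is the non-serializable reference that serialization must skip.
class Worker implements Serializable {
    private static final long serialVersionUID = 1L;

    int state = 42;
    transient Thread controllerThread = new Thread();

    // Serialize this object to a byte array.
    byte[] toBytes() throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(this);
        }
        return bos.toByteArray();
    }

    // Deserialize a Worker from a byte array; the transient field is null.
    static Worker fromBytes(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return (Worker) ois.readObject();
        }
    }
}
```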
\section{Introduction}
\subsection{Quasinormal Modes}
Optical micro- and nanoresonators, be they plasmonic, photonic or hybrid, enhance and localize the electromagnetic energy at wavelength or subwavelength scales and are key components in many photonic applications. Their optical response is characterized by one or a few resonant features resulting from the excitation of one or a few dominant modes, the natural resonance modes of the resonators.
These modes, conveniently labelled by an integer $m=1,2,\ldots$, are characterized by their electric and magnetic field distributions, $\widetilde{\textbf{E}}_m(\textbf{r})$
and $\widetilde{\textbf{H}}_m(\textbf{r})$. These vectors are solutions of the following eigenvalue boundary problem \cite{LalanneReview}
\begin{equation}
\left \{
\begin{array}{lll}
-i \, \tilde{\omega}_m \varepsilon(\tilde{\omega}_m) \widetilde{\textbf{E}}_m - \nabla \times \widetilde{\textbf{H}}_m & = & 0, \medskip \\
-i \, \tilde{\omega}_m \mu(\tilde{\omega}_m) \widetilde{\textbf{H}}_m + \nabla \times \widetilde{\textbf{E}}_m & = & 0, \medskip \\
+ \mbox{ Boundary conditions} & &
\end{array}
\right.
\label{eq:EigenvalueProb}
\end{equation}
where $\varepsilon(\tilde{\omega}_m)$ and $\mu(\tilde{\omega}_m)$ are respectively the dielectric permittivity and the magnetic permeability; both depend on the position $\textbf{r}$ and the pulsation $\omega$.
The fields $\widetilde{\textbf{E}}_m(\textbf{r})$ have continuous tangent traces across interfaces between subdomains and satisfy the outgoing-wave conditions at infinity. The $\exp(-i\omega t)$ convention for time harmonic fields is assumed. They are often called quasinormal modes (QNMs) to emphasize that their harmonic evolution is characterized by an exponential damping in time (they are the eigenstates of a non-Hermitian operator); that is, their pulsation $\tilde{\omega}_m$ is complex with Im$(\tilde{\omega}_m)<0$.
Micro- and nanoresonators play a leading role in many areas of nanophotonics, from quantum information processing to ultrasensitive biosensing, nonlinear optics, and various optical metasurfaces. This puts strong pressure on the development of QNM theory and of numerical methods that explicitly consider QNMs in the analysis, providing important clues towards the interpretation of the resonator response.
\subsection{Quasinormal Mode expansion of the scattered field}
The scattered field $\left[ \textbf{E}_S(\textbf{r},\omega), \textbf{H}_S(\textbf{r},\omega)\right]$ is solution of time-harmonic Maxwell's equations
\begin{equation*}
\left \{
\begin{array}{lll}
-i \, \omega \, \varepsilon(\omega) \textbf{E}_S - \nabla \times \textbf{H}_S & = & i \omega (\varepsilon(\omega) - \varepsilon_b) \, \textbf{E}_{\mbox{inc}}, \medskip \\
-i \, \omega \, \mu(\omega) \textbf{H}_S + \nabla \times \textbf{E}_S & = & i \omega (\mu(\omega) - \mu_b) \textbf{H}_{\mbox{inc}}, \medskip \\
+ \mbox{ Sommerfeld condition},
\end{array}
\right.
\end{equation*}
where $\textbf{E}_{\mbox{inc}}, \textbf{H}_{\mbox{inc}}$ is the incident field, and $\varepsilon_b, \mu_b$ the background indices. The incident fields $\textbf{E}_{\mbox{inc}}, \textbf{H}_{\mbox{inc}}$ satisfy homogeneous Maxwell's equations with indices $\varepsilon_b, \mu_b$. Let us introduce
$$ \textbf{J} = i \omega (\varepsilon(\omega) - \varepsilon_b) \, \textbf{E}_{\mbox{inc}} $$
and we consider only dielectric media such that $\mu(\omega) = \mu_b = \mu_0$ in the physical domain. As a result, the Maxwell equations that will be considered in the sequel are given as
\begin{equation}
\left \{
\begin{array}{lll}
-i \, \omega \, \varepsilon(\omega) \textbf{E}_S - \nabla \times \textbf{H}_S & = & \textbf{J}, \medskip \\
-i \, \omega \, \mu(\omega) \textbf{H}_S + \nabla \times \textbf{E}_S & = & 0, \medskip \\
+ \mbox{ Sommerfeld condition }.
\end{array}
\right.
\label{eq:MaxwellSystem}
\end{equation}
A way to efficiently compute this scattered field for a large number of pulsations is to expand the solution in the QNM basis:
\begin{equation*}
\left[ \textbf{E}_S(\textbf{r},\omega), \textbf{H}_S(\textbf{r},\omega)\right] = \sum_m \alpha_m
(\omega) \left[ \widetilde{\textbf{E}}_m(\textbf{r},\omega),
\widetilde{\textbf{H}}_m(\textbf{r}
,\omega)\right],
\end{equation*}
where the $\alpha_m$'s are the complex modal excitation coefficients, which measure how much the QNMs are excited by the driving field illuminating the resonator with a real frequency $\omega$. Note that we use a tilde to differentiate the QNM fields from other fields, for instance the scattered or driving fields, and consistently, we will also use a tilde to denote the QNM frequency $\tilde{\omega}_m$, in contrast to the real excitation frequency $\omega$.
There remain some complicated mathematical issues in relation to the actual physical problem, for which the open space is infinite and Maxwell's operators are continuous. For instance, the conditions under which the completeness of the QNM expansions of Eq. (1) is guaranteed are still not fully understood \cite{bonod,gralak}. There are also several known and correct expressions for the $\alpha_m$'s \cite{LalanneReview}, but we do not know which offers the best performance, e.g. the fastest convergence rate towards the actual solution as the number of QNMs retained in the expansion increases.
However, for practical geometries of interest in nanophotonics, the QNMs are computed numerically and it would be unrealistic to expect computing many QNMs over a broad spectral range, ideally in the entire lower half-plane of the complex plane $(Im(\tilde{\omega}_m)<0)$. Rather we have to consider a discretized version of the initial Maxwell's equations and the physical domain is bounded by perfectly-matched layers (PMLs). The discretized operator is a matrix of finite dimension, and its spectrum is composed of a finite number of QNMs (often the relevant ones involved in the resonator dynamics in the spectral range of interest) completed by a large number of PML modes, which have much less physical significance but warrant completeness \cite{Vial,Wei,LalanneReview}.
Efficient QNMs solvers exist for computing and normalizing QNMs and PML modes for various geometries, such as plasmonic crystals, metal gratings and plasmonic nanoantennas \cite{Gras19}; even freeware \cite{Bai13} or improved commercial software packages \cite{Wei} can be used. Thus the important remaining step is the reconstruction problem, i.e. the computation of the modal coefficients $\alpha_m$'s and the reconstruction of the scattered field. In this paper, we focus on material systems whose relative permittivity $\varepsilon(\omega)$ is described by a N-pole Lorentz permittivity (see \cite{LorentzOptic}):
\begin{equation}
\varepsilon(\omega)/\varepsilon_{\infty}=1-\sum_{i=1}^N\omega^2_{p,i}/(\omega^2-\omega^2_{0,i}+i\omega\gamma_i),
\label{eq:DrudeLorentz}
\end{equation}
which may model a large variety of systems with increasing accuracy as the number of poles increases. This model permits the introduction of auxiliary fields in order to linearize the otherwise nonlinear eigenvalue problem. It also respects the causality relation $\bar{\varepsilon}(\omega) = \varepsilon(-\bar{\omega})$, where $\bar{\omega}$ stands for the complex conjugate of $\omega$. The contribution of the free-electron gas of metals can be treated with a Drude permittivity, setting $\omega_{0,i}=0$.
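As a quick numerical illustration, the causality relation can be checked directly on the N-pole model of Eq. \eqref{eq:DrudeLorentz}. In the sketch below the pole parameters are made up for the test and do not correspond to a specific material:

```python
import numpy as np

def lorentz_eps(omega, eps_inf, poles):
    """N-pole Lorentz permittivity; poles is a list of (w_p, w_0, gamma)."""
    s = sum(wp**2 / (omega**2 - w0**2 + 1j * omega * g) for wp, w0, g in poles)
    return eps_inf * (1.0 - s)

# Made-up pole parameters (normalized units), purely illustrative
poles = [(1.0, 0.5, 0.1), (2.0, 1.5, 0.05)]
eps_inf = 2.25
omega = 0.8 + 0.3j

# Causality: conj(eps(omega)) must equal eps(-conj(omega))
lhs = np.conj(lorentz_eps(omega, eps_inf, poles))
rhs = lorentz_eps(-np.conj(omega), eps_inf, poles)
```

The identity holds pole by pole, since the denominator of each term satisfies $\overline{\omega^2-\omega_0^2+i\gamma\omega} = \bar{\omega}^2-\omega_0^2-i\gamma\bar{\omega}$.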
Let us denote $\Omega_{res}$ the domain of the resonator for which $\varepsilon(\omega)$ is different from $\varepsilon_b$ (hence it is the support of the source term $\textbf{J}$).
In \cite{LalanneReview}, a review of the literature surrounding quasinormal modes, an attempt was made to classify the different formulas used to compute the excitation coefficients. At least three different formulas for $\alpha_m$ were reported:
\begin{itemize}
\item The formula 5.11 in \cite{LalanneReview}:
\begin{equation}
\alpha_m = \dfrac{1}{i (\tilde{\omega}_m - \omega) } \int_{\Omega_{res}} \textbf{J}(\textbf{r}) \cdot \tilde{\textbf{E}}_m(\textbf{r})d \textbf{r}
\label{eq:FormulaAlpha}
\end{equation}
\item The formula proposed in \cite{Wei} (equivalent to formula 5.6 in \cite{LalanneReview}):
\begin{equation}
\alpha_m = \int_{\Omega_{res}} (\varepsilon_b - \varepsilon_\infty) \textbf{E}_{inc} \cdot \tilde{\textbf{E}}_m d\Omega + \dfrac{\tilde{\omega}_m}{\tilde{\omega}_m-\omega}\int_{\Omega_{res}} (\varepsilon(\tilde{\omega}_m) - \varepsilon_b) \textbf{E}_{inc} \cdot \tilde{\textbf{E}}_m d\Omega
\label{eq:FormuleWei}
\end{equation}
\item The formula proposed in \cite{Marseillais} (equivalent to formula 5.10 in \cite{LalanneReview}):
\begin{equation}
\alpha_m = \dfrac{\omega}{i \, \tilde{\omega}_m (\tilde{\omega}_m - \omega) } \int_{\Omega_{res}} \textbf{J}(\textbf{r}) \cdot \tilde{\textbf{E}}_m(\textbf{r})d \textbf{r}
\label{eq:FormuleMarseille}
\end{equation}
\end{itemize}
All these formulas hold if the modes $\tilde{\textbf{E}}_m$ are normalized as follows
\begin{equation}
\int_\Omega \dfrac{\partial (\tilde{\omega}_m \, \varepsilon(\tilde{\omega}_m))}{\partial \tilde{\omega}_m} \tilde{\textbf{E}}_m \cdot \tilde{\textbf{E}}_m - \dfrac{ \partial \left(\tilde{\omega_m} \mu(\tilde{\omega}_m) \right)}{\partial \tilde{\omega_m}} \, \tilde{\textbf{H}}_m \cdot \tilde{\textbf{H}}_m d\Omega = 1.
\label{eq:Norm}
\end{equation}
where $\Omega$ is the computational domain. This is the usual normalization \cite{Muljarov18, Sauvan13, Bai13}.
\subsection{Discrete modal expansion}
In this paper, we propose a common formalism based on the discrete Maxwell's equations to obtain these three formulas that we show to be valid for both QNMs and PML modes.
More precisely, when $\varepsilon(\omega)$ is a rational function, auxiliary unknowns can be introduced in order to obtain a linear eigenvalue problem.
After this linearization procedure and after discretization (e.g. with Finite Element Method), the time-harmonic Maxwell's Equations can be written
\begin{equation}
-i\omega \textbf{M}_h \textbf{U}_h + \textbf{K}_h \textbf{U}_h = \textbf{F}_h ,
\label{eq:FormulaFEM}
\end{equation}
where $\textbf{M}_h$ is the mass matrix, $\textbf{K}_h$ is the stiffness matrix, and $\textbf{F}_h$ is the source term (h denotes the mesh size).
$\textbf{U}_h$ is the main unknown that will contain components of $\textbf{E}$ and other auxiliary unknowns introduced to obtain a linear eigenvalue problem.
The matrices $\textbf{M}_h$ and $\textbf{K}_h$ are independent of $\omega$, an example of matrices will be given in section \ref{sec:Core}.
From a discrete point of view, once the discrete linear system \eqref{eq:FormulaFEM} is set, the biorthogonal projection of the unknown $\textbf{U}_h$ provides a unique formula for $\alpha_m$:
\begin{equation}
\alpha_m = \dfrac{1}{i(\tilde{\omega}_m-\omega)} \langle \textbf{F}_h,\textbf{x}^\bot_m\rangle,
\label{eq:AlphaDiscrete}
\end{equation}
where $\textbf{x}^\bot_m$ is the left eigenvector (i.e. the conjugate of the biorthogonal). This biorthogonal projection is obtained by considering the relation \eqref{eq:FormulaFEM} and taking the scalar product with the left eigenvector. Details are given in section \ref{sec:Core}.
$\textbf{x}^\bot_m$ solves the transpose eigenvalue problem
$$ \textbf{K}_h^T \textbf{x}^\bot_m = i \tilde{\omega}_m \textbf{M}_h^T \textbf{x}^\bot_m. $$
In this paper, the convention $\langle x,y\rangle = \sum x_i y_i $ is used. The formula \eqref{eq:AlphaDiscrete} holds if the eigenvectors $\textbf{x}_m$ are normalized such that
\begin{equation}
\langle \textbf{M}_h \textbf{x}_m, \textbf{x}^\bot_m \rangle = 1,
\label{eq:NormDiscrete}
\end{equation}
which is the discrete equivalent of \eqref{eq:Norm}. This result is proven in section \ref{sec:Core}. In that section, the proposed matrices $\textbf{M}_h$ and $\textbf{K}_h$ are symmetric, such that we have
$$ \textbf{x}_m^\bot = \textbf{x}_m .$$
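The discrete formulas \eqref{eq:AlphaDiscrete} and \eqref{eq:NormDiscrete} can be sanity-checked on a small random symmetric pencil $(\textbf{K}_h, \textbf{M}_h)$. The sketch below uses made-up matrices, not an electromagnetic discretization, and compares the direct solve of $(-i\omega \textbf{M}_h + \textbf{K}_h)\textbf{U}_h = \textbf{F}_h$ with the modal reconstruction:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)   # symmetric positive definite "mass" matrix
B = rng.standard_normal((n, n))
K = B + B.T                    # symmetric "stiffness" matrix
F = rng.standard_normal(n)
omega = 1.3

# Direct solve of (-i w M + K) U = F
U_direct = np.linalg.solve(-1j * omega * M + K, F)

# Eigenpairs of K x_m = i w_m M x_m, via the standard problem M^{-1} K x = (i w_m) x
lam, X = np.linalg.eig(np.linalg.solve(M, K))
w_tilde = -1j * lam            # since lam = i * w_m

# Normalize with the unconjugated bilinear form: <M x_m, x_m> = 1
for m in range(n):
    X[:, m] = X[:, m] / np.sqrt(X[:, m] @ M @ X[:, m])

# alpha_m = <F, x_m> / (i (w_m - omega)), then U = sum_m alpha_m x_m
alpha = (X.T @ F) / (1j * (w_tilde - omega))
U_modal = X @ alpha
```

Because the pencil is symmetric, the left eigenvectors coincide with the right ones, exactly as stated above.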
An infinity of formulas can be found by writing different linearizations of Maxwell's equations. Each different linearization will produce a new set of auxiliary unknowns, and consequently a different set of matrices $\textbf{K}_h$ and $\textbf{M}_h$ and right hand side $\textbf{F}_h$. The three aforementioned formulas are obtained as follows:
\begin{itemize}
\item The formula \eqref{eq:FormulaAlpha} is obtained by a direct linearization of system \eqref{eq:MaxwellSystem}. This derivation is detailed in section \ref{sec:Core}.
\item The formula \eqref{eq:FormuleWei} is obtained by choosing a different source $\textbf{F}_h$. This is the object of section \ref{sec:ComparWei}.
\item The formula \eqref{eq:FormuleMarseille} is obtained by starting from the second-order formulation of Maxwell's equations with curl-curl operator. This derivation is detailed in section \ref{sec:ComparMarseille}.
\end{itemize}
Other formulas exist \cite{LalanneReview} but will not be analyzed here. More recently, a newly developed formula is presented in \cite{Tong19}. An infinite set of formulas can be found by splitting the source on the different fields. For the linearization given in section \ref{sec:Core}, by writing the generalized source term as $\textbf{F} = [\textbf{f}_1, \textbf{f}_2, \textbf{f}_3, \textbf{f}_4]^T$, we can find the following generalization of the modal excitation coefficient:
\begin{equation}
\alpha_m = \dfrac{1}{i(\tilde{\omega}_m - \omega)}\int_{\Omega_{res}} \textbf{f}_1\cdot \tilde{\textbf{E}}_m + \textbf{f}_2 \cdot \tilde{\textbf{H}}_m + (\varepsilon(\tilde{\omega}_m)-\varepsilon_\infty)(\textbf{f}_3-i\tilde{\omega}_m\textbf{f}_4)\cdot \tilde{\textbf{E}}_m d\Omega
\label{eq:GeneralFormula}
\end{equation}
provided that
$$
-i \omega \textbf{f}_1 + i\omega (\varepsilon(\omega)-\varepsilon_\infty)(i\omega \textbf{f}_4 - \textbf{f}_3) - \nabla \times \left( \dfrac{1}{\mu_0} \textbf{f}_2 \right) = -i \omega \textbf{J} .
$$
The derivation is detailed in section \ref{sec:SplitSource}. The modal solution is given as
\begin{equation}
\textbf{E}_{S}^{\mbox{modal}} = \sum_{m=1}^N \alpha_m \tilde{\textbf{E}}_m
\label{eq:modal_expansion}
\end{equation}
where $N$ is the number of modes retained. The four formulas \eqref{eq:FormulaAlpha}, \eqref{eq:FormuleMarseille}, \eqref{eq:FormuleWei} and \eqref{eq:GeneralFormula} for the coefficients $\alpha_m$ provide a field $\textbf{E}_{S}^{\mbox{modal}}$ that converges to the scattered field $\textbf{E}_S$ when $N$ tends to the size of the matrix $\textbf{M}_h$. Their convergence rates, however, may differ.
In section \ref{sec:Degenerate}, we explain how degenerate (i.e. multiple) eigenvalues can be treated correctly with a simple Gram-Schmidt orthogonalization procedure with respect to the matrix $\textbf{M}_h$. In most papers in the literature, eigenvalues are assumed to be simple. However, as the numerical results presented in section \ref{sec:Numeric} show, there can be a non-negligible number of degenerate eigenvalues.
The computational domain has to be truncated, e.g. with Perfectly Matched Layers. In order to keep real matrices $\textbf{M}_h$ and $\textbf{K}_h$ (and complex-conjugate eigenvalues), dispersive PMLs have been chosen. The indices $\varepsilon(\omega), \mu(\omega)$ are rational functions of $\omega$; they are given by formula \eqref{eq:EpsMuPML} in 3-D. In section \ref{sec:PML}, we detail how Maxwell's equations are linearized with respect to $\omega$, leading to non-symmetric matrices $\textbf{M}_h$ and $\textbf{K}_h$. Because the final eigenvalue problem solved by $\tilde{\textbf{E}}_m$ is symmetric, the left eigenvector $\textbf{x}^\bot_m$ can be computed directly from the right eigenvector $\textbf{x}_m$; explicit formulas are given in section \ref{sec:PML}. The normalization \eqref{eq:Norm} is also valid for dispersive PMLs. The computational domain $\Omega$ involved in the integral includes both the physical domain and the PMLs.
Finally, numerical results are presented in section \ref{sec:Numeric} in order to compare the accuracy of the three formulas \eqref{eq:FormulaAlpha}, \eqref{eq:FormuleWei} and \eqref{eq:FormuleMarseille}.
\section{Eigenmode expansion for first-order formulation of Maxwell's equations}
\label{sec:Core}
In this section, we note $\textbf{E}, \textbf{H}$ the solutions of Maxwell's system \eqref{eq:MaxwellSystem}.
\subsection{Discrete expansion}
For the sake of illustration, we consider an isotropic (to simplify) medium with a dispersive permittivity described by the single-pole Lorentz model,
$$\varepsilon(\omega)=\varepsilon_\infty \left(1-\dfrac{\omega_p^2}{\omega^2-\omega_0^2+i\gamma\omega} \right)$$
and a nondispersive permeability $\mu(\omega)=\mu_0$. We introduce two auxiliary fields, the polarization $\textbf{P}=-\varepsilon_\infty \dfrac{\omega_p^2}{\omega^2-\omega_0^2+i\gamma\omega}\textbf{E}$ and $\textbf{Q}=-i\omega\textbf{P}$. With elementary algebraic manipulations, we can reformulate Maxwell's system \eqref{eq:MaxwellSystem} as the following source problem
\begin{equation}
\left \{
\begin{array}{lll}
-i \, \omega \, \varepsilon_\infty \, \textbf{E} + \textbf{Q} - \nabla \times \textbf{H} & = & \textbf{J} \medskip \\
-i \, \omega \, \mu_0 \, \textbf{H} + \nabla \times \textbf{E} & = & 0 \medskip \\
-i\omega\textbf{P} - \textbf{Q} &= & 0 \medskip \\
i\omega\textbf{Q}-\gamma\textbf{Q}-\omega_0^2\textbf{P} +\varepsilon_\infty \omega_p^2 \textbf{E}& = & 0 \medskip \\
+ \mbox{ Sommerfeld condition}
\end{array}
\right.
\label{eq:MaxwellSystemPQ}
\end{equation}
In order to obtain a symmetric system, we multiply the second equation by $-1$, the third equation by $\omega_0^2 /(\varepsilon_\infty \omega_p^2)$ and the fourth by $1/(\varepsilon_\infty \omega_p^2)$.
$$
\left \{
\begin{array}{lll}
-i \, \omega \, \varepsilon_\infty \, \textbf{E} + \textbf{Q} - \nabla \times \textbf{H} & = & \textbf{J} \medskip \\
+i \, \omega \, \mu_0 \, \textbf{H} - \nabla \times \textbf{E} & = & 0 \medskip \\
-i \omega \dfrac{\omega_0^2}{\varepsilon_\infty\omega^2_p}\textbf{P}-\dfrac{\omega_0^2}{\varepsilon_\infty\omega^2_p}\textbf{Q} &= & 0 \medskip \\
\dfrac{i \omega}{\varepsilon_\infty \omega^2_p} \textbf{Q} - \dfrac{\gamma}{\varepsilon_\infty \omega_p^2}\textbf{Q} - \dfrac{\omega_0^2}{\varepsilon_\infty \omega_p^2}\textbf{P} + \textbf{E} & = & 0 \medskip \\
+ \mbox{ Sommerfeld condition}
\end{array}
\right.
$$
We can write this system using the linear operators $\textbf{K}$ and $\textbf{M}$
$$ \textbf{K} \textbf{U} - i\omega \textbf{M} \textbf{U} = \textbf{F} $$
with
$$
\textbf{K}= \left[
\begin{array}{cccc}
0 & -\nabla\times & 0 & 1 \\
-\nabla\times & 0 & 0 & 0 \\
0 & 0 & 0 & -\dfrac{\omega_0^2}{\varepsilon_\infty \omega_p^2} \\
1 & 0 & -\dfrac{\omega_0^2}{\varepsilon_\infty \omega_p^2} & - \dfrac{\gamma}{\varepsilon_\infty \omega_p^2}
\end{array}
\right],
$$
$$
\textbf{M}= \left[
\begin{array}{cccc}
\varepsilon_\infty & 0 & 0 & 0 \\
0 & -\mu_0 & 0 & 0 \\
0 & 0 & \dfrac{\omega_0^2}{\varepsilon_\infty\omega_p^2} & 0 \\
0 & 0 & 0 & - \dfrac{1}{\varepsilon_\infty\omega_p^2}
\end{array}
\right],
\quad \textbf{F} = \left[\begin{array}{l} \textbf{J} \\ 0 \\ 0 \\ 0 \end{array} \right]
$$
~\\
After discretization, Maxwell's system is then given as
\begin{equation}
-i \omega \textbf{M}_h \textbf{U}_h + \textbf{K}_h \textbf{U}_h = \textbf{F}_h
\label{eq:DiscreteMaxwell}
\end{equation}
where
$ \textbf{U}_h = \left( \textbf{E}_h, \textbf{H}_h, \textbf{P}_h, \textbf{Q}_h \right), $
and $\textbf{E}_h, \textbf{H}_h, \textbf{P}_h, \textbf{Q}_h$ contain the components of $\textbf{E}, \textbf{H}, \textbf{P}, \textbf{Q}$ on basis functions. The source term $\textbf{F}_h$ is given as
$$ (\textbf{F}_h)_i = \int_{\Omega_{res}} \textbf{J}(\textbf{r}) \cdot \boldsymbol{\varphi}_i(\textbf{r}) \, d\textbf{r} $$
where $\boldsymbol{\varphi}_i$ are basis functions for unknown $\textbf{E}_h$. Matrices $\textbf{M}_h$ and $\textbf{K}_h$ are given in appendix \ref{app:FemMatrices}.
The right eigenvectors $\textbf{x}_m$ solve the eigenproblem
\begin{equation}
\textbf{K}_h \textbf{x}_m = \lambda_m \textbf{M}_h \textbf{x}_m
\label{eq:DiscreteEigen}
\end{equation}
where the eigenvalue $\lambda_m$ is linked with $\tilde{\omega}_m$ by
$$ \lambda_m = i \tilde{\omega}_m $$
Assuming that $\textbf{M}_h^{-1} \textbf{K}_h$ is diagonalizable, we have
$$ \textbf{M}_h^{-1} \textbf{K}_h = \textbf{V} \textbf{D} \textbf{V}^{-1} $$
where $\textbf{D}$ is a diagonal matrix with eigenvalues $\lambda_m$ on the diagonal and $\textbf{V}$ the matrix whose columns are formed with right eigenvectors $\textbf{x}_m$. The left eigenvectors of $\textbf{M}_h^{-1} \textbf{K}_h$ denoted $\textbf{w}_m$ are the rows of matrix $\textbf{V}^{-1}$. Since $\textbf{V} \textbf{V}^{-1} = \textbf{I}$, vectors $\textbf{x}_m$ and $\textbf{w}_m$ are biorthogonal
$$ \langle \textbf{x}_m , \textbf{w}_n \rangle = \delta_{m, n} $$
The left eigenvectors $\textbf{w}_m$ can also be found by searching right eigenvectors of the transpose of $\textbf{M}_h^{-1} \textbf{K}_h$. Since $\textbf{K}_h$ and $\textbf{M}_h$ are symmetric, we have
$$ (\textbf{M}_h^{-1} \textbf{K}_h)^T = \textbf{K}_h \textbf{M}_h^{-1} $$
Hence $\textbf{w}_m$ solves the following eigenvalue problem
$$ \textbf{K}_h \textbf{M}_h^{-1} \textbf{w}_m = \lambda_m \textbf{w}_m $$
By introducing $\textbf{x}^\bot_m = \textbf{M}_h^{-1} \textbf{w}_m$, we obtain
$$ \textbf{K}_h \textbf{x}^\bot_m = \lambda_m \textbf{M}_h \textbf{x}^\bot_m $$
$\textbf{x}^\bot_m$ is the left eigenvector of generalized eigenproblem \eqref{eq:DiscreteEigen}.
If $\lambda_m$ is a simple eigenvalue, $\textbf{x}^\bot_m$ is collinear with $\textbf{x}_m$ since they solve the same eigenvalue problem. In order to have $\textbf{x}^\bot_m = \textbf{x}_m$, the eigenvector $\textbf{x}_m$ must be normalized such that
\begin{equation}
\langle \textbf{M}_h \textbf{x}_m, \textbf{x}_m \rangle = 1
\end{equation}
The solution $\textbf{U}_h$ is expanded with right eigenvectors $\textbf{x}_m$ (they form a basis since the matrix is diagonalizable):
$$ \textbf{U}_h = \sum_m \alpha_m \textbf{x}_m $$
By injecting this expansion into \eqref{eq:DiscreteMaxwell} and using \eqref{eq:DiscreteEigen}, we obtain
$$ \sum_m \alpha_m (- i \, \omega + i \, \tilde{\omega}_m) \textbf{M}_h \textbf{x}_m = \textbf{F}_h $$
The modal coefficient $\alpha_m$ is directly obtained by taking the scalar product $\langle \cdot, \cdot \rangle$ with the left eigenvector $\textbf{x}^\bot_m$
$$ \alpha_m (-i \, \omega + i \, \tilde{\omega}_m) = \langle \textbf{F}_h, \textbf{x}^\bot_m \rangle $$
Since $\textbf{x}^\bot_m = \textbf{x}_m$, we obtain
\begin{equation}
\alpha_m = \dfrac{1}{i(\tilde{\omega}_m - \omega)} \langle \textbf{F}_h, \textbf{x}_m \rangle
\label{eq:AlphaDiscrete2}
\end{equation}
which is the result announced in the introduction: the expansion coefficient depends only on the QNM itself and not on a separately computed left eigenvector. This important result provides analyticity, which was not obtained in the related work \cite{Vial}, and which was derived in a different way in \cite{Wei}, using the divergence theorem and the continuous operator rather than the discretized one.
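The discrete chain of arguments above can be checked numerically on a small random symmetric pencil. The sketch below (plain NumPy; sizes and values are arbitrary stand-ins for $\textbf{K}_h$ and $\textbf{M}_h$) normalizes the right eigenvectors so that $\langle \textbf{M}_h \textbf{x}_m, \textbf{x}_m \rangle = 1$, checks the resulting biorthogonality, and reconstructs the solution of \eqref{eq:DiscreteMaxwell} from the coefficients \eqref{eq:AlphaDiscrete2}:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# Small random complex-symmetric stand-ins for K_h and M_h.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
K = A + A.T                        # symmetric (not Hermitian)
B = rng.standard_normal((n, n))
M = B @ B.T + n * np.eye(n)        # symmetric, well conditioned

# Right eigenvectors of K x = lambda M x, with lambda = i * omega_tilde.
lam, V = np.linalg.eig(np.linalg.solve(M, K))
# Normalize so that <M x_m, x_m> = x_m^T M x_m = 1 (bilinear, no conjugation).
for m in range(n):
    V[:, m] /= np.sqrt(V[:, m] @ M @ V[:, m])

# Symmetric pencil: left eigenvectors equal right ones, so V^T M V = I.
G = V.T @ M @ V
assert np.allclose(G, np.eye(n), atol=1e-8)

# Modal reconstruction of the solution of (K - i w M) U = F.
w = 1.234
F = rng.standard_normal(n) + 1j * rng.standard_normal(n)
alpha = (V.T @ F) / (lam - 1j * w)   # alpha_m = <F, x_m> / (i (w_m - w))
U_modal = V @ alpha
U_direct = np.linalg.solve(K - 1j * w * M, F)
assert np.allclose(U_modal, U_direct, atol=1e-6)
```

Note that the scalar product is the bilinear one (no complex conjugation), which is what makes the left and right eigenvectors coincide for a complex-symmetric pencil.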
\subsection{Link with continuous expansion}
The formula \eqref{eq:AlphaDiscrete2} is the discrete equivalent of \eqref{eq:FormulaAlpha}
since
$$ \langle \textbf{F}_h, \textbf{x}_m \rangle = \sum_i x_{m, i} \int_{\Omega_{res}} \textbf{J}(\textbf{r}) \cdot \boldsymbol{\varphi}_i(\textbf{r}) \, d\textbf{r} $$
where $x_{m, i}$ is the $i$-th component of $\textbf{x}_m$. By swapping the sum and the integral, we obtain
$$\langle \textbf{F}_h, \textbf{x}_m \rangle = \int_{\Omega_{res}} \textbf{J}(\textbf{r}) \cdot \textbf{x}_m(\textbf{r}) \, d\textbf{r} $$
For numerical experiments, it is preferable to perform a scalar product as presented in formula \eqref{eq:AlphaDiscrete2} rather than approximating this integral. With the same arguments, we have the following equality
$$ \langle \textbf{M}_h \textbf{x}_m , \textbf{x}_m \rangle = \int_{\Omega} \varepsilon_e \tilde{\textbf{E}}_m \cdot \tilde{\textbf{E}}_m - \mu_0 \tilde{\textbf{H}}_m \cdot \tilde{\textbf{H}}_m + \dfrac{\omega_0^2}{\varepsilon_\infty \omega_p^2} \tilde{\textbf{P}}_m \cdot \tilde{\textbf{P}}_m - \dfrac{1}{\varepsilon_\infty \omega_p^2} \tilde{\textbf{Q}}_m \cdot \tilde{\textbf{Q}}_m \, d\Omega $$
where
$$ \varepsilon_e = \left \{ \begin{array}{l}
\varepsilon_\infty \, \mbox{ in } \Omega_{res} \\
\varepsilon_b, \mbox{ elsewhere. }
\end{array} \right. ,
\quad \textbf{x}_m = \left[ \begin{array}{c} \tilde{\textbf{E}}_m \\ \tilde{\textbf{H}}_m \\ \tilde{\textbf{P}}_m
\\ \tilde{\textbf{Q}}_m \end{array} \right]. $$
Since $\tilde{\textbf{P}}_m = -\varepsilon_\infty \omega_p^2 / (\tilde{\omega}_m^2 + i \gamma \tilde{\omega}_m - \omega_0^2) \tilde{\textbf{E}}_m$ and $\tilde{\textbf{Q}}_m = -i \tilde{\omega}_m \tilde{\textbf{P}}_m$, we get
$$ \dfrac{\omega_0^2}{\varepsilon_\infty \, \omega_p^2} \tilde{\textbf{P}}_m \cdot \tilde{\textbf{P}}_m - \dfrac{1}{\varepsilon_\infty \omega_p^2} \tilde{\textbf{Q}}_m \cdot \tilde{\textbf{Q}}_m = \varepsilon_\infty \omega_p^2
\dfrac{\left(\tilde{\omega}_m^2 + \omega_0^2 \right) }{\left( \tilde{\omega}_m^2 + i \gamma \tilde{\omega}_m - \omega_0^2\right)^2} \tilde{\textbf{E}}_m \cdot \tilde{\textbf{E}}_m $$
Since we have
$$ \dfrac{\partial \varepsilon(\omega)}{\partial \omega} = \dfrac{\omega_p^2 \varepsilon_\infty (2 \omega + i \gamma)}{\left(\omega^2 - \omega_0^2 + i \gamma \omega \right)^2} $$
we obtain
$$ \dfrac{\partial \left( \tilde{\omega}_m \varepsilon(\tilde{\omega}_m) \right)}{ \partial \tilde{\omega}_m} = \left \{ \begin{array}{l}
\varepsilon_\infty + \varepsilon_\infty \omega_p^2 \dfrac{\tilde{\omega}_m^2 + \omega_0^2}{\left( \tilde{\omega}_m^2 + i \gamma \tilde{\omega}_m - \omega_0^2 \right)^2 }, \mbox{ in } \Omega_{res} \medskip \\
\varepsilon_b, \mbox{ otherwise } \end{array} \right.
$$
As a result, we have proven that
$$ \langle \textbf{M}_h \textbf{x}_m, \textbf{x}_m \rangle = \int_{\Omega} \dfrac{\partial \left( \tilde{\omega}_m \varepsilon(\tilde{\omega}_m) \right) }{\partial \tilde{\omega}_m} \tilde{\textbf{E}}_m \cdot \tilde{\textbf{E}}_m - \mu_0 \tilde{\textbf{H}}_m \cdot \tilde{\textbf{H}}_m d \Omega. $$
This relation proves that the normalization \eqref{eq:NormDiscrete} is the discrete equivalent of \eqref{eq:Norm}. Again, for the sake of simplicity, the relation \eqref{eq:NormDiscrete} is preferred to normalize discrete eigenvectors.
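As a quick sanity check of the derivative used above, the closed-form expression $\varepsilon_\infty + \varepsilon_\infty \omega_p^2 (\omega^2+\omega_0^2)/(\omega^2 - \omega_0^2 + i\gamma\omega)^2$ can be compared with a finite-difference approximation of $\partial(\omega\varepsilon(\omega))/\partial\omega$ at a complex (QNM-like) frequency. The Lorentz parameter values below are hypothetical:

```python
# Check the closed form of d(w * eps(w))/dw for the single-pole Lorentz
# model against a central finite difference (hypothetical parameters).
eps_inf, wp, w0, gamma = 2.0, 1.3, 0.9, 0.1

def eps(w):
    return eps_inf * (1 - wp**2 / (w**2 - w0**2 + 1j*gamma*w))

def d_weps_closed(w):
    D = w**2 - w0**2 + 1j*gamma*w
    return eps_inf + eps_inf * wp**2 * (w**2 + w0**2) / D**2

w = 1.4 - 0.05j      # a complex, QNM-like frequency
h = 1e-6             # real step; valid since w -> w*eps(w) is analytic
fd = ((w + h)*eps(w + h) - (w - h)*eps(w - h)) / (2*h)
assert abs(fd - d_weps_closed(w)) < 1e-6
```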
\begin{remark}
The normalization can be written in terms of the unknown $\tilde{\textbf{E}}_m$ only.
By using the relation $\tilde{\textbf{H}}_m = \dfrac{1}{i \tilde{\omega}_m \mu_0} \nabla \times \tilde{\textbf{E}}_m$ and the variational formulation satisfied by $\tilde{\textbf{E}}_m$ with only Dirichlet or Neumann boundary conditions:
$$ -\tilde{\omega}_m^2 \int_\Omega \varepsilon(\tilde{\omega}_m) \tilde{\textbf{E}}_m \cdot \tilde{\textbf{E}}_m \, d \Omega
+ \int_\Omega \dfrac{1}{\mu_0} \nabla \times \tilde{\textbf{E}}_m \cdot \nabla \times \tilde{\textbf{E}}_m \, d\Omega = 0, $$
we obtain that
$$ - \int_\Omega \mu_0 \, \tilde{\textbf{H}}_m \cdot \tilde{\textbf{H}}_m \, d\Omega = \int_\Omega \varepsilon(\tilde{\omega}_m) \tilde{\textbf{E}}_m \cdot \tilde{\textbf{E}}_m \, d\Omega. $$
As a result the normalization can be written as
$$ \langle \textbf{M}_h \textbf{x}_m, \textbf{x}_m \rangle = \int_\Omega \dfrac{\partial \left( \tilde{\omega}_m \varepsilon(\tilde{\omega}_m) \right) }{\partial \tilde{\omega}_m} \tilde{\textbf{E}}_m \cdot \tilde{\textbf{E}}_m
+ \varepsilon(\tilde{\omega}_m) \tilde{\textbf{E}}_m \cdot \tilde{\textbf{E}}_m d\Omega. $$
\end{remark}
\section{Derivation of other formulas and issues}
\subsection{Derivation of formula of \cite{Wei}}
\label{sec:ComparWei}
To obtain formula \eqref{eq:FormulaAlpha}, we first wrote Maxwell's equations directly for the scattered field $\textbf{E}_S(\textbf{r},\omega), \textbf{H}_S(\textbf{r},\omega)$ and then introduced the auxiliary fields $\textbf{P}$ and $\textbf{Q}$. In the aforementioned paper \cite{Wei},
Maxwell's equations are first written for the total field, and the auxiliary unknowns $\textbf{P}$ and $\textbf{Q}$ are introduced at this step. Hence the unknowns $\textbf{E}, \textbf{H}, \textbf{P}, \textbf{Q}$ solve the system \eqref{eq:MaxwellSystemPQ} with $\textbf{J} = 0$. As a second step, the equations solved by the incident field (homogeneous Maxwell's equations with indices $\varepsilon_b$ and $\mu_0$) are subtracted, using the relations
$$ \left[ \textbf{E}(\textbf{r},\omega), \textbf{H}(\textbf{r},\omega)\right] = \left[ \textbf{E}_S(\textbf{r},\omega) + \textbf{E}_{\mbox{inc}}(\textbf{r},\omega), \; \textbf{H}_S(\textbf{r},\omega) + \textbf{H}_{\mbox{inc}}(\textbf{r},\omega)\right]$$
to obtain the system solved by the scattered field
\begin{equation}
\left \{
\begin{array}{lll}
-i \, \omega \, \varepsilon_\infty \, \textbf{E}_S + \textbf{Q}_S - \nabla \times \textbf{H}_S & = & i\omega(\varepsilon_\infty - \varepsilon_b)\textbf{E}_{\mbox{inc}} \medskip \\
+i \, \omega \, \mu_0 \, \textbf{H}_S - \nabla \times \textbf{E}_S & = & 0 \medskip \\
-i \omega \dfrac{\omega_0^2}{\varepsilon_\infty\omega^2_p}\textbf{P}_S-\dfrac{\omega_0^2}{\varepsilon_\infty\omega^2_p}\textbf{Q}_S &= & 0 \medskip \\
\dfrac{i \omega}{\varepsilon_\infty \omega^2_p} \textbf{Q}_S - \dfrac{\gamma}{\varepsilon_\infty \omega_p^2}\textbf{Q}_S - \dfrac{\omega_0^2}{\varepsilon_\infty \omega_p^2}\textbf{P}_S + \textbf{E}_S & = & -\textbf{E}_{\mbox{inc}} \medskip \\
+ \mbox{ Sommerfeld condition}
\end{array}
\right.
\label{eq:SystemWei}
\end{equation}
Unlike the equations considered in section \ref{sec:Core}, we can see that the source term on the right hand side of the equations is no longer confined to the first equation.
The coefficient $\alpha_m$ becomes:
$$\alpha_m = \int_{\Omega_{res}} (\varepsilon_b - \varepsilon_\infty) \textbf{E}_{\mbox{inc}} \cdot \tilde{\textbf{E}}_m d\Omega + \dfrac{\tilde{\omega}_m}{\tilde{\omega}_m-\omega}\int_{\Omega_{res}} (\varepsilon(\tilde{\omega}_m) - \varepsilon_b) \textbf{E}_{\mbox{inc}} \cdot \tilde{\textbf{E}}_m d\Omega .$$
It is important to notice that the systems \eqref{eq:SystemWei} and \eqref{eq:MaxwellSystemPQ} provide exactly the same numerical solution $\textbf{E}_S$. Only the auxiliary fields $\textbf{P}$ and $\textbf{Q}$ differ, which is why the source $\textbf{F}_h$ differs between the two approaches and two different formulas are obtained for $\alpha_m$. Other formulas for $\alpha_m$ can be found by choosing a different distribution of the source over the four equations. This is the subject of the next subsection.
\subsection{Generalized Sources}
\label{sec:SplitSource}
Let us split the source term $\textbf{J}$ into a set of artificial sources denoted $\textbf{f}_1, \textbf{f}_2, \textbf{f}_3, \textbf{f}_4$.
$$
\left \{
\begin{array}{lll}
-i \, \omega \, \varepsilon_\infty \, \textbf{E} + \textbf{Q} - \nabla \times \textbf{H} & = & \textbf{f}_1 \medskip \\
+i \, \omega \, \mu_0 \, \textbf{H} - \nabla \times \textbf{E} & = & \textbf{f}_2 \medskip \\
-i \omega \dfrac{\omega_0^2}{\varepsilon_\infty\omega^2_p}\textbf{P}-\dfrac{\omega_0^2}{\varepsilon_\infty\omega^2_p}\textbf{Q} &= & \textbf{f}_3 \medskip \\
\dfrac{i \omega}{\varepsilon_\infty \omega^2_p} \textbf{Q} - \dfrac{\gamma}{\varepsilon_\infty \omega_p^2}\textbf{Q} - \dfrac{\omega_0^2}{\varepsilon_\infty \omega_p^2}\textbf{P} + \textbf{E} & = & \textbf{f}_4 \medskip \\
+ \mbox{ Boundary conditions}
\end{array}
\right.
$$
By eliminating the unknowns \textbf{H}, \textbf{P}, and \textbf{Q}, we obtain the following equation for \textbf{E}:
$$
-\omega^2 \varepsilon(\omega) \textbf{E} + \nabla \times \left( \dfrac{1}{\mu_0} \nabla \times \textbf{E} \right) = -i\omega \textbf{f}_1 + \dfrac{i\omega \varepsilon_\infty \omega_p^2}{-\omega^2-i\omega\gamma+\omega_0^2}(i\omega \textbf{f}_4 - \textbf{f}_3) - \nabla \times \left( \dfrac{1}{\mu_0} \textbf{f}_2 \right)
$$
which is equivalent to the standard Maxwell's equations:
$$
-\omega^2 \varepsilon(\omega) \textbf{E} + \nabla \times \left( \dfrac{1}{\mu_0} \nabla \times \textbf{E} \right) = -i\omega \textbf{J}
$$
as soon as
$$
-i \omega \textbf{f}_1 + \dfrac{i\omega \varepsilon_\infty \omega_p^2}{-\omega^2-i\omega\gamma+\omega_0^2}(i\omega \textbf{f}_4 - \textbf{f}_3) - \nabla \times \left( \dfrac{1}{\mu_0} \textbf{f}_2 \right) = -i \omega \textbf{J}.
$$
By choosing different splittings of the source (i.e. different functions $\textbf{f}_1, \textbf{f}_2, \textbf{f}_3, \textbf{f}_4$ that satisfy the relationship above), we will obtain different formulas for $\alpha_m$. The modal solution obtained with these different formulas (see equation \eqref{eq:modal_expansion}) will converge towards the same electric field $\textbf{E}_S$ when the number of modes is increased.
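This equivalence can be illustrated on a zero-dimensional toy version of the symmetrized system, in which the curl operators are replaced by multiplication with a constant $c$ (all values below are arbitrary). The compatibility constraint then reads $-i\omega f_1 + \varepsilon_\infty\omega_p^2(\omega^2 f_4 + i\omega f_3)/(\omega^2-\omega_0^2+i\gamma\omega) - (c/\mu_0) f_2 = -i\omega J$, which is the relation above specialized to the toy. Two admissible splittings yield the same $E$ and $H$, while the auxiliary unknowns differ:

```python
import numpy as np

# Zero-dimensional toy: curl -> multiplication by a constant c.
# All parameter values are arbitrary (hypothetical).
eps_inf, wp, w0, gamma, mu0, c = 2.0, 1.3, 0.9, 0.1, 1.0, 0.7
w = 1.7                                  # driving frequency
Dp = w**2 - w0**2 + 1j*gamma*w           # Lorentz denominator

a = w0**2 / (eps_inf * wp**2)
# Symmetrized operators of the split-source system.
K = np.array([[0.0, -c,  0.0,  1.0],
              [-c,  0.0, 0.0,  0.0],
              [0.0, 0.0, 0.0, -a],
              [1.0, 0.0, -a,  -gamma/(eps_inf*wp**2)]], dtype=complex)
M = np.diag([eps_inf, -mu0, a, -1.0/(eps_inf*wp**2)]).astype(complex)

J = 1.0 + 0.5j
# Splitting A: the whole source in the first equation.
FA = np.array([J, 0, 0, 0], dtype=complex)
# Splitting B: arbitrary pieces h and k in the last two equations,
# compensated in f1 so that the compatibility constraint holds.
h, k = 0.3 - 0.2j, -0.4 + 0.1j
f1 = J + eps_inf*wp**2/Dp * (k - 1j*w*h)
FB = np.array([f1, 0, k, h], dtype=complex)

UA = np.linalg.solve(K - 1j*w*M, FA)
UB = np.linalg.solve(K - 1j*w*M, FB)
# E and H agree; the auxiliary fields P, Q differ between splittings.
assert abs(UA[0] - UB[0]) < 1e-10 and abs(UA[1] - UB[1]) < 1e-10
assert abs(UA[2] - UB[2]) > 1e-3
```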
\subsection{Derivation of formula in \cite{Marseillais}}
\label{sec:ComparMarseille}
In this section, we propose a different linearization of the problem by starting from the second-order formulation. With this alternative linearization, we obtain the formula \eqref{eq:FormuleMarseille} for the coefficients $\alpha_m$. Let us start from the second-order formulation of Maxwell's equations
$$-\omega^2 \varepsilon(\omega) \textbf{E} + \nabla\times \left(\dfrac{1}{\mu_0}\nabla\times \textbf{E}\right)=-i\omega \textbf{J} .$$
In order to linearize this equation, let us introduce the field $\textbf{E}' = -i \omega \textbf{E}$ and the auxiliary fields $\textbf{P} = \left( \varepsilon(\omega)-\varepsilon_\infty \right) \textbf{E}'$ and $\textbf{Q} = -i \omega \textbf{P}$. We obtain the following system of linear equations:
$$
\left\{
\begin{array}{lll}
-i \, \omega \textbf{E} -\textbf{E}' & = & 0 \\
-i \, \omega \varepsilon_\infty \textbf{E}' + \textbf{Q} + \nabla \times \left(\dfrac{1}{\mu_0}\nabla\times \textbf{E}\right) & = & - i \, \omega \textbf{J} \\
-i \, \omega \textbf{P} - \textbf{Q} & = & 0 \medskip \\
-i \omega \textbf{Q} + \gamma \textbf{Q} + \omega_0^2 \textbf{P} - \varepsilon_\infty
\omega_p^2 \textbf{E}' & = & 0
\end{array}
\right. ,
$$
which gives the following stiffness and mass operators $\textbf{K}$ and $\textbf{M}$ for the vector $\textbf{U} = [\textbf{E}, \textbf{E}', \textbf{P}, \textbf{Q}]^T$:
$$
\textbf{K}= \left[
\begin{array}{cccc}
0 & -1 & 0 & 0 \\
\dfrac{1}{\mu_0} \nabla\times \nabla\times & 0 & 0 & 1 \\
0 & 0 & 0 & -1 \\
0 & -\varepsilon_\infty \omega_p^2 & \omega_0^2 & \gamma
\end{array}
\right],
$$
$$
\textbf{M} = \left[
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & \varepsilon_\infty & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{array}
\right] .
$$
As a result, Maxwell's equations are rewritten as:
$$
(-i\, \omega \textbf{M} + \textbf{K}) \textbf{U} = \textbf{F} ,
$$
where
$$ \textbf{F} = [ 0, -i \omega \textbf{J}, 0, 0] $$
is the source term. After discretization, we have the following discrete system
$$
(-i \omega \textbf{M}_h + \textbf{K}_h) \textbf{U}_h = \textbf{F}_h .$$
The matrices $\textbf{M}_h, \textbf{K}_h$ are not detailed here, but are different from matrices
$\textbf{M}_h$ and $\textbf{K}_h$ given in section \ref{sec:Core}.
Note that the discrete solution $\textbf{E}_h$ is exactly the same
with this formulation as with the formulation presented in section \ref{sec:Core}.
The right eigenvectors $\textbf{x}_m$ solve the eigenvalue problem
$$\textbf{K}_h \textbf{x}_m = i \tilde{\omega}_m \textbf{M}_h \textbf{x}_m $$
while the left eigenvectors $\textbf{x}^\bot_m$ solve the adjoint eigenvalue problem
$$ \textbf{K}_h^T \textbf{x}^\bot_m = i \tilde{\omega}_m \textbf{M}_h^T \textbf{x}^\bot_m. $$
Since we have
$$
\textbf{K}^T= \left[
\begin{array}{cccc}
0 & \dfrac{1}{\mu_0} \nabla\times \nabla\times & 0 & 0 \\
-1 & 0 & 0 & -\varepsilon_\infty \omega_p^2 \\
0 & 0 & 0 & \omega_0^2 \\
0 & 1 & -1 & \gamma
\end{array}
\right],
$$
$$
\textbf{M}^T = \textbf{M}
$$
we obtain the following system of equations for the biorthogonal eigenvectors ($\textbf{x}^\bot_m = [\textbf{E}_\bot, \textbf{E}'_\bot, \textbf{P}_\bot, \textbf{Q}_\bot ]$):
$$
\left\{
\begin{array}{lll}
-i \, \tilde{\omega}_m \textbf{E}_\bot +\nabla\times\left(\dfrac{1}{\mu_0}\nabla\times \textbf{E}'_\bot\right) & = & 0 \medskip \\
-i \, \tilde{\omega}_m \varepsilon_\infty \textbf{E}'_\bot -\textbf{E}_\bot - \varepsilon_\infty \omega_p^2 \textbf{Q}_\bot & = & 0 \medskip \\
-i \, \tilde{\omega}_m \textbf{P}_\bot + \omega_0^2 \textbf{Q}_\bot & = & 0 \medskip \\
-i \, \tilde{\omega}_m \textbf{Q}_\bot + \gamma \textbf{Q}_\bot + \textbf{E}'_\bot - \textbf{P}_\bot & = & 0,
\end{array}
\right.
$$
By eliminating the other variables, we can show that $\textbf{E}'_\bot$ satisfies
$$
-\tilde{\omega}_m^2\varepsilon(\tilde{\omega}_m)\textbf{E}'_\bot + \nabla \times \left( \dfrac{1}{\mu_0} \nabla \times \textbf{E}'_\bot \right) = 0 ,
$$
and consequently:
$$
\left\{
\begin{array}{lll}
\textbf{E}'_\bot & = & \tilde{\textbf{E}}_m \medskip \\
\textbf{E}_\bot & = & -i\tilde{\omega}_m\varepsilon(\tilde{\omega}_m)\tilde{\textbf{E}}_m \medskip \\
\textbf{P}_\bot & = & \dfrac{\omega_0^2}{\omega^2_0-i\gamma\tilde{\omega}_m-\tilde{\omega}_m^2} \tilde{\textbf{E}}_m \medskip \\
\textbf{Q}_\bot & = & \dfrac{i\tilde{\omega}_m}{\omega^2_0-i\gamma\tilde{\omega}_m-\tilde{\omega}_m^2} \tilde{\textbf{E}}_m, \\
\end{array}
\right.
$$
where $\tilde{\textbf{E}}_m$ is the $\textbf{E}$-component of the right eigenvector $\textbf{x}_m$.
We can now obtain the excitation coefficient:
$$
\alpha_m = \dfrac{1}{i \left(\tilde{\omega}_m - \omega \right) } \dfrac{\langle \textbf{F}_h, \textbf{x}^\bot_m \rangle}{\langle \textbf{M}_h \textbf{x}_m, \textbf{x}^\bot_m \rangle} = \dfrac{-i \omega \displaystyle \int_{\Omega_{res}} \textbf{J}(\textbf{r}) \cdot \tilde{\textbf{E}}_m(\textbf{r}) \, d \textbf{r} } {i(\tilde{\omega}_m-\omega) \, N_m},
$$
where the coefficient $N_m$ appears since we choose the normalization \eqref{eq:Norm} of the first order formulation. $N_m$ is given as
$$
N_m = \langle \textbf{M}_h \textbf{x}_m, \textbf{x}^\bot_m \rangle = \int_\Omega \tilde{\textbf{E}}_m \cdot \textbf{E}_\bot - i \tilde{\omega}_m \, \varepsilon_\infty \tilde{\textbf{E}}_m \cdot \textbf{E}'_\bot + \dfrac{\varepsilon_\infty \, \omega_p^2}{\omega^2_0-i\gamma\tilde{\omega}_m-\tilde{\omega}_m^2} \left( - i \tilde{\omega}_m \tilde{\textbf{E}}_m \cdot \textbf{P}_\bot - \tilde{\omega}_m^2 \tilde{\textbf{E}}_m\cdot \textbf{Q}_\bot \right) d \Omega.
$$
By substituting $\textbf{E}_\bot, \textbf{E}'_\bot, \textbf{P}_\bot, \textbf{Q}_\bot$ by the expressions above, we obtain
$$
N_m = - i \tilde{\omega}_m \left[ \int_\Omega \varepsilon(\tilde{\omega}_m) \tilde{\textbf{E}}_m \cdot \tilde{\textbf{E}}_m + \varepsilon_\infty \tilde{\textbf{E}}_m \cdot \tilde{\textbf{E}}_m + \dfrac{\varepsilon_\infty \omega_p^2}{(\omega^2_0-i\gamma\tilde{\omega}_m-\tilde{\omega}_m^2)^2} (\omega_0^2 \tilde{\textbf{E}}_m \cdot \tilde{\textbf{E}}_m + \tilde{\omega}_m^2 \tilde{\textbf{E}}_m\cdot \tilde{\textbf{E}}_m) \, d\Omega \right].
$$
We recognize the normalization used by the first order formulation multiplied by $-i \tilde{\omega}_m$. As a result, if $\tilde{\textbf{E}}_m$ is normalized by \eqref{eq:Norm}, we obtain that
$$ N_m = -i \tilde{\omega}_m, $$
which gives us this expression for the excitation coefficient:
$$
\alpha_m = \dfrac{\omega}{i\tilde{\omega}_m(\tilde{\omega}_m-\omega)}\int_{\Omega_{res}} \textbf{J} \cdot \tilde{\textbf{E}}_m d\Omega.
$$
We recognize the formula \eqref{eq:FormuleMarseille}.
\subsection{Treatment of degenerate eigenvalues}
\label{sec:Degenerate}
A set of degenerate modes $\{ \textbf{x}_k \}_{m_1\leq k \leq m_2}$ consists of solutions of the eigenvalue problem associated with the same eigenfrequency $\tilde{\omega}_{m_1}$. Degenerate eigenvectors do not necessarily form an orthogonal sub-basis with respect to $\textbf{M}_h$. However, a sub-basis that is orthogonal with respect to $\textbf{M}_h$ can be constructed from the set of degenerate modes with the Gram-Schmidt process of algorithm \ref{algo:GramSchmidt}.
\begin{algorithm}
\caption{Algorithm to apply Gram-Schmidt orthogonalization to vectors $\textbf{x}_m$}
\label{algo:GramSchmidt}
\begin{algorithmic}
\FOR{m=$m_1$ to $m_2$}
\STATE{Initialize $\textbf{y} = \textbf{x}_m$}
\FOR{j = $m_1$ to $m-1$}
\STATE{Compute $\alpha = \langle \textbf{M}_h \textbf{x}_m, \textbf{x}^\bot_j \rangle$ }
\STATE{Substitute $\textbf{y}$ by $\textbf{y} - \alpha \textbf{x}_j$}
\ENDFOR
\STATE{Compute left eigenvector $\textbf{y}^\bot$ from right eigenvector $\textbf{y}$ with formula \eqref{eq:LeftEigenVecPML3D} }
\STATE{Substitute $\textbf{x}_m$ by $\textbf{y} / \sqrt{\langle \textbf{M}_h \textbf{y}, \textbf{y}^\bot \rangle}$}
\STATE{Store $\textbf{x}^\bot_m = \textbf{y}^\bot / \sqrt{\langle \textbf{M}_h \textbf{y}, \textbf{y}^\bot \rangle} $}
\ENDFOR
\end{algorithmic}
\end{algorithm}
By applying this procedure, the formula \eqref{eq:AlphaDiscrete} holds for degenerate eigenvalues with normalization \eqref{eq:NormDiscrete}. This process can also be done with continuous eigenmodes by replacing $\langle \textbf{M}_h \textbf{x}_m, \textbf{x}^\bot_j \rangle$ by
$$
\int_\Omega \dfrac{\partial (\tilde{\omega}_m \, \varepsilon(\tilde{\omega}_m))}{\partial \tilde{\omega}_m} \tilde{\textbf{E}}_m \cdot \tilde{\textbf{E}}_j - \dfrac{ \partial \left(\tilde{\omega}_m \mu(\tilde{\omega}_m) \right)}{\partial \tilde{\omega}_m} \, \tilde{\textbf{H}}_m \cdot \tilde{\textbf{H}}_j \, d\Omega .
$$
Here $\mu$ depends on $\omega$ inside the PML layers, which are detailed in the next subsection.
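In the symmetric case ($\textbf{x}^\bot_m = \textbf{x}_m$), the procedure of algorithm \ref{algo:GramSchmidt} can be sketched in a few lines of NumPy. The pencil below is built with a double eigenvalue on purpose; all sizes and parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# Symmetric pencil (K, M) with a double eigenvalue lam = 2 by construction.
B = rng.standard_normal((n, n))
M = B @ B.T + n * np.eye(n)          # symmetric positive definite
L = np.linalg.cholesky(M)            # M = L L^T
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
D = np.diag([2.0, 2.0, 1.0, 3.0, 4.0, 5.0])
K = L @ Q @ D @ Q.T @ L.T            # K x = lam M x with x = inv(L^T) Q[:,i]

Linv_T = np.linalg.inv(L.T)
x1, x2 = Linv_T @ Q[:, 0], Linv_T @ Q[:, 1]
# A deliberately non-M-orthogonal basis of the eigenspace of lam = 2:
basis = [x1, 0.6 * x1 + x2]

# Gram-Schmidt with respect to M (symmetric case: left = right vectors).
ortho = []
for y in basis:
    y = y.copy()
    for xj in ortho:
        y = y - (xj @ M @ y) * xj    # remove the component along xj
    y = y / np.sqrt(y @ M @ y)       # normalize so <M y, y> = 1
    ortho.append(y)

G = np.array([[xi @ M @ xj for xj in ortho] for xi in ortho])
assert np.allclose(G, np.eye(2), atol=1e-10)
# The orthogonalized vectors are still eigenvectors for lam = 2.
for y in ortho:
    assert np.allclose(K @ y, 2.0 * M @ y, atol=1e-8)
```

In the nonsymmetric (PML) case, the only change is that each projection and normalization pairs a right vector with the corresponding left vector, as written in the algorithm.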
\subsection{PML}
\label{sec:PML}
In this section, we describe how dispersive PMLs are handled.
The damping coefficients $\sigma_x$, $\sigma_y$ and $\sigma_z$ inside a PML where $x > x_0$, $y > y_0$ or $z>z_0$ are parabolic:
$$
\sigma_1 = \sigma_x = \dfrac{3\, \text{log}(1000)}{2a^3} (x-x_0)^2 v_{max} \, \sigma
$$
$$
\sigma_2 = \sigma_y = \dfrac{3\, \text{log}(1000)}{2a^3} (y-y_0)^2 v_{max} \, \sigma
$$
$$
\sigma_3 = \sigma_z = \dfrac{3\, \text{log}(1000)}{2a^3} (z-z_0)^2 v_{max} \, \sigma.
$$
The coefficient $\sigma$ serves to adjust the reflection coefficient of the PML, and $v_{max}$ is the speed of the wave inside the PML.
With this dispersive-PML formulation, the matrices $\textbf{M}_h, \textbf{K}_h$ are no longer symmetric.
We provide relations between the left eigenvector $\textbf{x}_m^\bot$ and right eigenvector $\textbf{x}_m$. As a result we do not need to compute the eigenvectors of the adjoint problem, since we can compute $\textbf{x}^\bot_m$ directly from the right eigenvector $\textbf{x}_m$.
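The mechanism can be sketched for a generic nonsymmetric pencil before specializing to PMLs. Note that SciPy returns left eigenvectors in the conjugated convention $\textbf{v}^H \textbf{K} = \lambda \textbf{v}^H \textbf{M}$, so for real matrices the transpose-left eigenvectors used here are their complex conjugates (random matrices of arbitrary size):

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(3)
n = 7
K = rng.standard_normal((n, n))                   # non-symmetric stiffness
M = rng.standard_normal((n, n)) + n * np.eye(n)   # non-symmetric, invertible

lam, VL, VR = eig(K, M, left=True, right=True)
# SciPy's left vectors satisfy VL^H K = lam VL^H M; the transpose-left
# eigenvectors (K^T x = lam M^T x) are therefore their complex conjugates.
XL = VL.conj()

# Biorthogonality: <M x_n, x_m^perp> vanishes for m != n.
G = XL.T @ M @ VR
assert np.max(np.abs(G - np.diag(np.diag(G)))) < 1e-8

# Modal solution of (K - i w M) U = F using the left eigenvectors.
w = 0.7
F = rng.standard_normal(n)
alpha = (XL.T @ F) / ((lam - 1j * w) * np.diag(G))
U_modal = VR @ alpha
U_direct = np.linalg.solve(K - 1j * w * M, F)
assert np.allclose(U_modal, U_direct, atol=1e-6)
```

The point of the closed-form relations derived below is precisely to avoid the extra left-eigenvector computation that this generic sketch performs.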
\subsubsection{2-D case}
\label{sec:PML2D}
In the transverse electric (TE) case, we have
$$
\textbf{E} = u \, \textbf{e}_z, \quad \textbf{H} = v_x \, \textbf{e}_x + v_y \, \textbf{e}_y .$$
We use a split formulation of the PMLs where $u=u_1+u_2$ inside the PML. The unknowns $u_1$, $u_2$, and $\textbf{v}=(v_x, v_y)$ are solutions of:
$$
\left\{
\begin{array}{l}
-i \omega \, \varepsilon_b \, u_1 + \varepsilon_b \, \sigma_x \, u_1 - \dfrac{\partial v_x}{\partial x} = 0 \medskip \\
-i \omega \, \varepsilon_b \, u_2 + \varepsilon_b \, \sigma_y u_2 - \dfrac{\partial v_y}{\partial y} = 0 \medskip \\
-i\omega \mu_b \, \textbf{v} + \mu_b \left(\begin{array}{cc}
\sigma_x & 0 \\
0 & \sigma_y
\end{array} \right)
\textbf{v} - \nabla(u_1+u_2) = 0\\
u = 0 \quad \text{ at the border of the PML}.
\end{array}
\right.
$$
We consider the unknowns:
$$
u=u_1+u_2
$$
$$
u^* = u_1 - u_2.
$$
$u$, $u^*$, and $\textbf{v}$ are solutions of the following system:
\begin{equation}
\left\{\begin{array}{l}
-i \omega \, \varepsilon_b \, u + \varepsilon_b \dfrac{\sigma_x+\sigma_y}{2} u + \varepsilon_b \dfrac{\sigma_x-\sigma_y}{2} u^* - \text{div} \, \textbf{v} \, = \, 0 \medskip \\
-i\omega \, \varepsilon_b \, u^* + \varepsilon_b \dfrac{\sigma_x+\sigma_y}{2} u^* + \varepsilon_b \dfrac{\sigma_x-\sigma_y}{2} u - \left(\dfrac{\partial v_x}{\partial x}-\dfrac{\partial v_y}{\partial y} \right) \, = \, 0 \medskip \\
-i\omega \mu_b \, \textbf{v} + \mu_b \, \sigma \, \textbf{v} - \nabla \, u \, = \, 0.
\end{array}\right.
\end{equation}
The unknown $u^*$ exists only in the PML domain. In the physical domain, only unknowns $u$ and $\textbf{v}$ are present, and we solve
$$
\left\{\begin{array}{l}
-i \omega \, \varepsilon(\omega) \, u - \text{div} \, \textbf{v} \, = \, -i \omega j \medskip \\
-i\omega \mu_b \, \textbf{v} + \mu_b \, \sigma \, \textbf{v} - \nabla \, u \, = \, 0,
\end{array}\right.
$$
where $j$ is the source term. Of course, additional unknowns $p$ and $q$ are added in $\Omega_{res}$ to linearize the system in $\omega$.
After discretization, we obtain:
$$
-i \omega \textbf{M}_h \textbf{U}_h + \textbf{K}_h \textbf{U}_h = \textbf{F}_h.
$$
The matrix $\textbf{M}_h$ is symmetric, while $\textbf{K}_h$ is not.
The left eigenvector $\textbf{x}^\bot_m$ and the right eigenvector $\textbf{x}_m$ are written as:
$$
\textbf{x}^\bot_m = \left( \begin{array}{c} \textbf{u}^\bot_m \\ \textbf{u}^{*, \bot}_m \\ \textbf{v}^\bot_m \end{array} \right),
\quad \textbf{x}_m = \left( \begin{array}{c} \textbf{u}_m \\ \textbf{u}^*_m \\ \textbf{v}_m \end{array} \right).
$$
We have obtained the following relations ($\lambda_m = i \tilde{\omega}_m$ is the eigenvalue associated with $\textbf{x}_m$ and $\textbf{x}^\bot_m$):
$$
\textbf{u}^\bot_m = \left( 1- \dfrac{\sigma_x+\sigma_y}{2\lambda_m} \right) \textbf{u}_m
$$
$$
\textbf{u}^{*,\bot}_m = \left( \dfrac{\sigma_x-\sigma_y}{2\lambda_m} \right) \textbf{u}_m
$$
and
$$
\textbf{v}^\bot_m = \left[ \begin{array}{c}
\dfrac{1}{\mu_b \left(-\lambda_m + \sigma_x \right)} \left( \dfrac{\partial\textbf{u}^\bot_m}{\partial x} + \dfrac{\partial\textbf{u}^{*,\bot}_m}{\partial x} \right) \\
\dfrac{1}{\mu_b \left(-\lambda_m + \sigma_y \right)} \left( \dfrac{\partial\textbf{u}^\bot_m}{\partial y} - \dfrac{\partial\textbf{u}^{*,\bot}_m}{\partial y} \right)
\end{array}\right].
$$
The proof is given in appendix \ref{app:BiorthoPML2D}.
\subsubsection{3-D case}
In the PMLs we have:
$$
\left \{
\begin{array}{l}
-i \omega \varepsilon_b \textbf{E} + \varepsilon_b \textbf{T}_{2,3,1}\textbf{E} - \nabla \times \textbf{H}^* = 0 \bigskip \\
-i \omega \mu_b \textbf{H} + \mu_b \textbf{T}_{2,3,1}\textbf{H} + \nabla \times \textbf{E}^* = 0 \bigskip \\
-i \omega \textbf{E}^* + \textbf{T}_{3,1,2}\textbf{E}^* + i \omega \textbf{E} -\textbf{T}_{1,2,3}\textbf{E}= 0 \bigskip \\
-i \omega \textbf{H}^* + \textbf{T}_{3,1,2}\textbf{H}^* + i \omega \textbf{H}-\textbf{T}_{1,2,3}\textbf{H}= 0 \bigskip \\
\textbf{E} \times \textbf{n} = 0 \quad \mbox{at the border of the PML, }
\end{array}
\right.
$$
with $\textbf{T}_{i,j,k}=\left( \begin{array}{ccc}
\sigma_i & 0 & 0\\
0 & \sigma_j & 0\\
0 & 0 & \sigma_k
\end{array}
\right)$. The unknowns $\textbf{E}^*$ and $\textbf{H}^*$ exist only in the PML domain. In
the physical domain, there are only unknowns $\textbf{E}$ and $\textbf{H}$ (supplemented by unknowns $\textbf{P}$ and $\textbf{Q}$
in $\Omega_{res}$) that solve \eqref{eq:MaxwellSystemPQ}.
After discretization, we obtain:
$$ -i \omega \textbf{M}_h \textbf{U}_h + \textbf{K}_h \textbf{U}_h = \textbf{F}_h.$$
The matrices $\textbf{M}_h$ and $\textbf{K}_h$ are not symmetric (see appendix \ref{app:BiorthoPML3D}). If we denote by $\textbf{x}_m = \left( \textbf{E}_m, \textbf{H}_m, \textbf{E}_m^*, \textbf{H}_m^* \right)$ the right eigenvector, the left eigenvector $\textbf{x}_m^\bot$ is given as:
\begin{equation}
\textbf{x}_m^\bot = \left( \begin{array}{c}
\textbf{E}_m^* \\
-\textbf{H}_m^* \medskip \\
\left( 1+\dfrac{\textbf{T}_{2,3,1}-\textbf{T}_{3,1,2}}{-\lambda_m + \textbf{T}_{3,1,2}} \right) \varepsilon_b \textbf{E}_m \medskip \\
-\left( 1+\dfrac{\textbf{T}_{2,3,1}-\textbf{T}_{3,1,2}}{-\lambda_m + \textbf{T}_{3,1,2}} \right)\mu_b \textbf{H}_m \medskip \\
\end{array} \right).
\label{eq:LeftEigenVecPML3D}
\end{equation}
The proof is given in appendix \ref{app:BiorthoPML3D}. Straightforward computations give that $$ \langle \textbf{M}_h \textbf{x}_m, \textbf{x}_m^\bot \rangle = \int_\Omega \dfrac{\partial (\tilde{\omega}_m \, \varepsilon(\tilde{\omega}_m))}{\partial \tilde{\omega}_m} \tilde{\textbf{E}}_m \cdot \tilde{\textbf{E}}_m - \dfrac{ \partial \left(\tilde{\omega}_m \mu(\tilde{\omega}_m) \right)}{\partial \tilde{\omega}_m} \, \tilde{\textbf{H}}_m \cdot \tilde{\textbf{H}}_m \, d\Omega $$
with
\begin{equation}
\varepsilon(\omega) = \varepsilon_b \dfrac{ \left(-i \omega + \textbf{T}_{2, 3, 1}\right) \left(-i \omega + \textbf{T}_{3, 1, 2} \right) } { - i \omega \left( -i \omega + \textbf{T}_{1, 2, 3} \right) }, \quad
\mu(\omega) = \mu_b \dfrac{ \left(-i \omega + \textbf{T}_{2, 3, 1}\right) \left(-i \omega + \textbf{T}_{3, 1, 2} \right) } { - i \omega \left( -i \omega + \textbf{T}_{1, 2, 3} \right) },
\label{eq:EpsMuPML}
\end{equation}
inside the PML. We recover the normalization \eqref{eq:Norm} announced in the introduction.
\subsection{Case of metals: $\omega_0 = 0$}
In section \ref{sec:Core}, the third equation of \eqref{eq:MaxwellSystemPQ} has been multiplied by $\omega_0^2/(\varepsilon_\infty \omega_p^2)$, which vanishes when $\omega_0 = 0$. This case is nevertheless of practical interest, since it corresponds to metallic materials. The linear system \eqref{eq:DiscreteMaxwell} is then no longer invertible, because some rows of $\textbf{K}_h$ and $\textbf{M}_h$ are null. For metals, we therefore cannot symmetrize the linear system, and the calculations made in section \ref{sec:Core} are no longer valid.
However, if we consider the nonsymmetric system \eqref{eq:MaxwellSystemPQ},
$$
\textbf{K}=
\left [
\begin{array}{cccc}
0 &-\nabla \times &0 &0 \\
-\nabla \times &0 &0 &0 \\
0 & 0 & 0 & -1 \\
\varepsilon_\infty \omega_p^2 & 0 & \omega_0^2 & -\gamma
\end{array} \right ], \ \textbf{M} =
\left[
\begin{array}{cccc}
\varepsilon_\infty & 0 &0 &0 \\
0 & -\mu_0 &0 &0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1
\end{array}
\right ],
$$
the left eigenvector $\textbf{x}_m^\bot$ is not equal to $\textbf{x}_m$, but is given as
$$\textbf{x}_m^\bot = \left[ \begin{array}{c}
\tilde{\textbf{E}}_m\\
\tilde{\textbf{H}}_m \\
\dfrac{\omega_0^2}{\varepsilon_\infty \omega^2_p} \tilde{\textbf{P}}_m \\
\dfrac{\tilde{\textbf{Q}}_m}{\varepsilon_\infty \omega^2_p}
\end{array}
\right ].$$
As a result, we still obtain the modal excitation coefficient \eqref{eq:FormulaAlpha} and the normalization \eqref{eq:Norm}.
\section{Numerical results}
\label{sec:Numeric}
The numerical results have been obtained with the software \texttt{Montjoie} \cite{Montjoie}, which computes the finite element matrices $\textbf{M}_h$ and $\textbf{K}_h$ given in section \ref{sec:Core}.
In this section, all the eigenvalues of the matrix $\textbf{M}_h^{-1} \textbf{K}_h$ are computed with LAPACK.
We represent adimensionalized pulsations $\omega_m$ defined as
$$ \omega_m = \dfrac{\tilde{\omega}_m}{\omega_{\mbox{adim}}} $$
where
$$ \omega_{\mbox{adim}} = \dfrac{c_0}{L_0}, \quad L_0 = 10^{-7} \, \mbox{m}. $$
$c_0$ is the speed of light and $L_0$ the characteristic length (here 100 nm).
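As a quick numerical sanity check of this adimensionalization (a sketch; we assume $c_0$ is the SI speed of light in vacuum), the constant $\omega_{\mbox{adim}}$ and the dimensionless Lorentz resonance used in the examples below can be computed as:

```python
# Sanity check of the adimensionalization (assumption: c0 is the
# SI speed of light in vacuum; L0 = 100 nm as in the text).
c0 = 299792458.0          # speed of light (m/s)
L0 = 1e-7                 # characteristic length (m)
omega_adim = c0 / L0      # ~3.0e15 rad/s

# Lorentz resonance frequency of the disk/sphere examples below
omega_0 = 4.572e15        # rad/s
print(omega_0 / omega_adim)  # ~1.525 (dimensionless pulsation)
```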
All the eigenvalues such that $|\omega_m|< 10^{-3}$ are dropped in order to remove static modes. Since the eigenvalues come in complex conjugate pairs,
only the eigenvalues (and associated eigenvectors) such that $Re(\tilde{\omega}_m) \ge 0$ are stored. The eigenvalues such that
$\lambda_m = \sigma_i$ (where $\sigma_i$ is the damping function in the PMLs) are also excluded, since the auxiliary fields $\textbf{H}, \textbf{E}^*, \textbf{H}^*$
cannot be eliminated (division by zero) for these eigenvalues. In practice, we have observed that the associated eigenvectors have null components (at machine precision)
for the unknown $\textbf{E}_m$ and do not contribute to the field $\textbf{E}_S$.
Finally, if two pulsations $\omega_i$, $\omega_j$ are close enough (i.e. $|\omega_i-\omega_j|<10^{-6}$), they are considered degenerate.
In this section, the three formulas \eqref{eq:FormulaAlpha} (denoted as Usual), \eqref{eq:FormuleWei} (denoted as Alternative Source) and \eqref{eq:FormuleMarseille} (denoted as Order2) will be compared. Since the source term $\textbf{F}_h$ vanishes inside the PML layers, the formula \eqref{eq:AlphaDiscrete} reduces to
$$
\alpha_m = \dfrac{1}{i(\tilde{\omega}_m-\omega)} \langle \textbf{F}_h,\textbf{x}_m\rangle.
$$
The two formulas \eqref{eq:FormulaAlpha} and \eqref{eq:FormuleWei} are implemented by taking a different source term, as explained in sections \ref{sec:Core} and \ref{sec:ComparWei}. For the formula \eqref{eq:FormuleMarseille}, we did not implement the matrices $\textbf{M}_h$ and $\textbf{K}_h$ introduced in section \ref{sec:ComparMarseille}; instead, we use the discrete equivalent of \eqref{eq:FormuleMarseille}:
$$
\alpha_m = \dfrac{\omega}{i \tilde{\omega}_m \, (\tilde{\omega}_m-\omega)} \langle \textbf{F}_h,\textbf{x}_m\rangle,
$$
with the source term $\textbf{F}_h$ of section \ref{sec:Core}.
\subsection{2-D disk}
We first look at the case of the field diffracted by a dielectric disk of radius 100 nm, whose material is modeled by a Lorentz model with
$$ \varepsilon_\infty = 6, \quad \omega_0 = 4.572 \cdot 10^{15} \text{ rad/s}, \quad \omega_p = \dfrac{\omega_0}{2}, \quad \gamma = 1.332\cdot 10^{15} \text{ rad/s}. $$
The physical computational domain is 400 nm long and 200 nm wide (see figure \ref{fig:Maillage2D}). PML layers are added to the mesh of figure \ref{fig:Maillage2D}; their thickness is equal to 100 nm, with two cells in the direction of the PMLs. The PML damping $\sigma$ is taken equal to $3$.
\begin{figure}[!h]
\centerline{\includegraphics[height=6cm]{MaillageDemiDisque.png}}
\caption{Mesh used for the scattering of a disk}
\label{fig:Maillage2D}
\end{figure}
The field driving the system is a TE-polarized plane wave, propagating along the x-axis at the real frequency $\omega$. As a result, only the component $E_z$ is nonzero; it is discretized with continuous finite elements (here $\mathbb{Q}_4$ on the mesh of figure \ref{fig:Maillage2D}).
\begin{figure}[!h]
\centerline{\includegraphics[height=3.5cm]{SolTE_PmlQNM0.jpg} \includegraphics[height=3.5cm]{SolTE_PmlQNM10.jpg}}
\centerline{\includegraphics[height=3.5cm]{SolTE_PmlQNM20.jpg} \includegraphics[height=3.5cm]{SolTE_PmlQNM30.jpg}}
\caption{Real part of the scattered field for $\omega = \omega_0/2$ (top-left), $\omega = \omega_0$ (top-right), $\omega = 3 \, \omega_0/2$ (bottom-left) and $\omega = 2 \, \omega_0$ (bottom-right).}
\label{fig:SolQNM}
\end{figure}
The solution is plotted for four frequencies in figure \ref{fig:SolQNM}. For the maximal frequency $\omega = 2 \omega_0$, we have computed a relative $L^2$ error of 0.164\% between the numerical solution and the analytical solution (computed with Hankel functions). We compute the field diffracted by the disk for 31 angular frequencies $\omega$ evenly spaced in the interval $[\omega_0/2, 2 \omega_0]$. We represent in figure \ref{fig:SpectrumDisque} the adimensionalized pulsations $\omega_m$.
\begin{figure}[!h]
\captionsetup[subfigure]{justification=centering}
\begin{subfigure}[h]{0.35 \textwidth}
\centerline{\includegraphics[height=4.2cm]{SpectreDisque.pdf}}
\caption{Whole spectrum}
\label{fig:SpectrumDisque}
\end{subfigure}
\begin{subfigure}[h]{0.65 \textwidth}
\centerline{\includegraphics[height=5cm]{SpectreDisqueAnal.pdf}}
\caption{Part of the spectrum. \\ In red, analytical QNM pulsations.}
\label{fig:SpectrumDisqueAnal}
\end{subfigure}
\caption{Numerical adimensionalized pulsations $\omega_m$ for the disk (blue points).}
\end{figure}
We can compare these pulsations with the analytical QNMs of the disk (computed with Bessel functions).
The comparison is displayed in figure \ref{fig:SpectrumDisqueAnal}. We see that the QNMs are correctly computed; we also observe other modes, which we call PML modes, as well as two accumulation points corresponding to a pole and a zero of $\varepsilon(\omega)$.
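The position of these two accumulation points can be estimated directly from the material parameters. The short check below assumes the standard Lorentz permittivity $\varepsilon(\omega) = \varepsilon_\infty \left( 1 + \omega_p^2 / (\omega_0^2 - \omega^2 - i \gamma \omega) \right)$ (the precise convention is fixed earlier in the paper; this sketch assumes it). The pole and zero with positive real part, divided by $\omega_{\mbox{adim}}$, agree with the accumulation points reported for the sphere in the next subsection, which uses the same material:

```python
import cmath

# Lorentz parameters of the disk/sphere examples (rad/s)
omega_0 = 4.572e15
omega_p = omega_0 / 2
gamma = 1.332e15
omega_adim = 299792458.0 / 1e-7   # c0 / L0

def root_positive_real(omega_res_sq):
    """Root of w**2 + 1j*gamma*w - omega_res_sq = 0 with Re(w) > 0."""
    return (cmath.sqrt(4 * omega_res_sq - gamma ** 2) - 1j * gamma) / 2

pole = root_positive_real(omega_0 ** 2) / omega_adim                 # pole of eps
zero = root_positive_real(omega_0 ** 2 + omega_p ** 2) / omega_adim  # zero of eps
print(pole)  # ~1.5088 - 0.222j
print(zero)  # ~1.6905 - 0.222j
```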
\begin{figure}[!h]
\centering
\includegraphics[height=8cm]{ConvergenceDisqueTE_Pml.pdf}
\caption{Relative error between the scattered field computed with the modal expansion and with a direct FEM solver as a function of the spectral width. }
\label{fig:Convergence_TE_PML}
\end{figure}
The matrices $\textbf{M}_h$ and $\textbf{K}_h$ have 5300 rows. Among the 1798 eigenvectors stored, 286 are associated with a degenerate eigenvalue.
In figure \ref{fig:Convergence_TE_PML}, we display the relative error between the modal solution
$$
\textbf{E}_{S}^{\mbox{modal}} = \sum_m \alpha_m \tilde{\textbf{E}}_m
$$
and the direct FEM solution
$$
\textbf{E}_{S}^{FEM} = (-i\omega \textbf{M}_h + \textbf{K}_h)^{-1}\textbf{F}_h
$$
as a function of the width of the spectrum. For a given spectral width, the relative error is computed for 31 frequencies and the maximum value of this error is retained and plotted. For a given spectral width $L$, only the modes whose eigenfrequencies $\tilde{\omega}_m$ verify
$$
\text{Re}(\tilde{\omega}_m) \in [-L \, \omega_{\mbox{adim}}, L \, \omega_{\mbox{adim}}] \, \text{and} \, \text{Im}(\tilde{\omega}_m) \in [- \omega_{\mbox{adim}}L/2, 0]
$$
are included in the expansion. The relative error is computed on the whole physical domain $\Omega_p$ (PMLs are not included) by the formula
$$
\text{Relative Error} = \sqrt{ \dfrac{\int_{\Omega_p} \left|\textbf{E}_{S}^{\mbox{modal}} -\textbf{E}_{S}^{FEM} \right|^2 d\Omega_p}{\int_{\Omega_p} \left|\textbf{E}_{S}^{FEM} \right|^2 d\Omega_p}}.
$$
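In discrete form, this error is a weighted $L^2$ norm over the quadrature points of the physical domain. A minimal sketch (the function name and the uniform-weight quadrature are illustrative, not the actual implementation used here):

```python
import numpy as np

def relative_error(E_modal, E_fem, weights):
    """Discrete relative L2 error: fields sampled at quadrature points
    of the physical domain, `weights` holding the quadrature weights."""
    num = np.sum(weights * np.abs(E_modal - E_fem) ** 2)
    den = np.sum(weights * np.abs(E_fem) ** 2)
    return np.sqrt(num / den)

# Toy check: a 1% perturbation of the field gives a ~1% relative error
x = np.linspace(0.0, 1.0, 1001)
w = np.full_like(x, x[1] - x[0])          # uniform weights
E_fem = np.exp(1j * 2 * np.pi * x)        # reference field
E_modal = 1.01 * E_fem                    # perturbed field
print(relative_error(E_modal, E_fem, w))  # ~0.01
```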
\begin{figure}[!h]
\centering
\includegraphics[height=8cm]{ConvergenceDisqueTE_Imag1.pdf}
\caption{Relative error between the scattered field computed with the modal expansion and with a direct FEM solver as a function of the spectral width (only modes such that $Im(\omega_m) \ge -1$ are kept). }
\label{fig:Convergence_TE_PML_Imag1}
\end{figure}
In figure \ref{fig:Convergence_TE_PML}, the three formulas \eqref{eq:FormulaAlpha} (denoted as Usual), \eqref{eq:FormuleMarseille} (denoted as Order2) and \eqref{eq:FormuleWei} (denoted as Alternative Source) are compared. All of them provide a modal solution that converges towards the direct FEM solution, as expected. The formulas \eqref{eq:FormuleWei} and \eqref{eq:FormulaAlpha} give very close results, while the formula \eqref{eq:FormuleMarseille} is slightly more accurate when the spectral width $L$ is small. In figure \ref{fig:Convergence_TE_PML_Imag1}, we display the relative error computed on the disk (of radius 100 nm) versus the spectral width $L$, keeping only the modes whose eigenfrequencies $\tilde{\omega}_m$ verify
$$
\text{Re}(\tilde{\omega}_m) \in [-L \, \omega_{\mbox{adim}}, L \, \omega_{\mbox{adim}}] \ \text{ and } \ \text{Im}(\tilde{\omega}_m) \in [- \omega_{\mbox{adim}}, 0].
$$
With this criterion, we aim to select mostly QNM modes; the error is computed inside the disk, since it is well known that QNMs form a complete set only inside the cavity (see \cite{Leung}). As expected, the error stagnates as $L$ grows, and the formula \eqref{eq:FormuleMarseille} provides the most accurate results.
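The mode-selection criteria used above amount to a box filter on the complex eigenfrequencies. A short sketch (array names are illustrative):

```python
import numpy as np

def select_modes(omega_tilde, L, omega_adim, im_min):
    """Keep eigenfrequencies inside the box
    Re in [-L*omega_adim, L*omega_adim] and Im in [im_min, 0]."""
    keep = ((np.abs(omega_tilde.real) <= L * omega_adim)
            & (omega_tilde.imag >= im_min)
            & (omega_tilde.imag <= 0.0))
    return keep

# Toy example with omega_adim = 1: spectral width L = 2,
# keeping only modes with Im >= -1 (the QNM-like criterion above)
w = np.array([0.5 - 0.2j, 1.5 - 1.5j, 3.0 - 0.1j, 1.0 - 0.9j])
print(select_modes(w, 2.0, 1.0, -1.0))  # [ True False False  True]
```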
\FloatBarrier
\subsection{3-D sphere}
We consider the case of a field diffracted by a dielectric sphere of radius 100 nm, with the same material parameters as in 2-D:
$$ \varepsilon_\infty = 6, \quad \omega_0 = 4.572 \cdot 10^{15} \text{ rad/s}, \quad \omega_p = \dfrac{\omega_0}{2}, \quad \gamma = 1.332\cdot 10^{15} \text{ rad/s}. $$
The physical computational domain is the parallelepipedic box $[0, 150\,\mbox{nm}] \times [0, 150\,\mbox{nm}] \times [-150\,\mbox{nm}, 150\,\mbox{nm}]$ containing a quarter of the dielectric ball (see figure \ref{fig:Maillage3D}). PML layers are added to the mesh of figure \ref{fig:Maillage3D}; their thickness is equal to 100 nm,
with only one cell in the direction of the PMLs. The PML damping $\sigma$ is taken equal to $2$.
\begin{figure}[!h]
\centerline{\includegraphics[height=6cm]{MaillageSphere.png}}
\caption{Mesh used for the scattering of a sphere}
\label{fig:Maillage3D}
\end{figure}
The source is an incident plane wave propagating in the z-direction and polarized in the x-direction:
$$ \textbf{E}_{\mbox{inc}} = e^{i k z} \textbf{e}_x. $$
We impose a perfectly conducting condition on the plane $x=0$ (i.e. $\textbf{E} \times \textbf{n} = 0$) and a Neumann
condition on the plane $y=0$ (i.e. $\textbf{H} \times \textbf{n} = 0$) in order to obtain the same solution as for the whole sphere.
Fourth-order edge elements are used for the unknown $\textbf{E}$ on the mesh of figure \ref{fig:Maillage3D}. We compute the field diffracted by the sphere for 31 angular frequencies $\omega$ evenly spaced in the interval $[\omega_0/2, 2 \omega_0]$. Because of the coarse mesh, the numerical error obtained for the highest frequency $2 \omega_0$ is equal to 3.73\%. This error is computed by comparing the numerical solution with the analytical solution given by the Mie series. The two solutions are displayed in figure \ref{fig:SolSphere}.
\begin{figure}[!h]
\centerline{\includegraphics[height=8cm]{SolAnalyticFmaxEy_Sphere.jpg}
\includegraphics[height=8cm]{SolNumericFmaxEy_Sphere.jpg} }
\caption{Real part of the diffracted field (component $E_x$ of the electric field) in the plane $y=0$. On the left,
the numerical solution; on the right, the analytical solution.}
\label{fig:SolSphere}
\end{figure}
For this case, the matrices $\textbf{M}_h$ and $\textbf{K}_h$ have 31\,246 rows. Among the 8055 stored eigenvectors, 919 are associated with degenerate eigenvalues.
Numerical pulsations are plotted in figure \ref{fig:SpectrumSphere} with the same adimensionalization coefficient $\omega_{\mbox{adim}}$ as in 2-D.
\begin{figure}[!h]
\centerline{\includegraphics[height=8cm]{SpectrumNumericMaxwellSphere.pdf}}
\caption{Numerical adimensionalized pulsations for the sphere.}
\label{fig:SpectrumSphere}
\end{figure}
\begin{figure}[!h]
\centerline{\includegraphics[height=8cm]{ComparaisonPulsationSphere.pdf}}
\caption{Numerical adimensionalized pulsations for the sphere. Numerical eigenvalues are in blue,
analytical QNMs in red.}
\label{fig:SpectrumSphereCompar}
\end{figure}
When we zoom in on the box $Re(\omega) \in [0, 5 \, \omega_{\mbox{adim}}]$, $Im(\omega) \in [-0.75 \, \omega_{\mbox{adim}}, 0]$, we obtain the
pulsations $\omega_m$ of figure \ref{fig:SpectrumSphereCompar}. In this figure, we have also represented
the analytical QNM pulsations. Since the mesh is much coarser in 3-D, some QNMs are not correctly approximated. We observe two accumulation points, one for
$$ \omega/\omega_{\mbox{adim}} \approx 1.5088 - 0.2221i$$
which corresponds to a pole of $\varepsilon(\omega)$ and one for
$$ \omega/\omega_{\mbox{adim}} \approx 1.6905 - 0.2221i$$
which corresponds to a zero of $\varepsilon(\omega)$. As in the 2-D case, we compute the relative error between the modal solution and the direct FEM solution. However, the relative error is now computed on the curl of $\textbf{E}$, in order to remove the contribution of static modes:
$$
\text{Relative Error} = \sqrt{ \dfrac{ \displaystyle \int_{\Omega_p} \left| \nabla \times \textbf{E}_{S}^{\mbox{modal}} - \nabla \times \textbf{E}_{S}^{FEM} \right|^2 d\Omega_p}{\displaystyle \int_{\Omega_p} \left| \nabla \times \textbf{E}_{S}^{FEM} \right|^2 d\Omega_p}}.
$$
This error is plotted in figure \ref{fig:ConvergenceSphere} for the formulas \eqref{eq:FormulaAlpha}, \eqref{eq:FormuleMarseille} and \eqref{eq:FormuleWei}.
\begin{figure}[!h]
\centerline{\includegraphics[height=8cm]{ConvergenceSphereRotE.pdf}}
\caption{Relative error on curl of E versus the spectral width. Case of the sphere. }
\label{fig:ConvergenceSphere}
\end{figure}
Similarly to what has been observed in 2-D, the three formulas provide a modal solution that converges towards the direct FEM solution. As in the 2-D case, only the modes such that
$$
\text{Re}(\tilde{\omega}_m) \in [-L \, \omega_{\mbox{adim}}, L \, \omega_{\mbox{adim}}] \, \text{and} \, \text{Im}(\tilde{\omega}_m) \in [- \omega_{\mbox{adim}}L/2, 0]
$$
are kept, where $L$ is the spectral width.
When a reduced spectrum is selected, the formula \eqref{eq:FormuleMarseille} is the most accurate. If the electric field is desired, a convenient approach consists in discretizing $\textbf{H}$ with edge elements (instead of $\textbf{E}$), reconstructing $\textbf{H}$ with the modal expansion
$$ \textbf{H}^{\mbox{modal}} = \sum_m \alpha_m \tilde{\textbf{H}}_m, $$
and then computing $\textbf{E}$ using Maxwell's equations
\begin{equation}
\textbf{E} = \dfrac{1}{-i \omega \varepsilon(\omega)} \left( \textbf{J} + \nabla \times \textbf{H}^{\mbox{modal}} \right)
\label{eq:ReconstructE}
\end{equation}
\begin{figure}[!h]
\centerline{\includegraphics[height=8cm]{ConvergenceSphereReconstructE.pdf}}
\caption{Relative error on electric field E (as computed in \eqref{eq:ReconstructE}) versus the spectral width. Case of the sphere. }
\label{fig:ConvergenceSphereE}
\end{figure}
In figure \ref{fig:ConvergenceSphereE}, the relative error on the electric field has been computed with this method. Only the formulas \eqref{eq:FormulaAlpha} and \eqref{eq:FormuleWei} can be used to obtain $\textbf{H}^{\mbox{modal}}$ through the coefficients $\alpha_m$. The coefficients $\alpha_m$ given by the formula \eqref{eq:FormuleMarseille} can only be used to reconstruct $\textbf{E}^{\mbox{modal}}$ (with equation \eqref{eq:modal_expansion}): this formula has been established by introducing the unknowns $\textbf{E}, \textbf{E}', \textbf{P}, \textbf{Q}$ (see section \ref{sec:ComparMarseille}), so only these four unknowns can be reconstructed with it, and not $\textbf{H}$. In figure \ref{fig:ConvergenceSphereE}, we observe that the field $\textbf{E}$ reconstructed with this method converges correctly to the numerical electric field. However, the accuracy obtained on $\textbf{E}$ is not as good as the accuracy obtained on $\textbf{H}$ (in figure \ref{fig:ConvergenceSphere}).
\section{Acknowledgements}
Alexandre Gras acknowledges the support of the DGA and INRIA. Philippe Lalanne would like to thank Boris Gralak and Guillaume Demesy for fruitful discussions.
\section{Funding}
This work was funded by the Agence Nationale de la Recherche (ANR-16-CE24-0013), the Agence de l'Innovation de la D\'efense (DGA), and the Institut National de Recherche en Informatique et en Automatique (INRIA).
\section{Conclusion}
In this paper, we have discussed how the scattered field $\textbf{E}_S, \textbf{H}_S$ can be computed from the discrete eigenmodes of Maxwell's equations. Due to the discrete nature of the problem, these eigenmodes form a complete basis, i.e. the numerical solution can be written exactly as a combination of the eigenmodes. However, the coefficients $\alpha_m$ that appear in the expansion are not unique: we have shown that infinitely many formulas exist for computing $\alpha_m$. New formulas can be found by choosing a different linearization of the dispersive Maxwell's equations or a different splitting of the source term. With our common formalism, we have been able to recover the three formulas \eqref{eq:FormulaAlpha}, \eqref{eq:FormuleMarseille} and \eqref{eq:FormuleWei} previously proposed in the literature. Numerical experiments show that all these formulas converge towards the numerical solution. In the tested cases, we observed that the formula \eqref{eq:FormuleMarseille} is slightly more accurate than the others when only a small part of the spectrum is selected.
We also explained how degenerate eigenvalues are treated with a simple Gram-Schmidt orthogonalization. This procedure is essential in order to construct a basis of eigenmodes that is orthogonal with respect to the matrix $\textbf{M}_h$, which can be seen as a non-classical scalar product. We detailed how dispersive PMLs are handled in our formalism. Because of the symmetry of the original dispersive Maxwell's equations, there is no need to compute the biorthogonal eigenvector (or left eigenvector), since it can be obtained directly from the right eigenvector. However, for more complex cases, such as gratings with quasi-periodic conditions where Maxwell's equations are no longer symmetric, the computation of left eigenvectors would be required.
\FloatBarrier
\bibliographystyle{apalike}
Rails.application.config.session_store :cookie_store, key: '_gongwei-xyz_session'
The Spanish basketball league system, or Spanish basketball league pyramid, is a series of interconnected professional basketball leagues in Spain. The system has a hierarchical format of promotion and relegation, encompassing competitions at different levels.
Men
Men compete at five different levels in the pyramid: the 1st-tier Liga ACB, the 2nd-tier LEB Oro, the 3rd-tier LEB Plata, the 4th-tier Liga EBA, and the 5th-tier Primera División, which comprises the lower regional divisions.
Liga ACB is organized by the Asociación de Clubs de Baloncesto. The LEB leagues and Liga EBA are organized by the Spanish Basketball Federation. The lower divisions are organized by the regional federations.
The Divisions
For the 2014–15 season, the divisions of Spanish basketball are as follows:
1ª División (15 groups, with one group per Autonomous Community, except for the Basque Country, La Rioja and Navarre, which share the same group; in Catalonia it is known as the Copa Catalunya).
Regional Divisions
References
Basketball in Spain
Q: Comparing a SecureString from an input field with the database in C#
XAML
<PasswordBox PasswordChar="*" PasswordChanged="PasswordBox_PasswordChanged" Background="#545d6a" Foreground="White" FontSize="18"/>
Code behind
private void PasswordBox_PasswordChanged(object sender, RoutedEventArgs e)
{
if (this.DataContext != null)
{ ((dynamic)this.DataContext).SecurePassword = ((PasswordBox)sender).SecurePassword; }
}
I have a class Klant with a property Paswoord that I want to compare with the SecureString.
ViewModel
public SecureString SecurePassword { private get; set; }
Klant = DataBaseOperations.OphalenKlantViaUsername(UserName);
if (Klant != null)
{
if (Klant.Paswoord == SecurePassword.ToString())
{
// the password is correct and the program continues
}
else
{
MessageBox.Show("the password is incorrect");
}
}
else
{
// the user does not exist
}
Can somebody help me?
A: You are currently comparing the password with the string representation of the SecureString class. SecureString.ToString() will not return the secured password as a string (it only returns the type name). You will have to explicitly convert it:
private bool IsPasswordValid(SecureString referencePassword, SecureString password)
{
    IntPtr passwordPtr = IntPtr.Zero;
    IntPtr referencePtr = IntPtr.Zero;
    try
    {
        passwordPtr = Marshal.SecureStringToGlobalAllocUnicode(password);
        string plainTextPassword = Marshal.PtrToStringUni(passwordPtr);

        referencePtr = Marshal.SecureStringToGlobalAllocUnicode(referencePassword);
        string plainTextReferencePassword = Marshal.PtrToStringUni(referencePtr);

        return plainTextReferencePassword.Equals(plainTextPassword, StringComparison.Ordinal);
    }
    finally
    {
        // Zero out and free both unmanaged copies of the passwords.
        // (Two separate pointers are needed: reusing one pointer would
        // leak the first allocation and leave it unzeroed.)
        Marshal.ZeroFreeGlobalAllocUnicode(passwordPtr);
        Marshal.ZeroFreeGlobalAllocUnicode(referencePtr);
    }
}
Usage
if (IsPasswordValid(Klant.Paswoord, this.SecurePassword))
{
// Password is valid
}
Q: important or importantly This is an important empirical properties.
This is an importantly empirical properties.
Which one is correct? Do they mean different things?
A: You need important.
But more importantly: neither sentence is correct due to the plural properties.
These are important empirical properties
or
This is an important empirical property
A: As mplungjan noted, you need to use the plural form (or the singular form) consistently, but in either case, important is the word you want to use.
Important and importantly are two different words: the first is an adjective (similar to quick, happy, stunning), the second is an adverb (similar to quickly, happily, stunningly). As a general rule (there are exceptions), adverbs can be created from adjectives by adding the -ly suffix: so usually (although again, there are exceptions), if you see a word ending in -ly, it is an adverb.
Adjectives describe nouns (He is quick), while adverbs describe verbs (He runs quickly). Since property is a noun, you need to use an adjective in your sentence.
So, the two correct sentences would be:
These are important empirical properties.
This is an important empirical property.
That said, you could also use importantly in two similar-looking sentences:
Importantly, these are empirical properties.
Importantly, this is an empirical property.
Here, the word in question is modifying the verb (is), and not the noun. However, the meaning of these sentences is different: in this case, you're not saying the properties are important, but that the fact that they are empirical properties is itself important.
A: [I prepared this but the question was migrated before I updated the screen...]
Neither is correct, and they do mean different things.
Correcting is easy: This, is and an are singular so properties needs to be singular as well (property); or, if there really are properties, then you should have These are or perhaps These are some.
This is an important empirical property.
This is an importantly empirical property.
Important is an adjective and describes a noun, property. The property is important as well as being empirical.
Importantly is an adverb. Adverbs do not describe nouns in the same way; but they can modify adjectives. Using importantly describes empirical; it does not say anything directly about the property. We might infer that the property is important, but what is definitely important is that it is empirical. It's not a particularly good example to try to explain the difference, because importantly empirical doesn't mean very much.
A better example might be
This is an important packaged product.
This is an importantly packaged product.
In the first sentence, the product is important and packaged. In the second, importantly describes packaged: the package makes the product look important.
The difference might be made clearer with commas and hyphens, but the usages of adjectives and adverbs fix the meaning well enough without them.
This is an important, packaged, product.
This is an importantly-packaged product.
As an open-source software editor, we owe a lot to the PrestaShop community. One good example is that the software translations are actually handled by members of the community from all over the world.
We use Crowdin, a collaborative translation platform to share our multilingual translation projects with you. Check the Project's official page for more info.
Our team is working hard to release version 1.7, coming very soon, which brings a new translation system. It makes it quicker and easier for community members to translate PrestaShop into your language.
Translating is one way to join our amazing community and to allow new success stories to come to life in your country! If you have questions on how to translate PrestaShop, contact us at translation@prestashop.com. We're looking forward to welcoming you to the translators' community.
Tetele is a village in Cena Parish, Jelgava Municipality in the Semigallia region of Latvia. The village is located on the Lielupe river approximately 39 km from the capital Riga and 8 km from the city of Jelgava.
References
Towns and villages in Latvia
Jelgava Municipality
Doblen County
Semigallia
Q: Github Actions Workflow that runs when branch is main AND a matching tag is present I'm having trouble figuring out how to make the events that trigger a workflow respond in an "AND" manner, as they seem to default to "OR" - the below YAML results in the workflow running twice (I presume once in response to the branch being correct and the other being due to the tag being matched):
on:
push:
branches:
- main
tags:
- v*
What I'm looking for is the workflow to run when the commit is to the main branch and tagged with a v* tag. What actually happens is that the workflow will run when either of these conditions are true (or twice if they both are).
Is there a way to restrict the workflow to only running when all on events are matched?
Q: Search text with aggregate in MongoDB I am trying to search text in an aggregated MongoDB query but I am unable to do it. I have applied lots of solutions but nothing is working.
Here is my code...
I have two collections, named "posts" and "users". I am applying the aggregation on the posts collection.
E.g.
(posts collection)
posts = [{
"_id": ObjectId("601223fecb7cff30f5e198f9"),
"user_id": ObjectId("5feb09d01292aa121551adaf")
},
{
"_id": ObjectId("601229e8204fdd422f8b38d1"),
"user_id": ObjectId("5feb0a181292aa121551adb1")
},
{
"_id": ObjectId("6013a0551755a1730c6a424d"),
"user_id": ObjectId("5feb09d01292aa121551adaf")
}]
(users collection) users = [{
"_id": ObjectId("5feb09d01292aa121551adaf"),
"name": "john doe"
},
{
"_id": ObjectId("5feb0a181292aa121551adb1"),
"name": "Sim tim"
},
{
"_id": ObjectId("5feb09d01292aa121551adaf"),
"name": "john sina"
}]
My aggregation query is as follow
db.posts.aggregate([
{ $match: {"users_data.name": { $regex: "john", $options: 'i' }}},
{ $sort: { "createdAt": -1 } },
{ $limit: 10 },
{
$lookup: {
from: 'users',
let: { userId: "$user_id" },
pipeline: [{
$match: {
$expr: { $eq: ["$_id", "$$userId"] },
}
},
],
as: 'users_data',
}
},
{
$unwind: {
path: "$users_data",
preserveNullAndEmptyArrays: true // optional
}
},
{
$project: {
"users_data":"$users_data",
"_id": 1
}
},
]);
This query gives me a blank array. When I pass a blank object {} in the $match filter, it gives me all records; but when I search for a user name like "john", it again gives me a blank array.
Can anyone help me with this? How can I search for a string from the users collection directly within the aggregation query?
Thanks in advance
Talk:BPP (complexity)

WikiProject Mathematics (Rated Start-class, Mid-importance)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Mathematics rating: Start Class, Mid Importance. Field: Discrete mathematics

WikiProject Computer science (Rated Start-class, Mid-importance)
This article is within the scope of WikiProject Computer science, a collaborative effort to improve the coverage of Computer science related articles on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Start: This article has been rated as Start-Class on the project's quality scale.
Mid: This article has been rated as Mid-importance on the project's importance scale.

Untitled

Some examples of BPP / BPP-complete problems might be appreciated

What does this 1/4-clause mean in long run? That chance of failure in many runs is (1/4)^N, 1/2*(1/2)^N or what? --Taw

The idea is that running it n times and choosing the answer that occurred most often (a "majority vote") will almost always be correct for relative small n (like say, 20-50). The calculations aren't quite as simple as you describe, but are a consequence of the Chernoff bound. Deco 10:31, 24 Mar 2005 (UTC)

I'm not familiar with Wikipedia customs, and English is not my mother tongue, so I may not express myself clearly enough, but let me ask a question to all the editors of this page: Those who don't understand computational complexity theory why don't just shut up here? I quote: "The existence of certain strong Pseudorandom number generators imply that P=RP=BPP. This implication is conjectured to be true."

Jesus. The implication is true, it is not a conjecture, it is a theorem by Nisan and Wigderson. The existence of those generators is the conjecture, which depends on certain hardness assumptions. There are several such results, I think the latest and strongest is by Impagliazzo and Wigderson from 1997, who prove that P=BPP if E contains a language which has 2^Omega(n) circuit complexity for almost every n.

This even got a corrected version: "It has been conjectured that the existence of certain strong pseudorandom number generators implies that P=RP=BPP." Just as stupid as the previous one.

Please, no personal attacks. Your change is correct to my knowledge, and we appreciate the help. The original editor's knowledge may have been out-of-date, since this is not a classical result, or it may have been referring to some stronger result based on a stronger definition. I'm not really sure what happened, but even experts make incorrect statements in their area. Deco 04:47, 4 Jan 2005 (UTC)

The above commenter may know one thing about complexity theory, but does not know anything about manners. He/she definitely does not represent the complexity theory community, which I know to mostly contain very nice and respectful people. --Boaz, March 21, 2005.

Hmm. There is some subtle difference between P and BPP. Example: Let we have probabilistic algorithm for adding two binary numbers of length n. This algorithm uses rnd() somewhere. On non-deterministic turing machine, this algorithm will have probability of success A, that can be made close to 1 by computing it many times (it is never 1, but with some luck i can expect to get correct results).

On deterministic turing machine, one has to use pseudorandom number generator always seeded with same value (otherwise algorithm isn't deterministic), and this algorithm will fail for some inputs. Thus it is not an universal algorithm for adding binary numbers. The difference is subtle, but it is there. It has nothing to do with complexities, it's purely linguistical; it's just that P algorithm for addition of binary numbers does solve one problem (for pair of binary numbers, compute sum), and "BPP" algorithm using pseudorandom numbers solves other problems (for SOME pairs of binary numbers, give out sum, for some pairs, give out garbage)

So: If there is no algorithm that solves certain problem A in polynomial time, there could quite well be BPP algorithm for doing same thing _sometimes_ (sometimes it will solve and sometimes it will fail) in polynomial time. And using strong pseudorandom number generator, there will be deterministic polynomial time algorithm for solving this problem for some inputs. But generally speaking, this algorithm will not be the solution to original problem A, because it will only work for some inputs.

My point is that "there is P algorithm to some problem A" and "there is BPP algorithm for some problem A" mean different things and you can not use pseudorandom number generator to turn latter into former (you will get "P algorithm that solves A for some restricted set of inputs and gives out garbage for other set")

--DL, December 03, 2005.

You are correct that ordinary pseudorandom number generators cannot (as far as we know) be used to produce polynomial-time algorithms based on BPP algorithms; instead, a source of "true" randomness is required. It has been proven that some probabilistic algorithms can solve a problem in polynomial time given only a small number of random bits, just enough for a "seed". It is an open question in research whether P=BPP, and research in this direction has generally followed the idea of producing pseudorandom generators sophisticated enough to actually replace truly random bits in certain applications. But to claim as a matter of fact that P≠BPP is really jumping the gun - there's convincing evidence to the contrary. Deco 19:29, 3 December 2005 (UTC)

I think you have not read what I wrote well enough. Issue has nothing to do with strength of pseudorandom numbers (longer explanation on bottom of page). Yes, with strong pseudorandom numbers you can emulate non-deterministic machine, and for sake of this talk i can assume that you emulate it perfectly. But it only let you turns what we name "BPP solution to problem A" into what is named "P solution for problem A that gives out garbage for certain inputs", and latter is NOT equivalent to "P solution for problem A [it is implied that it'll work for all inputs]". It's classic example of English terminology being not accurate enough to clearly specify the difference between "being able to emulate BPP on deterministic machine", and "BPP solution to problem A" being equivalent to "P solution to problem A". I agree with you that with strong pseudorandom number generator you can run BPP algorithms on deterministic machine. But it is not related to what i argue about.

P=NP or P=BPP?

Is it really true that one of P=NP or P=BPP is true? I find this a bit hard to believe. Who added this claim (regarding exponential size circuits for EXPTIME) and where's your reference? Thanks. Deco 19:37, 3 December 2005 (UTC)

I've commented out that paragraph, since it's most likely wrong. P=NP would imply P=BPP, so the disjunction (P=NP or P=BPP) would in turn imply P=BPP unconditionally. This is not known to be the case. --MarkSweep (call me collect) 18:18, 4 December 2005 (UTC)

BPP and P terminology/linguistics

My point is that:

Let I have developed "BPP algorithm for adding two binary numbers", and "P algorithm for adding two binary numbers" (i.e. both algorithms works in polynomial time but former fails with probability 1/3).
Them are in fact solving two different problems:\n\n\"BPP algorithm\" solves the problem SOMETIMES, and \"P algorithm\" solves it ALWAYS.\n\nOn deterministic machine, result depends soliely to input. If you \"convert\" BPP to P using pseudorandom number generator, no matter how strong, you will get \"P algorithm that gives out sum for SOME pairs of binary numbers (and for some it gives garbage)\". (unless you believe that with your pseudorandom number genrator it will work way better than it worked with true random numbers\u00a0:-) )\n\nSo, there is \"P algorithm that gives sum for ANY pair of binary numbers\", and \"P algorithm that gives sum for SOME pairs of binary numbers (and garbage for other pairs)\".\n\nIt is clear that these two algorithms are solving _different_ problems. It has precisely nothing to do with strength of pseudorandom number generator, as long as it is deterministic.\n\nYou could even use \"ultimately strong\" (pseudo?)random number source, such as file with data obtained from true random number generator. (we can say that this file is part of algorithm\u00a0:-) It's as close to true randomness as you can get on deterministic machine (even to the point that some people would argue it isn't deterministic\u00a0:-) ). But the _only_ property i use is that with same seed it gives same sequence, so my argumentation will work regardless of strength of random number generator, or even with true random numbers if them is written to file for repeatability.\n\nIf you are thinking about using different seeds and \"inheriting\" seed from previous run like how you do that in software, then you get \"P algorithm that takes seed and pair of binary numbers, and for some parameters gives out sum of binary numbers (for other, garbage), and the final random seed\" (It's not original problem at all.)\n\nIn summary, my point is: If we say that \"there is BPP solution to problem A\" it doesn't imply that we can say \"there is P solution to problem A\". 
However, with strong pseudorandom numbers, we can run same BPP algorithm on deterministic machine and get P solution to problem A that gives correct result for SOME inputs, and garbage for other. But it's not what we can name \"P solution to problem A\" because this BPP-like solution would give garbage for other inputs. If number of possible inputs is infinite, no matter from how many trials you compute majority vote, there will be some inputs when this algorithm will give out garbage.\n\nAnyway, it absolutely doesn't matter what I think on subject and what logical reasoning i do have. If there is no references to research where it is shown that \"existence of strong pseudorandom number generaturs implies BPP=P\", or that \"NP=P or BPP=P\", these claims is \"original insight\", and it's way worse than \"original research\". Or maybe these claims is really trivial(i don't think it is the case), then whoever inserted them must provide proof. Sure, if it's so trivial that it doesn't need reference he shouldn't have problems proving it in few lines.\n\nMy point is related to terminology and linguistics. The problem is that when we say that \"problem is BPP\" and when we say \"problem is P\" it has different meanings regardless of random number generators. It is related to the human language. I actually agree that (most likely) problems that are in BPP are in \"P that is allowed to fail for some inputs\".\n\n--DL , December 04, 2005. (will register soon.)\n\nI'm not sure I begin to understand your objection, but this won't stop me from trying to say something: Don't confuse problems with algorithms. Take a problem such as SATISFIABILITY: it asks whether a given propositional formula has a satisfying truth assignment. Now we can build several algorithms which work on this problem. Some of them will always return the correct answer while potentially taking a very long time. 
Some of them will return the correct answer most of the time while never requiring an unreasonable amount of time. Some of them will almost never return the correct answer, but will be blazingly fast.\nThe class BPP is a class of problems. It consists of those problems for which algorithms exist which, loosely speaking, produce the correct solution more often than not, and in reasonable time. So a direct way of showing that a given problem is in BPP is to exhibit a polynomial-time algorithm with a better-than-average chance of obtaining the correct answer.\nAs discussed above, a key issue is to quantify the computational advantage provided by a truly random source over a deterministically generated pseudorandom sequence. One way of proving P=BPP would be to show that there exist strong pseudorandom number generators that cannot be distinguished from a truly random source by an algorithm that is constrained to run in a certain amount of time. --MarkSweep\u00a0(call me collect) 18:36, 4 December 2005 (UTC)\nOkay, say we define a class ErrorP of problems such that some instances can be solved correctly in polynomial time, while other instances are solved incorrectly. In this case, your argument does clearly verify that BPP is contained in ErrorP. Unfortunately, this isn't very useful, because just about every problem is in ErrorP (just hard code the answer to one instance and return no for the rest). You can try to specify that it's only rarely wrong, but if you choose an arbitrary random number generator, it may so happen that you choose one that happens to get many instances wrong, perhaps almost all instances.\nRelated is the idea of universal hashing, which says that, with high probability, any hash function from a class of functions is unlikely to produce long chains on random inputs. If one doesn't work, we can change to another and hope it works. Simple linear congruential random number generators are an example of such a class. 
But I don't know of any way to ensure that you will get a hash function that won't produce long chains or to show that this is unlikely if you choose a generator at random. Deco 19:25, 4 December 2005 (UTC)\nThe trick to showing that P = BPP is that, while any one seed may fail for certain inputs, a \"good\" random number generator will work for any input, given the correct seed. Then it's just a matter of looping over all possible seeds, which can be done in polynomial time, provided the required seed size is at most logarithmic in the problem size. Ben Standeven 00:43, 24 March 2006 (UTC)\nThat would only work for P = RP. To show P = BPP, you need to be able to decide that a particular seed was one of the correct seeds.\nJumpDiscont (talk) 20:15, 13 July 2010 (UTC)\nMost seeds are correct. You are supposed to loop over all possible seeds, and (for BPP) take a majority vote among the answers received.\u2014Emil\u00a0J. 11:42, 14 July 2010 (UTC)\nIs your idea that it _should_ be easy to show that most seeds are correct, or that someone has already shown that for some specific (hopefully nonempty) class of prngs, most seeds are correct?\nJumpDiscont (talk) 20:20, 14 July 2010 (UTC)\nIt's not my idea, it's the definition of a pseudorandom generator. See e.g. the classical Nisan\u2013Wigderson paper[1].\u2014Emil\u00a0J. 10:08, 15 July 2010 (UTC)\n\nBPP and Monte Carlo\n\nIs BPP equivalent to Monte Carlo? We have elsewhere ZPP is equivalent to Las Vegas. -Ozga 20:18, 27 March 2006 (UTC)\n\nYes, BPP-algorithms are equivalent to one of those casino towns. -- EJ 22:39, 27 March 2006 (UTC)\n\nWould it be a good thing to mention this in the article? I think so. -- Ozga 05:36, 28 March 2006 (UTC)\n\nPapdimitriou defines Monte Carlo as equivalent to RP. Hmm. Ozga 17:06, 30 March 2006 (UTC)\n\nBPP is referred to as \"Atlantic City,\" not Monte Carlo or Las Vegas. 
Sadeq 04:44, 17 August 2007 (UTC)\n\nBounds\n\nThe bound ek\/18 on the error for k runs is suboptimal, it seems to be an artifact of whatever general version of Chernoff's bound was used to derive it. The actual value is of the order ${\\displaystyle {\\frac {1}{\\sqrt {k}}}\\left({\\frac {2{\\sqrt {2}}}{3}}\\right)^{k}}$, which is about ${\\displaystyle e^{-k\/16.98}\\,}$. It does not make any practical difference (especially since the convention of using 1\/3 for the error of one run is itself completely arbitrary), but still I think that it is a bit misleading to give arbitrary numbers in the table which make a false impression of being tight. \u2014\u00a0Emil\u00a0J. 19:07, 11 November 2009 (UTC)\n\nI agree. What do you suggest? Putting in the value \"16.98\" also seems pretty pointless. --Robin (talk) 22:00, 11 November 2009 (UTC)\nI'm not quite sure what would be the best way to solve it. Maybe just say something like \"2ck for some constant c > 0\", and leave the exact value unspecified? \u2014\u00a0Emil\u00a0J. 11:44, 12 November 2009 (UTC)\nThe phrase \"for some constant c > 0\" seems too long to put in the infobox. But I have no other ideas, so go ahead with this. Maybe someone will think of something better. --Robin (talk) 13:01, 12 November 2009 (UTC)\nOK, I've implemented the change. Feel free to fix it if you get a better idea. \u2014\u00a0Emil\u00a0J. 
13:20, 12 November 2009 (UTC)","date":"2016-07-30 22:56:09","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 2, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6894404292106628, \"perplexity\": 757.5716416419654}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-30\/segments\/1469258943369.84\/warc\/CC-MAIN-20160723072903-00006-ip-10-185-27-174.ec2.internal.warc.gz\"}"} | null | null |
Q: gradle parent pom like feature At my work we use Maven. I am going to try Gradle for the first time. We use a common parent pom for all projects, which has settings for commonly used Maven plugins and a few common dependencies. Is there a similar option available in Gradle?
My second question is regarding release management. We use the Maven release plugin, which works pretty well for us. Is there something similar available in Gradle?
A: To share stuff within multiple projects of the same build, use allprojects { ... }, subprojects { ... }, etc. Also, extra properties (ext.foo = ...) declared in a parent project are visible in subprojects. A common idiom is to have something like ext.libs = [junit: "junit:junit:4.11", spring: "org.springframework:spring-core:3.1.0.RELEASE", ...] in the top-level build script. Subprojects can then selectively include dependencies by their short name. You should be able to find more information on this in the Gradle Forums.
To share logic across builds, you can either write a script plugin (foo.gradle), put it up on a web server, and include it in builds with apply from: "http://...", or write a binary plugin (a class implementing org.gradle.api.Plugin), publish it as a Jar to a repository, and include it in builds with apply plugin: ... and a buildscript {} section. For details, see the Gradle User Guide and the many samples in the full Gradle distribution.
A current limitation of script (but not binary) plugins is that they aren't cached. Therefore, a build will only succeed if it can connect to the web server that's serving the plugin.
As to your second question (which should have been a separate question), there are a couple of third-party release plugins available, for example https://github.com/townsfolk/gradle-release.
A: I think the best way to get something like a Maven parent pom is to use Gradle's "apply from".
Something like this:
allprojects { // or: subprojects { ... }
    apply from: "gradle/script/common.gradle"
}
The link can be a relative path or a URL. Hope it helps.
Reference:
Import a Gradle script from the root into subprojects
Super POM, Parent POM type of hierarchy management in Gradle
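For concreteness, here is a sketch of what such a shared common.gradle might contain. The file contents below are a hypothetical illustration, not part of the original answer (the junit/spring coordinates simply echo the short-name idiom mentioned in the first answer):

```groovy
// gradle/script/common.gradle -- hypothetical shared script playing the
// role of a Maven parent POM for every project that applies it.
apply plugin: 'java'

repositories {
    mavenCentral()
}

// Short names for commonly used dependencies, selectable in subprojects.
ext.libs = [
    junit : 'junit:junit:4.11',
    spring: 'org.springframework:spring-core:3.1.0.RELEASE'
]

dependencies {
    testCompile libs.junit
}
```

Each subproject then picks up the shared configuration with apply from: "${rootDir}/gradle/script/common.gradle" (or the URL form, as described in the first answer).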
A: The io.spring.dependency-management plugin allows you to use a Maven bom to control your build's dependencies:
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath "io.spring.gradle:dependency-management-plugin:0.5.3.RELEASE"
    }
}
apply plugin: "io.spring.dependency-management"
Next, you can use it to import a Maven bom:
dependencyManagement {
    imports {
        mavenBom 'io.spring.platform:platform-bom:1.1.1.RELEASE'
    }
}
Now, you can import dependencies without specifying a version number:
dependencies {
    compile 'org.springframework:spring-core'
}
A: I too wanted this type of feature, I have created a plugin to provide this here: https://github.com/boxheed/gradle-pater-build-plugin
A: You can convert the parent pom content into a Gradle init script quite easily.
A Gradle init script provides the same functionality as a Maven super/parent pom. The basic difference is that you can call an init script:
at run time
with as many of them as you like
This gives us the flexibility to change the init script at run time, though with the drawback of not tracking the changes.
You need to move the repositories, distribution management, profiling, and other checks like FindBugs, Checkstyle, etc. into the init script.
The details are extensive; you can find my complete write-up here:
http://www.scmtechblog.net/2015/12/how-to-migrate-parent-pom-from-maven-to.html
There I have also explained the Gradle release plugin, which is similar to the Maven release plugin.
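As a minimal illustration of the idea, here is a sketch of such an init script; the file name and contents are a hypothetical example, not taken from the linked post:

```groovy
// init.gradle -- applied to every build it is invoked with, much like a
// parent POM; passed on the command line, so it can change between runs.
allprojects {
    repositories {
        mavenCentral()
    }
}
```

It would be applied at run time with something like gradle --init-script init.gradle build.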
A: To achieve your goal you could apply the concept of a 'multi-project build', explained in the Gradle user guide here.
Basically, you create an umbrella project that defines a set of common configurations through a build.gradle file and a settings.gradle file.
The build file contains the properties, dependencies, and plugins common to all projects; settings.gradle defines which subprojects inherit those configurations.
Moreover, to get an idea of the Gradle plugin ecosystem, you could check this source.
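A minimal sketch of the two files described in this answer; the project names and contents are illustrative assumptions, not taken from the user guide:

```groovy
// settings.gradle -- declares which subprojects take part in the build
include 'app', 'library'

// build.gradle (root) -- configuration inherited by every subproject
subprojects {
    apply plugin: 'java'
    repositories {
        mavenCentral()
    }
}
```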
A: It is currently not possible, if you want the parent to be cached locally and stored in a Maven repository.
I have added feature request here:
http://forums.gradle.org/gradle/topics/support_for_gradle_parent_shared_between_projects_cached_locally
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 3,706 |
A crash pad is a special mat for climbing with spotting, typically used for bouldering problems on climbing walls and on rock.
Types of crash pads
Crash pads can be roughly divided into two types.
Crash pads for climbing walls
Crash pads for climbing walls are, as a rule, larger. A pad is 20-50 cm thick, with a length and width of 2-4 m. They are laid flush against one another, without gaps, to prevent hands or feet from getting caught in a gap during a fall.
Crash pads for outdoor rock
Crash pads for outdoor rock are smaller; they usually fold and have sewn-in handles or straps for convenient carrying. They are 5-15 cm thick, with a width and length of about 1 m.
Mountaineering and climbing equipment | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 3,055 |
Hey, I'm Cassie Tarakajian, and I'm the creator and lead developer of the p5.js web editor. In this video I'll show you an overview of the basic features of the editor. If you're not familiar with it, the p5.js web editor is a website for hosting and creating p5 sketches. It's designed to be beginner friendly: you don't need to download anything or configure anything to get started; just open it and you can go. For more information about how the editor got made, feel free to check out the welcome video in the video description. Also, I'm going to assume a basic familiarity with p5.js and web development; if you're not feeling confident about that, I've linked a few resources in the video description as well. All right, let's get started.
The first thing you'll want to do is navigate to the website, which is at alpha.editor.p5js.org (or, as it will be moved in the future, just editor.p5js.org). You'll see your code on the left and the preview of your sketch on the right. In order to see the preview you hit play, and it will render your p5.js sketch on the right; to stop it, you just hit the stop button. So let's hit play again so we can see our results. Say we want to change the background color of our sketch from gray to purple: we'd have to change this background function and give it different arguments. Let's make this a three-value color, with one value for the red, one for the green, and 255 for the blue. You'll see that nothing's changed, and this is because we need to hit the play button again; and now it's purple. If you don't like hitting play every time you want to refresh the sketch, you can click this auto-refresh button, and then, if we delete this, it will automatically reload as you type, which is pretty nifty. So let's add a little bit more to the sketch. We'll do an ellipse; let's put it at 50, 50, with a width of 50 pixels and a height of 50 pixels. And as we would expect, there's our ellipse. Let's make it interactive too, so I'll do mouseX, mouseY, 50, 50. Great.
Say you're happy with your sketch and you want to save a copy of it so you can access it again later or show it off to your friends; then you're going to need to make an account. We're not logged in right now, so let's create a new one. Let's call our username "shinymountain", make up an email, shinymountain@gmail.com (who knows if that's a real email address), make a password, and sign up. Cool, so now we're logged in: you can see it says "hello shiny mountain" in the corner, and you'll also see "My account". So in the web editor you have all the basic account features: logging in and logging out, saving sketches, and all that jazz.
Now you can save your sketch. Go to File > Save, and you'll see a little notice come up saying the project was saved and autosave is enabled. So what's that autosave thing? It means that as you type and work on your sketch, it will save your changes as you go, provided you have saved the sketch before; it won't autosave if you haven't ever saved the sketch.
Now, let's say you're hitting the play button a lot to refresh your sketch, and you don't want to use the auto-refresh because you're typing pretty slowly, or whatever, but clicking up here is getting annoying. The web editor has keyboard shortcuts: you can hit Command+Enter (on Windows it would be Control+Enter, but I'm using a Mac, so it's Command+Enter), and that refreshes the sketch; you'll see when I hit that, it does a little white-screen flash. And if I hit Shift+Command+Enter, it will stop it. So Command+Enter starts, Shift+Command+Enter stops. If you're wondering how to find out what all these keyboard shortcuts are, they're listed in the keyboard shortcuts popup, and there are a lot more. The ones I find really useful are tidy and save. Say the autosave is not saving often enough for you, or you just literally want to hit save, as I do: hit Command+S, or Ctrl+S on Windows, and it will say "Project saved". You can also use the code-tidy feature: say the indentation is all weird, like this; you hit Shift+Tab and it will indent it properly. I find that really useful, since for a lot of beginner coders indenting is hard, and when I'm trying to help someone and their sketch is all messy and I can't really figure out what's going on, I hit that and it's magically tidied for you.
Previously I talked about some basic things you can do with an account, like creating an account, logging in, and saving sketches, but let's talk a little more about some other things you can do. You can see all the sketches you have saved by going to My account and then My sketches; we'll see we have the two we created, the first one we made and then the one we duplicated. You can also get to this screen by going to File > Open, so we could switch back to the original and then, with File > Open again, go to our copy. Even cooler, we can also see a bunch of example sketches that are included with the editor. If we go to Examples, we can check them out and get inspired. Let's go to Flocking, hit play, and we'll see this cool flocking example. Then we can make our own copy of it (remember, we can duplicate sketches we don't own): we hit Duplicate, and now we have our own copy saved to our account, shiny mountain. So if I go to My sketches, we'll see it in there now, and I can go back to the one I was working on before.
The next thing I'll show you is some basic editor settings. They're pretty simple: you can change the theme to dark or light, change the text size to be smaller or larger, turn autosave off, change the indentation amount; lots of different things. Then, for things like changing your password or email, you go to My account and then Settings, and you can change your email, password, and username there. One last thing in Settings: it says "login with GitHub", so you're able to create an account with Google or with GitHub.
Another feature of the editor is the sidebar, which you can use to access different files in the sketch and edit them. If you click on this arrow here, it expands the sidebar and you'll see a bunch of different files. We've only been editing sketch.js, which is the p5.js sketch file, but you can click on the HTML or the CSS if you need to edit those for any reason. One reason we'd need to edit them: let's say we want to change the name of this file. We rename it to sketch_cool, and then we need to update our HTML.
Another set of output options is the text-based canvas. We've been looking at output as purely visual, but if you click on any of these (which I won't demonstrate), you can get sound feedback as well as visual feedback. Another thing about the editor is that it's fully keyboard accessible, so you can tab through the website and access anything on it. Thanks for watching. This video was made with our first public release of the web editor, and since it is a website, it is a living, breathing thing, subject to change as we add features and fix bugs. So if you would like to give feedback on the web editor or report bugs, please check our video description for how to do that. Also, if you'd like to contribute, you can check out how to do that in the video description as well. Thanks for watching.
Great job y'all! From what I've seen so far, sketch management could use some improvement. The same problem I have with OpenProcessing: it's hard to go through a long list of sketches. It would be cool to have sketch thumbnails and to organize sketches into collections. I see that you guys are already working on those features though. Cool! Also kudos to the UI designer! Love the aesthetics!
Crushed it! Clear, concise, interesting and completely ready for classroom use.
You are a very attractive woman.
Hi, I'm a newbie learning programming. I was following a tutorial and trying to figure out the correct syntax before the tutorial explained it, and I found a line of code that breaks the web editor when you run it, even on a blank sketch: "while (mousePressed = true);"
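That line hangs the sketch because = in the condition is an assignment, not a comparison: mousePressed = true stores true in the variable and the whole expression evaluates to true, so the empty while loop never exits and blocks the page. A plain-JavaScript illustration of the pitfall (outside p5.js, with an iteration cap added so the demo can terminate):

```javascript
// Assignment (=) vs. comparison (===): `flag = true` stores true in flag,
// and the whole expression evaluates to true, so a loop guarded by it
// can never exit on its own. An iteration cap keeps this demo finite.
let flag = false;
let iterations = 0;
while ((flag = true) && iterations < 3) { // always truthy: assignment, not a test
  iterations++;
}
console.log(iterations); // 3 -- only the cap stopped the loop

// The intended check uses === (or simply `while (flag)`):
flag = false;
let ran = false;
while (flag === true) {
  ran = true; // never reached
}
console.log(ran); // false
```

In a p5.js sketch the fix is the same: write while (mousePressed === true), or better, avoid busy-waiting entirely and use the mousePressed() event function instead.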
Great job! You guys are incredible! | {
"redpajama_set_name": "RedPajamaC4"
} | 2,638 |
{"url":"https:\/\/bioinformatics.stackexchange.com\/questions\/15389\/reliably-create-female-and-male-individuals-in-a-ped-file-for-plink","text":"# Reliably create female and male individuals in a -ped file for plink\n\nFor a study I need some kind of \"decoy\" GWAS-data. For this I created a panel with some genes, the same ones as in the real data. From NCBI I fetch some of their SNPs and create my own artificial .ped and .map file. Furthermore I save for each snp if it lies on the X-Chromosome.\n\nI then go on and randomly create female individuals by giving them heterozygous alleles on their X-Chromosome genes. Males only get homozygous ones.\n\nBut when I then run pLink with the --check-sex argument I get a sexcheck file where there's only zeroes in the SNPSEX column. So do you know how --check-sex works that I can then create decoy-individuals that reliably (as much as possible) get detected as a given gender?\n\n\u2022 I was able to create a sample map\/ped files that result in SNPSEX column being populated in plink 1.19 using 1000genomes Xchrom - found in their FAQ: internationalgenome.org\/faq\/where-are-snps-xymitochondrial-chr. The relevant conversion command .\/plink --check-sex --vcf ..\/ALL.chrX.phase3_shapeit2_mvncall_integrated_v1b.20130502.genotypes.vcf --maf 0.05 --out ped_map_chrX\/chrmX --recode . Even in this dataset, SNPSEX is 0 (ambiguous) in 400 (~16%) of cases. One could start pruning from here to arrive at a suitable minimal set of SNPs that are still recognised in plink. Feb 18 at 12:42\n\u2022 Ok, thanks! I'll try that out. 
Feb 19 at 10:03","date":"2021-09-26 00:19:51","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.34311845898628235, \"perplexity\": 5282.386977806864}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-39\/segments\/1631780057787.63\/warc\/CC-MAIN-20210925232725-20210926022725-00270.warc.gz\"}"} | null | null |
H&M Silk Scarf ($13) // Basic Black Shirt // Black Circle Skirt // J.Crew Tights // Target Heels, old (suede heels) // Love's Affect Earrings c/o (similar) // Rebecca Minkoff Purse // Stella & Dot Bracelet // David Yurman Cuff // Shop Ditto Sunglasses (first month free with code SOUTHERNANCHORS) // Essie Polish in "Twin Sweater Set"
It was a chilly, windy day when we took these photos but boy did it do wonders for this scarf. It already has such movement to it but now it's really just showing off. I love the unexpected pairing of navy, mustard yellow, blush, and black with traces of purple. It's a true statement piece with a bit of 70's flair. Being able to wear a scarf like this was a welcome change since the majority of winter blanket scarves makes me feel like I'm being swallowed. Don't get me wrong - I love my blanket scarves - but a different material is a nice change of pace to get me out of a winter rut!
These pictures are beautiful Jen! Obsessed with your scarf and sunglasses! | {
"redpajama_set_name": "RedPajamaC4"
} | 2,319 |
\section{Introduction}
There is a common lore that a ``first principle" determination of
the order parameters
characterizing (chiral) dynamical symmetry
breaking (DSB), such as, typically, the $\langle \bar q q \rangle$ condensate,
is definitively out of
the reach of basic QCD perturbation theory. This is
largely justified, traditionally, by the fact that DSB is an
essentially non-perturbative mechanism. However, it may depend
on what exactly one
means by perturbation theory.
For instance,
since the pioneering work of Nambu and Jona-Lasinio
(NJL)~\cite{NJL}, it has been understood how
it is possible to
resum a relevant class of graphs to obtain
the qualitative (and some quantitative)
properties of DSB explicitly,
at least in specific approximations and/or models.
Also, independently of the NJL idea, the modern
Chiral Perturbation Theory (ChPT)~\cite{GaLeut}
gives a consistent effective description of data at low energies
where the QCD perturbative series is not applicable.
Indeed, definite progress has been made in relating ChPT to
generalized NJL models~\cite{ENJL}, although
a precise connection between the (numerous) ChPT parameters, and
the basic QCD coupling and quark mass
parameters is far
from being resolved at present.
With a more formal (but related) motivation, it has been also
explored since long ago how
definite non-perturbative information
may be inferred from
appropriately modified perturbation
series~\cite{delta}, at least in simplified or exactly solvable
models. In particular,
the convergence of ordinary perturbation
can be systematically {\em improved}
by a variational--like procedure, in which
the separation of the
action into ``free" and ``interaction" parts
is made to depend on a set of auxiliary parameters,
to be fixed by some optimization procedure~\footnote{
In $D =1$ field theories, this optimized perturbation
theory (``delta-expansion")
gives a rigorously convergent~\cite{converg}
series of approximations, even in strong coupling cases.}.
As a partial attempt to merge some of these ideas,
we have re-examined~\cite{qcd1,qcd2} with a new approach
the above mentioned old problem of generating from
the basic QCD Lagrangian
non-trivial values for the
quark condensate, pion decay constant, or
dynamical quark
masses.
The basic point is to transform the
ordinary perturbative expansion, in $\alpha_s$, into an expansion
in an {\em arbitrary} mass parameter, around
a non-trivial (fixed-point)
solution of the renormalization group evolution,
proportional to the basic scale $\bar\Lambda$.
In some sense it may be viewed as a systematic, order by order,
improvement of the
original NJL construction, but with
a consistent treatment of the renormalization (and directly applied
to the QCD quark-gluon interactions).
\section{A crude dynamical mass ansatz}
As a crude first illustration of the mechanism, consider
the renormalization group (RG) evolution of the running mass,
\begin{equation}
m(\mu^{'}) = m(\mu )\;\; \exp\left\{ -\int^{g(\mu^{'})}_{g(\mu )}
dg\, {\gamma_m(g) \over {\beta(g)}} \right\}\;,
\label{runmass}
\end{equation}
where $\beta(g)$, $\gamma_m(g)$ drive the running of the coupling
$g(\mu)$ and mass $m(\mu)$, respectively. Solving (\ref{runmass}) for the
``fixed point" boundary condition:
$ M \equiv m(M)$,
gives (to first RG order)
\begin{equation}
M_1 = {m(\mu) \over{[1 +2b_0 g^2(\mu) \ln({M_1 \over{\mu}})
]^{\gamma_0 \over{2b_0}}}}\;,
\label{MRG1}
\end{equation}
where $b_0$, $\gamma_0$ are the one-loop RG-coefficients (normalization
is such that $\beta(g) = -b_0 g^3 -b_1 g^5 -\cdots$,
$\gamma_m(g) = \gamma_0 g^2 +\gamma_1 g^4
+\cdots$). \\
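For the reader's convenience, let us sketch the (standard, one-loop)
steps leading from (\ref{runmass}) to (\ref{MRG1}). With
$\beta(g) \simeq -b_0 g^3$ and $\gamma_m(g) \simeq \gamma_0 g^2$, the
integral in (\ref{runmass}) is elementary, and one obtains
\begin{equation}
m(\mu^{'}) = m(\mu) \left[{g^2(\mu^{'})\over{g^2(\mu)}}\right]^{\gamma_0
\over{2b_0}};\;\;\;\;
g^2(\mu^{'}) = {g^2(\mu)\over{1 +2b_0\, g^2(\mu)\,\ln ({\mu^{'}\over{\mu}})}}\;,
\label{oneloopsol}
\end{equation}
so that setting $\mu^{'} = M_1$ and imposing the fixed-point condition
$M_1 \equiv m(M_1)$ immediately gives (\ref{MRG1}). \\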
Although expression (\ref{MRG1}) is initially related
to the ``current" mass $m(\mu)$ via (\ref{runmass}), it
has the hallmarks
of a {\em pole} mass, thanks to the boundary
condition $ M_1 \equiv m(M_1)$~\footnote{In particular~\cite{qcd2},
(\ref{MRG1})
is {\em scale} invariant (in contrast with $m(\mu)$)
and gauge-invariant,
as the pole mass should be.}.
This coincidence between the
pole mass $M$ and the current mass
$m(\mu \equiv M)$, is, however, only an artifact
of our crude approximation, neglecting at the moment the non-logarithmic
perturbative corrections~\cite{Broadhurst}.
Now, the most important
property of expression (\ref{MRG1})
is that it is
{\em non-zero} in the chiral limit, $m(\mu) \to 0$.
Indeed, (\ref{MRG1}) identically reads
\begin{equation}
M_1 (\ln (M_1/\bar\Lambda) )^{\frac{\gamma_0}{2b_0}} = \hat m
\label{M1rewritten}
\end{equation}
where for convenience we introduced
the RG invariant scale $\bar\Lambda =
\bar\mu \, e^{-{1\over{2b_0 \bar g^2}}}$
(at first RG order), and the {\em scale-invariant} mass
$\hat m \equiv m(\bar\mu) (2b_0 g^2(\bar\mu))^{-\frac{\gamma_0}{2b_0}} $.
(\ref{M1rewritten}) may then be seen as a function
$\hat m(M_1)$, and requiring its inverse, $M_1(\hat m)$, to be defined
on the whole physical domain $ 0 < \hat m < \infty$, and to match the
ordinary perturbative asymptotic behavior for $ \hat m \to \infty$,
implies~\footnote{
Another, a priori possible solution of
(\ref{M1rewritten}),
$M_1 \to 0$ for $\hat m \to 0$, is rejected because it
is only defined for $0 \le \vert \hat m \vert \le
(\gamma_0/2b_0)^{\gamma_0/2b_0}
e^{-\gamma_0/2b_0} \bar\Lambda < \bar\Lambda$, and is therefore not compatible with the
asymptotic perturbative behavior of (\ref{MRG1}) for $m(\mu) \gg
\bar\Lambda $~\cite{qcd2}.} $M_1 (\hat m \to 0) \to \bar\Lambda $.
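As a purely numerical illustration (in no way part of the construction itself), one may solve (\ref{M1rewritten}) for $M_1(\hat m)$ by simple bisection on the branch $M_1 > \bar\Lambda$, working in units $\bar\Lambda = 1$ and taking for definiteness the one-loop exponent $\gamma_0/(2b_0) = 12/29$ appropriate to $n_f = 2$ (the function and variable names below are of our choosing):

```python
import math

def solve_gap(m_hat, p=12.0 / 29.0, lam=1.0):
    """Solve M * (ln(M/lam))**p = m_hat for the branch M > lam by bisection.

    p = gamma_0/(2 b_0) is the one-loop mass anomalous-dimension exponent
    (12/29 for n_f = 2 in QCD); lam plays the role of Lambda-bar.
    """
    f = lambda M: M * math.log(M / lam) ** p - m_hat
    lo, hi = lam * (1.0 + 1e-12), lam * 1e6   # f(lo) < 0 < f(hi)
    for _ in range(200):                      # bisect down to machine precision
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# The solution interpolates between M -> Lambda-bar (= 1 here) in the
# chiral limit m_hat -> 0 and a growing perturbative behaviour at large m_hat.
for m_hat in (10.0, 1.0, 1e-2, 1e-4):
    print(m_hat, solve_gap(m_hat))
```

Note that bracketing the root just above $\bar\Lambda$ selects precisely the branch retained in the text, i.e. the one compatible with the perturbative behavior at large $\hat m$.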
It is of course desirable to go beyond the one-loop RG approximation,
and to take into account as well the non-logarithmic
corrections, necessary to make contact with the usual perturbative
pole mass~\cite{Broadhurst}.
Our aim is
to obtain a variational ``mass gap" where the
non-trivial chiral limit property of (\ref{MRG1})
is preserved, while
at the same time providing us with a systematically (order by order)
improvable ansatz, thanks to a particular reorganization of the
basic perturbative expansion, as will be explained in the
next section.
\section{Resumming the delta-expansion}
In the present context, a simplest form of the so-called
delta-expansion~\cite{delta} may
be defined
by formally substituting everywhere in the bare QCD Lagrangian:
\begin{equation} m_0 \to m_0\; (1-x); ~~~~g_0 \to g_0\; (x)^{1/2}\;.
\label{substitution}
\end{equation}
The
parameter $x$ in
(\ref{substitution}) just
interpolates between the free Lagrangian, for
$x=0$, and the interacting but {\em massless} Lagrangian, for $x=1$.
In the simplest field-theoretical applications, one would then use
(\ref{substitution}) to expand any perturbative expression of
($m_0$, $g^2_0$)
to a given order $x^q$, and try to
apply some optimization
prescription with respect to the (arbitrary) mass,
$m_0$. Accordingly, the somewhat empirical
but most often successful idea~\cite{delta} is that the
{\em least sensitive} region with respect to $m_0$ (entering at any
fixed order $q$)
should give the best approximation to the exact result, which is
{\em independent} of
$m_0$. But, in many non-trivial field theories, and in particular
in the present QCD case, the whole procedure must first of all
be made consistent with renormalization. As it turns out, the only
way to get
a finite and non-zero result (e.g., $M(m \to 0) \neq 0$) is
to resum
the $x$-series, using
an appropriately constructed
contour integral transform~\cite{gn2}. At first RG order,
this essentially gives a mass as an integral over
expression (\ref{MRG1}) (with substitution (\ref{substitution})
understood). Beyond the one-loop approximation,
our final mass ansatz reads~\footnote{$v$ in (\ref{contour7})
is related to the original
expansion parameter $x$ as $x = 1-v/q$, $q$
being the order of the $x$-expansion.}:
\begin{eqnarray}
{ M^P_2 (m^{''})\over \bar\Lambda}
= {2^{-C} m''\over{2 i \pi}} \oint dv {e^{\;v}
\over{F^A(v) [C + F(v)]^B}} \nonumber \\
\cdot {\left(1 +{{\cal M}_{1}\over{F(v)}}
+{{\cal M}_{2}\over{F^2(v)}}+\cdots \right)},
\label{contour7}
\end{eqnarray}
where the contour is around the $] -\infty, 0] $ axis;
\begin{equation}
F(v) \equiv \ln [m''v] -A \; \ln F -(B-C)\; \ln [C +F],
\label{Fdef}
\end{equation}
with $A =\gamma_1/(2 b_1)$, $B =\gamma_0/(2 b_0)-\gamma_1/(2 b_1)$,
$C = b_1/(2b^2_0)$;
$\bar\Lambda$ is the (RG-invariant) scale
at two-loop order;
and finally
\begin{equation}
m''\equiv \displaystyle{\left(\frac{m(\bar\mu)}{ \bar\Lambda}\right) \;
2^{C}\;[2b_0 \bar g^2]^{-\frac{\gamma_0}{2b_0}}
\;\left[1+\frac{b_1}{b_0}\bar g^2\right]^B}
\;
\label{msec2def}
\end{equation}
is the scale-invariant, arbitrary (dimensionless) ``mass" parameter.
By construction, $F(1)$ in the integrand of (\ref{contour7})
resums the leading and next-to-leading logarithmic
dependence in $m(\bar\mu)$ to all orders~\cite{qcd2}.
The non-logarithmic perturbative coefficients,
${\cal M}_{1} \equiv (2/3)~(\gamma_0/2b_0)$ and ${\cal M}_2$,
connect~\cite{Broadhurst} the pole mass with
the running mass $m(M)$. \\
Note that it is implicitly always possible to choose a
renormalization scheme (RS) such that $b_i = \gamma_i =0$ for
$i \geq 2$, since $b_i$, $\gamma_i$ are then RS--dependent. In that sense,
eq.~(\ref{contour7}) resums
the full RG dependence in $\ln (m'' v)$.
In contrast, the purely perturbative (non-logarithmic)
information, contained in ${\cal M}_{1}$, ${\cal M}_{2}$, is
limited by present knowledge to two-loop order.
This is where the variational principle
and optimization play their role, whereby we hope to obtain a sensible
approximation to the true dynamical mass.
Observe first
that, were we in a simplified theory where
${\cal M}_{1} = {\cal M}_{2} = \cdots = 0$,
(\ref{contour7}) would have a very simple
behaviour near its optimum (located at $m'' \to 0$),
giving a simple pole
with residue $M_2 = (2C)^{-C}\;\bar\Lambda $.
Now, in the more realistic cases, ${\cal M}_1$, ${\cal M}_2$,...
cannot be neglected,
but we can obtain
a series of approximants to the dynamical mass,
by expanding (\ref{contour7}) in successive powers of $m'' v$,
using the standard relation
\begin{equation}
\label{hankel}
\frac{1}{2i \pi} \oint dv\: e^v \: v^\alpha =
\frac{1}{\Gamma[-\alpha]}\; ,
\end{equation}
and then looking for optima $M^P_2(m''_{opt})$, $m''_{opt} \neq 0$. \\
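It is instructive (though again not needed for the construction) to check (\ref{hankel}) numerically: collapsing the contour onto the two edges of the cut $]-\infty, 0]$ gives, for non-integer $-1 < \alpha < 0$,
$\frac{1}{2i\pi}\oint dv\, e^v\, v^\alpha = -\frac{\sin \pi\alpha}{\pi}\int_0^\infty dt\, e^{-t}\, t^{\alpha}$
(this elementary rewriting is ours), and the remaining real integral can be done by any crude quadrature. The sketch below (names of our choosing) restricts to $-1/2 \le \alpha < 0$ so that the substitution $t = u^2$ leaves a bounded integrand:

```python
import math

def hankel_contour(alpha, u_max=8.0, n=4000):
    """Numerically evaluate (1/(2 i pi)) oint dv e^v v^alpha for the contour
    around ]-inf, 0], via its edge-collapsed form (non-integer -1 < alpha < 0):
        -sin(pi*alpha)/pi * int_0^inf dt e^(-t) t^alpha .
    The substitution t = u**2 keeps the integrand bounded for alpha >= -1/2;
    the u-integral is done by composite Simpson quadrature (n even)."""
    def f(u):
        if u == 0.0:
            return 2.0 if alpha == -0.5 else 0.0  # limit of 2*u**(2a+1)*e^-u^2
        return 2.0 * u ** (2.0 * alpha + 1.0) * math.exp(-u * u)
    h = u_max / n
    s = f(0.0) + f(u_max)
    s += 4.0 * sum(f((2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2.0 * sum(f(2 * k * h) for k in range(1, n // 2))
    return -math.sin(math.pi * alpha) / math.pi * (s * h / 3.0)

# Compare with the exact right-hand side 1/Gamma(-alpha) of the identity.
for a in (-0.5, -0.25):
    print(a, hankel_contour(a), 1.0 / math.gamma(-a))
```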
The previous construction is quite general and therefore
directly applicable to any (asymptotically free) model,
taking obviously its appropriate values of the RG coefficients,
$b_i$, $\gamma_i$.
The ansatz (\ref{contour7}) was
confronted~\cite{gn2} with the exactly known mass
gap~\cite{FNW}
for the ${\cal O}(N)$ Gross-Neveu (GN) model, for {\em arbitrary}
$N$.
The results
of different optimization prescriptions gave
estimates with errors of ${\cal O}$(5\%) or less, depending
on $N$ values~\cite{gn2}.
It is important to note
that expression (\ref{contour7}), for arbitrary $N$ in the
GN model, uses exactly the {\em same}
amount of (perturbative plus RG) information as the one
at our disposal at present for a QCD quark mass: namely,
the {\em exact} two-loop RG-resummed
plus perturbative ${\cal M}_1$, ${\cal M}_2$
dependence.
Since our construction essentially relies on RG-properties (and
analytic continuation), going from 2 to 4 dimensions
is not expected to cause major changes, at least naively.
\section{Hidden singularities of the mass ansatz}
One complication, actually, {\em does} occur:
as a more careful examination of
relation (\ref{Fdef}) indicates, there are
branch cuts
in the $v$ plane, with ${\rm Re}[v_{cut}] > 0$
for the relevant case of $n_f =$ 2 or 3 in QCD.
These make the expansion undefined when
approaching the origin, $v=0$, and simply indicate
the non single--valuedness of (\ref{contour7})
below those branch points.
The origin of those singularities has some
similarity with the renormalon ones~\cite{renormalons},
as they also appear
when extrapolating a RG--resummed expression
down to an infrared scale $m'' \simeq 0$.
However, a main difference with renormalons is that
in our construction
it is possible~\cite{qcd2} to move those extra cuts to
a safe location, ${\rm Re}[v^{'}_{cut}] \leq 0$, observing that
the actual position of those cuts
depends, at second order, on the RS, via
$\gamma_1$. Performing
thus a second-order
perturbative RS change in $m(\mu)$, $g(\mu)$,
which changes $\gamma_1(\overline{MS})$ to a (singularity-safe) $\gamma^{'}_1$,
it is then sensible, in the present context,
to invoke a variant of the ``principle of
minimal sensitivity" (PMS)~\cite{delta}, requiring
a flat optimum (plateau) of (\ref{contour7})
with respect to the
{\em remnant} RS arbitrariness~\cite{qcd2}. \\
One may perhaps legitimately wonder why the ordinary
renormalon singularities of the pole mass~\cite{BeBr94}
do not seem to appear
in our construction. In fact, the usual renormalon
singularities always appear as a result of crossing
the Landau pole~\footnote{Actually, this is an oversimplified
picture, valid at one-loop
RG level only~\cite{EdRPe96}.
However, higher order properties of renormalons
do not affect, qualitatively, our argument.},
which simply reflects an ambiguity from perturbation theory, calling for
non-perturbative corrections which are typically in the form of
power corrections~\cite{renormalons}.
In contrast, (\ref{contour7}) is such that the Landau pole
(corresponding to $F =0$ in our language) is {\em not} crossed, but only
smoothly reached from above, ${\rm Re} F >0$.
(Moreover, due to the recurrent dependence
in $F$, (\ref{Fdef}), implying that $F(v) \simeq m^{''} v$ for
$m^{''} v \to 0$, the poles of (\ref{contour7}) at $F=0$ ($v =0$)
entirely come
from the purely
perturbative part, i.e. due to ${\cal M}_1, {\cal M}_2 \neq 0$).
Note that, on more
phenomenological grounds, there is no strong contradiction with the usual
consequences of the presence of renormalons: while the latter indicate,
in the pole mass case,
an ambiguity of ${\cal O}(\bar\Lambda)$~\cite{BeBr94},
our construction necessarily exhibits an {\em arbitrary} renormalization
scheme (RS) dependence, via the above mentioned
$\gamma_1$ coefficient, calling for optimization.
In practice we have obtained:
\begin{equation}
M^P_{2,opt}(m''_{opt} \to 0) \simeq 2.97\;\bar\Lambda(2)\;
\label{Mnum}
\end{equation}
for $n_f=2$, and a similar result for $n_f =3$.
\section{Order parameters: $F_\pi$ and $\langle \bar q q \rangle$}
The previous dynamical quark mass, although it has some meaning
as regards DSB in QCD, hardly has a direct physical
interpretation, e.g.
as a pole of the S-matrix, due to confinement.
In other words, it is not a properly defined order parameter.
It is however possible to apply the same construction as the one
leading to
(\ref{contour7}),
to obtain
a determination of the ratios $F_\pi/\bar\Lambda$ and $\langle \bar q q \rangle(\mu)/\bar\Lambda^3$.
The latter gauge-invariant quantities are
unambiguous order parameters, i.e. $F_\pi \neq 0$ {\em or} $\langle \bar q q \rangle \neq 0$
{\em imply} DSB.
The appropriate generalization
of (\ref{contour7}) for $F_\pi$ is~\cite{qcd2}
\begin{eqnarray}
& \displaystyle{{F^2_\pi \over{\bar\Lambda^2}} = (2b_0)\;
{2^{-2 C} (m'')^2\over{2 i \pi}} \oint {dv\over v}\; v^2 {e^{\: v}}}
\; \nonumber \\
& \displaystyle{ \cdot \;\frac{1}{F^{\;2 A-1} [C + F]^{\;2 B}}
\; \delta_{\pi }
\left(1 +{\alpha_{\pi}\over{F}}+{\beta_{\pi}
\over{F^2}}
\right) }
\label{Fpiansatz}
\end{eqnarray}
in terms of the same
$F(v)$ defined in eq.~(\ref{Fdef}) (therefore leading to the same
extra cut locations as in the mass case), and where
$\delta_\pi$, $\alpha_\pi$ and $\beta_\pi$
are fixed by matching the perturbative $\overline{MS}$
expansion, known to 3-loop order~\cite{Avdeev}.
A numerical optimization with respect to the RS-dependence, in a way
similar to the mass case, gives e.g for $n_f =2$:
\begin{equation}
F_{\pi ,opt}(m''_{opt} \to 0)
\simeq 0.55\;\bar\Lambda(2)\;.
\label{Fpinum}
\end{equation}
Concerning $\langle \bar q q \rangle$, an ansatz similar
to (\ref{Fpiansatz}) can be derived (with
coefficients $\delta$, $\alpha$, $\beta$
specific to $\langle \bar q q \rangle$ and appropriate changes in the
$m''$, $F$ and $v$ powers),
but for the RG-invariant combination $m \langle \bar q q \rangle$,
due to the fact that our construction
only applies to RG-invariant quantities.
To extract an estimate of
the (scale-dependent) condensate $\langle \bar q q \rangle(\mu)$ is only
possible by introducing
an {\em explicit} symmetry-breaking
quark mass $m_{exp}$ (i.e.
$m_{exp} \neq m $),
and expanding the $m\langle \bar q q \rangle$ ansatz to first order in $m_{exp}$.
This gives
for $n_f =2$~\cite{qcd2}:
\begin{equation}
\langle \bar q q \rangle^{1/3} (\bar\mu = 1\;\mbox{GeV}) \simeq 0.52 \; \bar\Lambda(2)\;.
\label{qqnum}
\end{equation}
Confronting (\ref{Mnum}), (\ref{Fpinum}) and (\ref{qqnum})
gives a fairly small value of the quark condensate~\footnote{The smallness
of $\langle \bar q q \rangle$ is however essentially correlated with the smallness of the
$F_\pi/\bar\Lambda$ ratio estimate in our framework, eq.~(\ref{Fpinum}).}
(and a fairly
high value of the dynamical mass), as compared to other non-perturbative
determinations~\cite{sumrules}.
Although small values of $\langle \bar q q \rangle$ are
not experimentally excluded
at present~\cite{Stern}, it is also clear that our relatively crude
approximation deserves more refinements
for more realistic QCD predictions.
\section{Conclusion and discussion}
The variationally improved expansion in arbitrary $m''$, first developed
in the GN model~\cite{gn2}, has been formally
extended to the QCD case.
It gives non-trivial relationships between
$\bar\Lambda$ and the dynamical masses and order parameters, $F_\pi$
and $\langle \bar q q \rangle$.
To make progress, what is certainly restrictive is the
relatively poor knowledge
of the purely perturbative part of the expansion (only known to
two-loop order in most
realistic field theories). Accordingly, our final numerical results
crucially depend on the optimization~\footnote{
For instance, results for (\ref{contour7}), (\ref{Fpiansatz})
are substantially different~\cite{qcd2}
in the unoptimized $\overline{MS}$ scheme.}. Apart from a few
models where the series is known to large orders
(as in the
anharmonic oscillator~\cite{delta,bgn},
or in the GN model for $N \to \infty$),
we can hardly compare successive orders of this
expansion to estimate, even qualitatively, the
{\em intrinsic} error of such a method.
Invoking the PMS principle~\cite{delta}, although physically
motivated, may
artificially
force the series to converge, with no guarantee that it converges
toward the right result.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 921 |
{"url":"https:\/\/quant.stackexchange.com\/tags\/cvar\/hot","text":"# Tag Info\n\n4\n\nOn 1, I suspect that is a typo and that the second formula should sum to r. On 2, that is applying well-known techniques in how to handle piece-wise linear functions in an optimizer. For instance, see page 4 of these lecture notes. It's basically doing the same thing with a few additional complications. In CVaR optimization, there are more things to sum ...\n\n3\n\nIf $Y=-\\pi(\\mu,D)$ then the first formula is $$\\mathrm{CVaR}_\\eta(-Y)=\\max_{\\nu\\in R}\\left\\{\\nu+\\frac1\\eta E((-Y-\\nu)^-)\\right\\}$$ where $X^-=\\min (X,0)$ and $X^+=\\max(X,0)$. Note that $(-X)^-=-(X^+)$. If we let $1-\\alpha=\\eta$ and $\\nu=-a$ this becomes (assuming $\\max=\\sup$, i.e. the sup is attained, and using $\\sup(\\mathcal A)=-\\inf(-\\mathcal A)$): $$\\... 3 VaR_\\alpha is a scalar choice variable in the minimization problem. In the Rockafeller-Uryasev paper, it is simply called \\alpha\\in R. (C.f., the program described in Theorem 2 of that paper, or the programming problem described after equation (17); alternatively, look at the structure of the choice vector x on page 16 of the Yollin slides.) VaR_\\... 3 One: Your VaR CI relies on normal approximation and might be (very) bad depending on the number of samples and the target function (P&L). Often it is better to use the exact approach based on the empirical distribution (see here: https:\/\/stats.stackexchange.com\/a\/284970\/8298) Two: To estimate CVaR confidence intervals you may use bootstrap confidence ... 2 The minimum value is always attained at d=0. In this proof, I will assume that the distribution of the random variable X is absolutely continuous and monotonically increasing, and thus the CDF of X is invertible (though I believe the result holds generally). Fix \\beta\\in(0,1). 
We have that$$ \\Psi(d,\\alpha)\\equiv\\int_{\\min\\{d,x\\}\\leqslant\\alpha}p(...\n\n2\n\nWe consider the case where the distribution function $F$ of $X$ is strictly increasing. Then \\begin{align*} VaR_{\\alpha}(X) &= \\inf\\{x: P(X >x) \\le \\alpha \\}\\\\ &=\\inf\\{x: F(x)\\ge 1-\\alpha \\}\\\\ &=F^{-1}(1-\\alpha). \\end{align*} Moreover, we note that the distribution function $G$ of $-X$ is defined by \\begin{align*} G(x) &= P(-X \\le x) \\\\ &...\n\n2\n\nThis sounds correct, however step 2 is a little vague, so I will try to restate the steps here for you. The assets in your portfolio must be priced with respect to a set of risk factors (e.g. interest rate curve). Each scenario consists of a value for each of your risk factors. Given the value of your risk factors you can price your portfolio. You want to ...\n\n2\n\nI have solved it myself. The key was to realize that for $X \\geq 0$ and $S_X(t) = \\mathbb{P}(X>t)$ $$\\int_0^\\infty S(t) dt = \\int_0^1 F_X^{-1}(u) du = \\mathbb{E}\\left[X \\right].$$ This is elegantly explained in Characterization of $\\mathbb{E}$. Now this relationship can be extended for the whole real line, thus \\int_0^1 F_X^{-1}(u) du = \\int_0^\\... 1 Yes, conditional VaR (aka Expected Shortfall) is a coherent risk measure and thus, satisfies Monotonicity, Translation invariance, Positive homogeneity and Subadditivity. The latter means that CVaR(R_1+R_2) \\leq CVaR(R_1) + CVaR(R_2) which directly extends to sums of n random variables. Sub-additivity captures the notion that diversification is ... 1 Given the main uses of the VaR relate to risk management such as limit management, and measurement of P&L volatility, it is usually calculated under the physical\/real world measure. Reason being that the risk measure are normally used to predict or explain the P&L movements from one day to another, which one can relate to their historical movements. ... 1 A slightly different take here: 1 Let F be the cumulative distribution function of X. 
We assume that F is continuous. Then, for x\\ge 0, \\begin{align*} F^{-1}(x) = \\inf\\{s: F(s) \\ge x \\}. \\end{align*} Moreover, \\begin{align*} \\text{VaR}_{\\alpha}(X) &= \\inf\\left\\{x :1-F(x) \\le \\alpha\\right\\}\\\\ &=F^{-1}(1-\\alpha). \\end{align*} Consequently \\begin{align*} E\\Big(\\big(X-\\text{VaR}... 1 It really depends on the way you calculate your Var and CvaR. If your are able to get a closed form solutions for the derivative then you must use those, for faster results. Otherwise, you can use bump and reval.\\frac{\\partial V}{\\partial d} = \\frac{V(d+h) - V(d)}{h}$$1 Maybe prove that$$CVaR_\\alpha (X) = \\frac{1}{\\alpha} \\int_0^\\alpha F^{-1}_X(u) du$$has the distortion function$$ g(u)= \\begin{cases} \\frac{u}{\\alpha}, \\quad \\; u \\leq \\alpha \\\\ 1, \\qquad u > \\alpha\\end{cases} would be easier?\n\n1\n\nFrom Ziegel (2013) : The risk of a financial position is usually summarized by a risk measure. As this risk measure has to be estimated from historical data, it is important to be able to verify and compare competing estimation procedures. 
In statistical decision theory, risk measures for which such verification and comparison is possible, are called ...\n\nOnly top voted, non community-wiki answers of a minimum length are eligible","date":"2019-11-12 09:25:08","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9962626695632935, \"perplexity\": 739.4851115096216}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-47\/segments\/1573496664808.68\/warc\/CC-MAIN-20191112074214-20191112102214-00460.warc.gz\"}"} | null | null |
<?xml version="1.0" encoding="utf-8"?>
<resources>
<color name="info_panel_background">#e9e9e9</color>
<color name="red">#FF0000</color>
<color name="green">#00FF00</color>
<color name="blue">#0000FF</color>
<color name="light">#d6d6d6</color>
<color name="dark">#a7a7a7</color>
<color name="black">#000000</color>
<color name="transparent_white">#FFFFFFFF</color>
<!-- Android colors are #AARRGGBB: alpha 0xc8 (200) must come first -->
<color name="transparent_white_200">#c8ffffff</color>
<color name="white">#FFFFFF</color>
<color name="orange">#FF9933</color>
<color name="darkorange">#993300</color>
<color name="mygreen_dark">#378756</color>
<color name="mygreen_light">#66CC99</color>
<color name="hydrogreen">#5d9d76</color>
</resources> | {
"redpajama_set_name": "RedPajamaGithub"
} | 2,882 |
Bart syndrome, also known as aplasia cutis congenita type VI, is a rare genetic disorder characterized by the association of congenital localized absence of skin, mucocutaneous blistering and absent and dystrophic nails.
History
The syndrome was first described by Bruce J. Bart in 1966, who reported a large family with 26 affected members.
Clinical
1. Absence of skin at birth, involving the lower legs and feet, healing within a few months, leaving scarring and fragile skin.
2. Widespread blistering of the skin and mucous membranes.
3. Variable absence and dystrophy of nails.
Genetics
The syndrome is inherited by autosomal dominant transmission with complete penetrance but variable expression. This means that children of an affected parent who carries the gene have a 50% chance of inheriting the disorder, although the extent to which they are affected is variable.
Blistering in Bart syndrome represents a form of epidermolysis bullosa caused by ultrastructural abnormalities in the anchoring fibrils. Genetic linkage of the inheritance of the disease points to the region of chromosome 3 near the collagen, type VII, alpha 1 gene (COL7A1).
See also
List of cutaneous conditions
Bart-Pumphrey syndrome
References
External links
Genodermatoses
Collagen disease
Syndromes affecting the skin | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 8,390 |
{"url":"http:\/\/teambi0s.gitlab.io\/bi0s-wiki\/crypto\/caesar-cipher\/","text":"# Substitution Cipher\n\n## Introduction\u00b6\n\nSubstitution cipher is an encryption scheme, in which position of plaintext units are altered, keeping the sequence same. Basically it means that each character of the message is substituted with a different character to make the ciphertext It is the oldest and simplest way of encrypting data.\n\nSome popular examples of substitution ciphers are: Caesar cipher, ROT13 etc\n\n### Caesar cipher\u00b6\n\nCaesar cipher is one of the oldest and simplest method used for secret communication. The cipher is named after Julius Caesar who used it to send secret messages to his generals. It is a monoalphabetic cipher which means a single character is encrypted at a time. It is also a shift cipher which means that each letter of the plaintext is shifted by a fixed number down the alphabet to get the corresponding ciphertext. So if alphabet \u2018a\u2019 is to be encrypted using key 3 then it will be encrypted as, \u2019a\u2019 + 3 = \u2018d\u2019.\n\nHere's a diagrammatic represntation of the same:\n\nLet us understand the concept better using this example ,\n\nSuppose Alice and Bob want to send messages to each other through an insecure channel. They wanted to use the \u201csimplest\u201d way of encrypting the messages so they agreed upon using Caesar cipher. Hence Alice sets a key of 3. She wants to send \u201cHELLO\u201d, so she replaces each alphabet with the corresponding 4th alphabet that is, \u2018H\u2019 -> \u2018K\u2019 , \u2018E\u2019 -> \u2018H\u2019 , \u2018L\u2019 -> \u2018O\u2019 , \u2018L\u2019 -> \u2018O\u2019 , \u2018O\u2019 -> \u2018R\u2019\n\nFinally, plaintext: \u201cHELLO\u201d \u2192 ciphertext: \u201cKHOOR\u201d\n\nTo decrypt the message Bob is going to backshift the ciphertext with the given key. As no one else has the key, no one can know the message and hence security is ensured. But this is not true in the present world. 
As the key length is limited to maximum of 26. Thus, breaking the caesar cipher becomes easier. It can be brute forced easily to find the actual message. Also the repetition of same alphabets makes patterns and gives the attacker a clue to break it.\n\nROT13\n\nROT13 is a special case in Caesar cipher. ROT13 stands for \u201crotate by 13\u201d i.e it always replaces each plaintext character with the corresponding 13th alphabet. To put it simply it is a case of caesar cipher where the key is taken as 13. You can see that as 13 is the half of 26, it made to sense to some to take the key as 13, which is most distant from 0 or 26. One can get the plaintext by re-doing the same operation as explained above. Even ROT13 isn\u2019t secure like Caesar cipher. It can be broken easily. ROT13 was earlier used in net.jokes newsgroup in the early 1980s.\n\nROT13(ROT13(X)) == X\n\n\nFor example, \u2018M\u2019 -> \u2018Z\u2019 , \u2018I\u2019 -> \u2018V\u2019 , \u2018D\u2019 -> \u2018Q\u2019 , \u2018D\u2019 -> \u2018Q\u2019 , \u2018L\u2019 -> \u2018Y\u2019 , \u2019E\u2019->\u2019R\u2019\n\nFinally, plaintext: \u201dMIDDLE\u201d \u2192 ciphertext: \u201cZVQQYR\u201d\n\nPractice Challenges","date":"2021-04-23 18:16:00","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.3678569495677948, \"perplexity\": 2224.542724744365}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": 
true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-17\/segments\/1618039596883.98\/warc\/CC-MAIN-20210423161713-20210423191713-00602.warc.gz\"}"} | null | null |
Reteag () is a village in Romania, in Bistrița-Năsăud County, in the commune of Petru Rareș. The commune lies at an altitude of 250 m.
Population
According to data from 2002, the village had 2,790 inhabitants.
2002 census
References
External links
Populated places in Romania
WikiProject Geography/Settlements in Romania
"redpajama_set_name": "RedPajamaWikipedia"
} | 1,217 |
{"url":"https:\/\/stats.stackexchange.com\/questions\/191995\/why-is-the-training-score-i-get-from-the-learning-curve-of-multinomial-naive-b","text":"Why is the \u201ctraining score\u201d I get from the learning curve of Multinomial Naive Bayes so different from the training score of the Bernoulli version?\n\nI'm comparing the learning curves of Bernoulli and Multinomial Naive Bayes using the 20_newsgroups dataset from scikit-learn for text-classification. I considered both the \"training score\" and the \"cross validation score\", but I noticed that while in the Multinomial version the training score is very high at the beginning and decreases and the cross-validation score is very low at the beginning and increases, in the Bernoulli version I have a low training score at the beginning (and then it increases). Is it normal or am I doing something wrong? It sounds a bit strange to me.\n\nHere's the Multinomial plot:\n\nThis one is the Bernoulli one:\n\nHere is some of my Python code (Bernoulli version):\n\n####load dataset####\nfrom sklearn.datasets import fetch_20newsgroups\ncategories = ['alt.atheism', 'sci.electronics','rec.sport.hockey']\ntrain = fetch_20newsgroups(subset='train', categories=categories, shuffle=True, random_state=42)\ny = train.target\ntest = fetch_20newsgroups(subset='test', categories=categories, shuffle=True, random_state=42)\n\n####bag of words####\nfrom nltk.corpus import stopwords\nstopwords = stopwords.words('english')\nfrom sklearn.feature_extraction.text import CountVectorizer\ncount_vectorizer = CountVectorizer(stop_words=stopwords, binary=True)\nmatrix_train = count_vectorizer.fit_transform(train.data)\n\nfrom sklearn.naive_bayes import BernoulliNB\nbernoulli = BernoulliNB(alpha = 1.0, fit_prior = True)\n\n####learning curve####\nimport matplotlib.pyplot as plt\nfrom sklearn.learning_curve import learning_curve\ndef plot_learning_curve(estimator, title, X, y, ylim, cv, n_jobs=1, train_sizes=np.linspace(.1, 1.0, 
8)):\nplt.figure()\nplt.title(title)\nplt.xlabel(\"Training examples\")\nplt.ylabel(\"Score\")\ntrain_sizes, train_scores, test_scores = learning_curve( estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)\ntrain_scores_mean = np.mean(train_scores, axis=1)\ntrain_scores_std = np.std(train_scores, axis=1)\ntest_scores_mean = np.mean(test_scores, axis=1)\ntest_scores_std = np.std(test_scores, axis=1)\nplt.grid()\nplt.fill_between(train_sizes, train_scores_mean - train_scores_std,\ntrain_scores_mean + train_scores_std, alpha=0.1,\ncolor=\"r\")\nplt.fill_between(train_sizes, test_scores_mean - test_scores_std,\ntest_scores_mean + test_scores_std, alpha=0.1, color=\"g\")\nplt.plot(train_sizes, train_scores_mean, 'o-', color=\"r\",\nlabel=\"Training score\")\nplt.plot(train_sizes, test_scores_mean, 'o-', color=\"g\",\nlabel=\"Cross-validation score\")\n\nplt.legend(loc=\"best\")\nreturn plt\n\ntitle = \"Learning Curves (Naive Bayes)\"\nfrom sklearn import cross_validation\ncv = cross_validation.ShuffleSplit(matrix_train.shape[0], n_iter=100, test_size=0.2, random_state=0)\nplot_learning_curve(bernoulli, title, matrix_train, y, ylim=(0.7, 1.01), cv=cv, n_jobs=1)\nplt.show()\n\n\nWhy are they so different? The cross validation score is like what I was expecting both in Multinomial and Bernoulli, but the training score should be high at the beginning, right?\n\n\u2022 This may be due to the different scaling of the y-axis. If you look at it closely, the decrease in performance in the first plot is very small. This may happen for various reasons, like having some mislabeled examples (e.g. duplicate examples with different labels). The fact that they converge at a different rate indicates that the first classifier is better suited for this problem. \u2013\u00a0George Jan 22 '16 at 17:37\n\u2022 I'm pretty sure the Multinomial plot is ok, my problem was with the Bernoulli one. 
I thought they didn't have to be so different from each other since their differences are transparent to the programmer using scikit-learn, and mainly one has to pay attention to the representation of the document-vector (Bernoulli requires a binarized vector). I don't understand where is the error. Thank you very much for your answer \u2013\u00a0Trevor Jan 22 '16 at 18:11","date":"2019-08-21 16:23:59","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5242378115653992, \"perplexity\": 4078.451684987348}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-35\/segments\/1566027316075.15\/warc\/CC-MAIN-20190821152344-20190821174344-00342.warc.gz\"}"} | null | null |
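The learning-curve helper in the question references `np` without importing NumPy, and both `sklearn.learning_curve` and `sklearn.cross_validation` have since been removed in favour of `sklearn.model_selection`. A minimal, self-contained sketch of computing the same train/test score data with the modern API — using synthetic binary features as a stand-in for the binarized 20newsgroups bag-of-words matrix, so no download is needed:

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit, learning_curve
from sklearn.naive_bayes import BernoulliNB

# Synthetic stand-in for the binarized document-term matrix (300 docs, 50 terms).
rng = np.random.RandomState(42)
X = (rng.rand(300, 50) > 0.5).astype(int)
y = rng.randint(0, 3, size=300)  # three "newsgroup" labels

# ShuffleSplit now lives in model_selection and takes n_splits, not n_iter.
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
train_sizes, train_scores, test_scores = learning_curve(
    BernoulliNB(alpha=1.0, fit_prior=True), X, y,
    cv=cv, train_sizes=np.linspace(0.1, 1.0, 5))

# These means/stds are exactly what plot_learning_curve shades and plots.
train_mean = train_scores.mean(axis=1)
test_mean = test_scores.mean(axis=1)
```

Comparing `train_mean` between `BernoulliNB` and `MultinomialNB` on the same (count vs. binary) matrices is the quickest way to see whether the divergence comes from the model or from the feature representation.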
Category: Issue I Fall 2008
From an Amateur's Angle: The Impact of the Visual Image in Defining Abu Ghraib
Abstract: Many have deemed the invasion of Iraq as the American government's 'brass-knuckled quest for information' – a strong statement given that the self-appointed 'land of the free' is insinuating...
By gnovisguest Published December 16, 2008 Issue I Fall 2008, Journal, Journal Volume IX
What Good is the 'You' in YouTube? Cyberspectacle and Subjectivity
The spectacle manifests itself as an enormous positivity, out of reach and beyond dispute. All it says is: "Everything that appears is good; whatever is good will appear." - Guy Debord (1994, p. 15)
Self-disclosure of Religious Identity on Facebook
Abstract: Social networking Web sites, such as MySpace and Facebook, have in the last five years become indispensable communication tools for large numbers of young people in the United States....
The "Sufficient Backdoor" Test: A New Model for Indecency Regulation of Converged Media
Abstract: Content-based regulation is subject to the "strict scrutiny" standard in the Supreme Court. The "strict scrutiny" standard takes into account three issues: (1) whether the regulation furthers a compelling...
Fall 2008 Editor's Note
One of the pleasures of writing an editor's note for a journal like gnovis, which covers such a wealth of inspired topics, is the opportunity to spend a quiet afternoon looking at a stack of seemingly unrelated papers– searching for the common thread (or threads) that holds the stack together. Some threads are easier to find than others but, like a thread pulled from a sweater, once discovered they seem to have no end.
By brad.weikel@gmail.com Published December 16, 2008 Editor's Note, Issue I Fall 2008, Journal, Journal Volume IX
The technologies, policies, governance, and standards of telecommunications have gone through many changes over the course of its history. The three regimes that have formed this history have moved from a monopolistic approach, mostly involving radio and telegraph technologies, to an institution-centered approach, including television and satellites, to the current regime, with a more diffused approach to governance, with a variety of state, institutional, and private actors involving more Internet and mobile phone technologies.
By klp Published December 16, 2008 Issue I Fall 2008, Journal, Journal Volume IX | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 245 |
package com.github.ckwen.je.dp.factory.password;
public interface PasswordService {
void reset();
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 6,072 |
package concurrency.ch5;
import static org.junit.Assert.*;
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import org.junit.Test;
public class H20Test {
private final int n = 5;
private final Semaphore mutex = new Semaphore(1);
private int oxygen = 0;
private int hydrogen = 0;
private final CyclicBarrier barrier = new CyclicBarrier(3);
private final Semaphore oxyQueue = new Semaphore(0);
private final Semaphore hydroQueue = new Semaphore(0);
private final CountDownLatch latch = new CountDownLatch(3*n + 1);
@Test
public void test() throws InterruptedException {
ExecutorService oxygenPool = Executors.newFixedThreadPool(n);
for(int i = 0; i < n; i++){
oxygenPool.submit(new O());
}
ExecutorService hydrogenPool = Executors.newFixedThreadPool(2*n);
for(int i = 0; i < 2*n; i++){
hydrogenPool.submit(new H());
}
latch.countDown();
latch.await(5, TimeUnit.SECONDS);
}
private class O implements Runnable {
public void run() {
latch.countDown();
try {
mutex.acquireUninterruptibly();
++oxygen;
if( hydrogen >= 2 ){
System.out.println("=====");
                    hydroQueue.release(2); // wake two waiting hydrogens; acquiring here would deadlock while holding the mutex
hydrogen -= 2;
oxyQueue.release();
oxygen -= 1;
}else{
mutex.release();
}
oxyQueue.acquireUninterruptibly();
System.out.println("Bond O");
barrier.await();
mutex.release();
} catch (InterruptedException e) {
e.printStackTrace();
} catch (BrokenBarrierException e) {
e.printStackTrace();
}
}
}
private class H implements Runnable {
public void run() {
latch.countDown();
try {
mutex.acquireUninterruptibly();
++hydrogen;
if(oxygen >= 1 && hydrogen >= 2){
System.out.println("=====");
hydroQueue.release(2);
hydrogen -= 2;
oxyQueue.release();
oxygen -= 1;
}else{
mutex.release();
}
hydroQueue.acquireUninterruptibly();
System.out.println("Bond H");
barrier.await();
} catch (InterruptedException e) {
e.printStackTrace();
} catch (BrokenBarrierException e) {
e.printStackTrace();
}
}
}
}
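The same rendezvous pattern — a mutex guarding the atom counters, two semaphore queues parking waiting atoms, and a three-party barrier per molecule, with the mutex handed off to the bonding oxygen — can be sketched outside JUnit as well. A hypothetical Python port for illustration (not part of the original test class):

```python
import threading

mutex = threading.Semaphore(1)        # guards the two counters
oxy_queue = threading.Semaphore(0)    # parks oxygens until a molecule forms
hydro_queue = threading.Semaphore(0)  # parks hydrogens until a molecule forms
barrier = threading.Barrier(3)        # one O + two H bond before anyone moves on
counts = {"O": 0, "H": 0}
bonds = []
bonds_lock = threading.Lock()

def bond(kind):
    with bonds_lock:
        bonds.append(kind)

def oxygen():
    mutex.acquire()
    counts["O"] += 1
    if counts["H"] >= 2:
        hydro_queue.release()  # wake two hydrogens (never acquire here!)
        hydro_queue.release()
        counts["H"] -= 2
        oxy_queue.release()
        counts["O"] -= 1
        # completing thread keeps the mutex; the bonding oxygen releases it
    else:
        mutex.release()
    oxy_queue.acquire()
    bond("O")
    barrier.wait()
    mutex.release()  # mutex handoff: released by the oxygen after bonding

def hydrogen():
    mutex.acquire()
    counts["H"] += 1
    if counts["O"] >= 1 and counts["H"] >= 2:
        hydro_queue.release()
        hydro_queue.release()
        counts["H"] -= 2
        oxy_queue.release()
        counts["O"] -= 1
    else:
        mutex.release()
    hydro_queue.acquire()
    bond("H")
    barrier.wait()

n = 5
threads = [threading.Thread(target=oxygen) for _ in range(n)]
threads += [threading.Thread(target=hydrogen) for _ in range(2 * n)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because the mutex stays held from molecule completion until after the barrier, each consecutive group of three entries in `bonds` is exactly one O and two H — the invariant the Java test's `=====` markers are eyeballing.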
| {
"redpajama_set_name": "RedPajamaGithub"
} | 8,640 |
Robert Wyatt has said if he wins this year's MERCURY MUSIC PRIZE it would be "a disgrace".
The ex-Soft Machine drummer – who is nominated for his latest album 'Cuckooland' – has hit out at the award, which could land him the £20,000 prize.
The drummer has played with a host of legendary musicians over his career, including Jimi Hendrix, though he made his name with Soft Machine. He went solo in the 1970s, concentrating on his songwriting, singing and drumming.
Wyatt has been wheelchair-bound for 30 years, since breaking his back in a fall from a fourth-floor window.
As previously reported on NME.COM, the winner of this year's Mercury Music Prize will be announced on September 7. | {
"redpajama_set_name": "RedPajamaC4"
} | 6,516 |
4 Eco Services, Kansas City Areas' premier eco-friendly and environmentally conscious plumbing, HVAC, and water filtration provider, has announced today that it will be exclusively providing Environmental Water Systems (EWS) as their water filtration product of choice.
Environmental Water Systems has been a proven leader in Whole Home Water Filtration for 30 years. EWS was chosen as the exclusive water filtration for the 2011 National Association of Home Builders (NAHB) New American Showcase Home, showcased by numerous publications, and is available in over 700 kitchen and bath showrooms in the United States.
With today's city tap water containing chlorine or chloramine (a toxic chlorine-ammonia compound that is lethal to marine life and corrosive to pipes and appliances), pharmaceutical residues, pesticides, obesogens, and more, we are putting ourselves at risk for developing health issues. Our bodies absorb more chlorine in a 10-minute shower than if we drank a gallon of unfiltered tap water, so we must be aware that we can drink, inhale, and absorb these contaminants.
"The President's Cancer Panel reports that there are over 80,000 understudied and unregulated contaminants in our tap water, some of which are directly linked to cancer. EWS has never focused on taste or gimmicks or the latest trend. We focus on health. We focus on our products' ability to remove these contaminants and protect us – not cover it up by making the water simply taste better. Taste does not equate to health or quality. If it did, then French fries and cookies would be health foods," says Truncale.
"4 Eco Services is committed to providing our customers with unparalleled service and exceptional value, providing EWS is just an extension of that philosophy. It was important for us to align with a company that is continually improving their products to meet the changing water conditions of the families we serve. Their new chloramine filtration line is like none other on the market, and is just one example of their commitment to cutting edge performance," says Williams.
EWS whole home filtration appliances are meticulously engineered and made in the USA to meet years of uninterrupted, hassle-free protection. The quality, value, and longevity of EWS appliances are unsurpassed, and backed by an award-winning customer service and support team. Their proprietary carbon media is the highest grade and purest granular activated carbon available, with zero fillers or binders, and no chemicals or additives.
For more information on 4 Eco Services and their eco-friendly home services, visit http://www.4ecoservices.com or contact Ian Williams at 816-241-2112.
4 Eco Services is a leading home services company in Kansas City, Missouri, that is committed to protecting and preserving the environment with home systems that are eco-friendly and energy efficient. We are happy to provide our customers with reliable, same-day services for their home. | {
"redpajama_set_name": "RedPajamaC4"
} | 4,484 |
Tobias Tandler, also Tendeler, Tendier, Tandeler (born 24 July 1571 in Dresden (or Torgau); died 3 August 1617 in Wittenberg) was a German physician and mathematician.
Life
The son of the master builder Christoph Tandler was admitted to the Fürstenschule St. Augustin in Grimma on 24 December 1583. Under its first rector, Adam Siber, the school became a forge for Saxony's next generation of clergy and civil servants. In a strictly organised daily routine, the pupils were taught above all religion and the ancient languages. Here Tandler acquired a solid education, together with insights and experiences that formed and lastingly shaped his character and way of life. An academic career was envisaged for him from early on, and it is in this light that his matriculation at the University of Wittenberg on 1 September 1584 should be seen.
After leaving the gymnasium on 12 May 1589, he moved to Wittenberg to begin his studies. Supported by an electoral scholarship, he first completed a foundation course at the philosophical faculty. Teachers such as Paulus Oleander, Kaspar Straub, Peter Otto (died 1594) and Andreas Schato (1539–1603) are likely to have left a considerable impression on him there. After briefly continuing his studies at the University of Helmstedt in 1599, he returned to Wittenberg, where on 25 September 1599 he obtained the highest academic degree in philosophy, a master's degree in philosophy, also known as that of the seven liberal arts.
His leanings toward the natural sciences led him to switch to medicine. On 3 October 1600, under Schato, he obtained the degree of licentiate of medicine with the dissertation De apoplexia, and on 14 October 1600 he received his doctorate in medicine with the Oratio de contagione. Tandler put himself forward several times when the medical chairs were refilled in 1602 and 1603, yet his applications were rejected by the electoral house. To support the family he had had since 1600, he therefore habilitated at the philosophical faculty on 30 April 1605, became an adjunct on 1 May 1605, and on 10 October of the same year took over the professorship of lower mathematics. Although he served as an examiner at the philosophical faculty from 7 April 1606, this office was from the outset only a stepping stone for him.
For on 4 March 1607 he was admitted to the university's medical faculty, became professor of anatomy and botany, and in 1616 took over the second medical professorship. During his academic years at the Wittenberg university he also carried out administrative duties, serving as rector of the Wittenberg academy in the winter semesters of 1607 and 1613. Tandler fell ill with dropsy, developed an accompanying fever, and died; his body was buried in Wittenberg on 6 August 1617. Tandler made a name for himself in many scientific fields, such as metrology and obstetrics. Genealogically, it should be noted that on 19 October 1600 in Wittenberg he married Sybilla Strauch, the widow of Hieronymus Nymmann and daughter of the Wittenberg burgher Aegidius Strauch. From this marriage a daughter, Barbara Tandler (born 3 August 1601 in Wittenberg; died 12 March 1618 in Wittenberg), is known.
Selected works
(As respondent) De natura et curatione tussis. Meisner, Wittenberg 1595.
Disp. III ex Aphorismis Hippocratis de praeparatione, Wittenberg 1599
Disp. IX ex Aphorismis Hippocratis, Wittenberg 1600
De apoplexia, Wittenberg 1600
Disp. I—XII (De anima et corpore humano), Wittenberg 1601
Disp. physica medica de noctisurgio (resp. Horst), Wittenberg 1602, 1613
Disp. physicarum I—IX, Wittenberg 1604
Dissertationes physicarum enneados tertiae, Wittenberg 1605/06
Dissertationum meteorologicarum I—IX, Wittenberg 1606/07
De melancholia (resp. Schmilauer und Anomäus), Wittenberg 1608, 1613
De fascinationibus, Wittenberg 1613
Oratio de spectris, quae vigilantibus obveniunt [Prom. Schmilauer], Wittenberg 1608
Anatomes cultorum recensus et ad eadem invitatus, Wittenberg 1609
Democriti de natura hominis Epitome ad Hippocratem Coum, Wittenberg 1609
Diaskepseon cheirourgikon dekas (resp. Assverus Schmitner), Wittenberg 1610
De ischiade (resp. Samuel Hafenreffer), Wittenberg 1612
Dissertationes physiso medicae, Wittenberg 1613, 1629
Diaskepseon meteorologicon (IX Dispp.),Wittenberg 1613
De calculo renum et vesicae (resp. Valentin Emericus), Wittenberg 1613
De matricis praefocatione (resp. Krös, B. Hettenbach, Cademann), Wittenberg 1614
De anorexia ventriculi (resp. Franz Joel), Wittenberg 1615
De humoribus humani corporis (resp. Anton Kindler), Wittenberg 1616
De terra et ejus differentiis (resp. Wolfgang Sigismund Espich), Wittenberg 1617
Literature
Christian Gottlob Lorenz: Grimmenser-album: Verzeichniss sämmtlicher Schüler der königlichen Landesschule zu Grimma, Verlags-comptoirs, Grimma 1850
Walter Friedensburg: Geschichte der Universität Wittenberg. Max Niemeyer, Halle (Saale) 1917
Hans Theodor Koch: Die Wittenberger Medizinische Fakultät (1502–1652) – Ein biobibliographischer Überblick, in Stefan Oehmig: Medizin und Sozialwesen in Mitteldeutschland zur Reformationszeit, Evangelische Verlagsanstalt, Leipzig 2007, S. 319, ISBN 978-3-374-02437-7
August Hirsch: Biographisches Lexikon der hervorragenden Aerzte aller Zeiten und Völker, Band 5. Urban & Schwarzenberg, Leipzig – Wien 1887, S. 612
References
17th-century physicians
17th-century mathematicians
Academic staff of the Leucorea
German people
1571 births
1617 deaths
Men
Alumni of the Martin-Luther-Universität Halle-Wittenberg
"redpajama_set_name": "RedPajamaWikipedia"
} | 6,384 |
As Tiffany, the diva honed her skills in the company's developmental system, Florida Championship Wrestling. A shot on the main roster led to a prominent position as an onscreen authority figure on the ECW brand. The New Orleans native's stint as general manager gave her a large amount of microphone time.
"When I first started in WWE, I was in the ring a lot with Natalya," Terrell said. "She taught me a ton in the beginning. Fit Finlay was there. Those were people who were very instrumental with my training, along with Dr. Tom Prichard. There are definitely people I look up to. Tommy Dreamer is one of those people. He has kind of been there the entire time of my wrestling career. He is very honest and someone I trust.
From WWE to TNA, Terrell feels her improvement in the ring comes from having more matches under her belt. She also credits being more comfortable mentally and physically every time she laces up the boots. For the knockout, it comes down to believing in one's self. Terrell turned heads and really became perceived as a serious women's wrestler through her matches with Gail Kim.
"There is one word to describe the matches with Gail Kim: Magic," she said. "Gail is an incredible wrestler. She is good. You get in the ring with her and you have the ability to not just have a good match, but to create something that is memorable and magical. I could never imagine have imagined that was going to happen. When we were out there and wrestling, you could feel it. You could feel the energy of the match was different. The Slammiversary match and the ladder match, I think there is a lot of trust out there. We just threw everything out there.
After her run with WWE ended, Terrell began to pursue acting. She got her Screen Actors Guild card for her role in "The Campaign" with Will Ferrell and Zach Galifianakis. Along with film and television credits, the star has built a resume for her stunt work as well. She recently spent time on sets such as "Get Hard," "Daddy's Home" and even "Jurassic World." Terrell believes stunts and pro wrestling can go hand-in-hand.
Terrell took a break from both businesses to have her child Emerson a little over a year ago. Within four or five weeks, she found herself on the stunt job for "Get Hard." The TNA grappler returned to the ring not long after that.
"I knew that I wanted to continue wrestling," she said. "I wasn't done. I wanted to become a champion. This was my dream and my passion. I can't imagine being done with it just because I was a mother. I think as a mother I want my daughter to know you should go for your dreams. The amazing thing with TNA is I have the availability to be a mom at home the majority of the time and be with her. I travel and take her on the road with me.
"It's great. The travel schedule is pretty light. That has been a wonderful thing. I'm blessed that I have this opportunity because I don't think that many people get the chance to live their dream and it doesn't take away from being a mom.
Moving forward standing atop the knockout mountain, she hopes one day have another run-in with a familiar face from her past.
- "TKO Night of Knockouts" is headlined by Taryn Terrell defending the knockouts championship against Kong 8 p.m. EST Friday, April 24 on Destination America. The sole men's match on the special edition of Impact Wrestling features TNA champion Kurt Angle taking on Eric Young.
Visit www.ImpactWrestling.Com for details on everything TNA, including their upcoming free Impact Wrestling tapings May 8-11 at Universal Studios Orlando. | {
"redpajama_set_name": "RedPajamaC4"
} | 5,138 |
Melanie Ann Sykes (born 7 August 1970) is an English television and radio presenter. She is best known for co-hosting Today with Des and Mel with Des O'Connor and Let's Do Lunch with Gino D'Acampo. She also co-hosted Going Out with Alan Carr on BBC Radio 2 with Alan Carr from May 2010 until it ended in March 2012, and returned with him for Alan and Mel's Summer Escape from 2017. Sykes currently co-presents Shop Well For Less alongside Joanna Page on BBC One.
Early life
Sykes was born in 1970 at Ashton-under-Lyne to an English father and a Catholic Anglo-Indian mother. She attended Mossley Hollins High School and studied A-level Religious Studies at Ashton Sixth Form College. Sykes was a member of the Ashtonian Brass Band, along with her father, mother and two sisters, playing the baritone horn.
Career
In the mid-1990s Sykes first came to public attention as the face of the Boddingtons Bitter advertisements.
Television
Sykes' TV presenting career started by hosting Sky One's Real TV UK series in the mid-90s. After first reporting for The Big Breakfast, Sykes has continued her television career, including presenting stints on I'm a Celebrity...Get Me Out of Here! and as presenter of EastEnders Revealed. In 1999, Sykes presented Melanie Sykes' Southall Stories, a documentary for BBC Two on Asian culture in Great Britain. She has also hosted a variety of awards ceremonies, including Miss World, the BAFTA Awards and the Q Awards.
Sykes' television career stalled for a period, although she made a successful comeback as host of Today with Des and Mel with Des O'Connor in 2002. On 12 May 2006, ITV announced that the show would be one of a number to be axed in a "painful, but utterly necessary" move.
Sykes' other work for ITV has included hosting shows The Vault (2003–2004), Celebrities Under Pressure (2003–2004) and The British Soap Awards (2003).
Sykes appeared as a guest panellist on Loose Women in October and November 2005 and later returned as a guest anchor in October 2008 and May 2009.
From 2006 to 2009, Sykes regularly filled in for Paul O'Grady as presenter of The Paul O'Grady Show when O'Grady was unable to appear. She presented the show on eleven occasions.
Sykes was the presenter of the daytime series Gene Detectives for BBC One. In 2008, Sykes was a judge on 'The Sofa Factor', an item for GMTV, where viewers sent in short clips and the winner got to present TV Pick of the Day and win a trip to Las Vegas.
In 2010, Sykes guest presented five episodes of The 5 O'Clock Show with Denise van Outen on Channel 4.
In August 2011, Sykes returned to daytime television, co-hosting Let's Do Lunch with Gino & Mel and Let's Do Christmas with Gino & Mel with Gino D'Acampo on ITV. The show was cancelled in 2014.
On 13 September 2011, Sykes co-hosted the three-part series Missing Millions with Paul Heiney on ITV.
In February 2014, BT Sport announced that Sykes would become part of their MotoGP team. Due to other work commitments, she left the channel in May 2014.
In 2014, Sykes took part in the fourteenth series of I'm a Celebrity...Get Me Out of Here! where she finished in third place, behind runner-up Jake Quickenden and winner Carl Fogarty.
In 2015, Sykes co-presented Humble Pie, a cookery series for Watch, alongside Marco Pierre White. On 4 April 2017, Sykes was confirmed as the new voice of Blind Date, taking over the role most famously held by Graham Skidmore in the original series. The show is broadcast on Saturday nights on Channel 5.
In 2017, Sykes appeared on a "celebrity" charity edition of TV quiz show, The Chase. She departed early and did not make the final round.
In March 2018, Sykes won Star Baker in Channel 4's The Great Stand Up to Cancer Bake Off.
In 2021 Sykes alongside Joanna Page were the new presenters of BBC One's consumer series Shop Well for Less.
In August 2021 it was announced that Sykes would be a competitor on BBC's Celebrity MasterChef. She progressed to the semi-final where she won Star Chef on the fourth day.
Radio
Sykes presented The A List, a national radio chart show based on total sales, produced by Unique the Production Company at the studios of London's Heart 106.2. She left the show towards the end of 2006 and was replaced by Gail Porter.
Sykes was heard on BBC Radio 2 with Aled Jones sitting in for Steve Wright in the Afternoon on 22, 23 and 24 December 2008. In May 2010, she began co-hosting the BBC Radio 2 show Going Out with Alan Carr after Emma Forbes' resignation. Sykes continued on the show until its end in March 2012. She returned on Boxing Day 2015 with Alan Carr for a one-off show on BBC Radio 2. In January/February 2017 she and Carr again returned to Radio 2 to sit in for Paul O'Grady for 4 weeks.
Sykes and Carr presented their own Saturday morning show called Alan and Mel's Summer Escape on BBC Radio 2 for 10 weeks during the summer of 2017. She presented a show with Carr on BBC Radio 2 on Christmas Eve and another on New Year's Eve in 2017. Sykes and Carr returned for further specials for Easter, for the Royal Wedding, and for further runs of Alan and Mel's Summer Escape in 2018, 2019 and 2020.
For a time, Sykes presented her own radio show on Capital FM.
Other work
Sykes has been the face of Head & Shoulders shampoo in Britain, appearing in print and television advertising, and of the high street chain Matalan. She has also appeared in advertisements for Morrisons, Wynsors World of Shoes and Churchill Insurance.
In 2006, the book Blooming Beautiful: My Plan for Looking Great, Being Healthy and Surviving Hormonal Havoc, Throughout Pregnancy and as a New Mum, co-written by Sykes with Hilary Boyd, was published by Michael Joseph.
Sykes posed naked for the December 2011 edition of men's magazine Esquire.
Sykes has her own line of underwear and lingerie with high street chain and retailer Debenhams.
In June 2016, Sykes created FRANK Magazine, "the magazine for open-minded women of all ages across the world".
Personal life
Sykes married actor Daniel Caltagirone in January 2001. The couple have two sons, Roman born in 2002 and Valentino born in 2004. In July 2008, it was reported that the couple had separated, with Sykes having "grown apart" from Caltagirone, who had moved out of their Hampstead, London home. They divorced in June 2009. Sykes married her boyfriend Jack Cockings on 18 May 2013 at Sherborne Castle in Dorset.
In November 2013, Sykes was arrested and cautioned by police over an alleged common assault against Cockings. Soon after the caution, after less than one year of marriage, Sykes began the process of filing for divorce. The caution was later revoked and deleted from police records, and Sykes has since campaigned for better treatment of women facing domestic violence, including from partners who make spurious allegations of assault so as to legally and financially harm their victims. She has said that the police handled her case badly and misled her into accepting a caution as the quickest way out of her situation.
In November 2021, Sykes received an autism diagnosis, which she said gave her a "deeper understanding of myself, my life, and the things I have endured". Her younger son Tino is also autistic, having been diagnosed at three years old.
Filmography
Television
References
External links
Living people
1970 births
Anglo-Indian people
BBC radio presenters
British people of English descent
English female models
English game show hosts
English television presenters
English people of Indian descent
I'm a Celebrity...Get Me Out of Here! (British TV series) participants
People from Ashton-under-Lyne
Television personalities from Lancashire
British women radio presenters
People on the autism spectrum | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 621 |
<?xml version="1.0" encoding="UTF-8"?>
<Context antiJARLocking="true" path="/HibernateJNDITestApp">
<Resource name="jdbc/MyLocalDB"
global="jdbc/MyLocalDB"
auth="Container"
type="javax.sql.DataSource"
driverClassName="org.h2.Driver"
url="jdbc:h2:mem:testdbforjndi;DB_CLOSE_ON_EXIT=TRUE;LOCK_TIMEOUT=60000"
username="sa"
password=""
maxActive="100"
maxIdle="20"
minIdle="5"
maxWait="10000"/>
</Context> | {
"redpajama_set_name": "RedPajamaGithub"
} | 5,748 |
Gilt bronze. 3/4" height with raised border surround and quartered nubs, the significance of which is now lost. Fine full rounded section. Made by the open face mold technique. The heart stood for bravery, fortitude, loyalty, integrity, all attributes of the Viking warrior. It is abundantly referenced in Viking literature. Professionally refurbished with the gold overlay restored for modern wear. Gift boxed with a certificate of authenticity. | {
"redpajama_set_name": "RedPajamaC4"
} | 3,942 |
Employability programme launches for young people in Thames Valley
10:27AM, Monday 07 December 2020
An online employability programme for young people in the Thames Valley has been launched this week.
Starting today (Monday), people aged 16 to 25 will be able to take part in the five-day employability skills programme, being run by the charity Adviza.
The programme, called Reach Up, will coach young people employability and work skills through a series of Zoom workshops taking place between 10.30am and 1.30pm, which will be delivered by UK Youth, Adviza, Coca-Cola European Partners and the Healthy Living Centre in Aylesbury.
Participants will also have the opportunity to network during the programme.
At the end, participants will receive a £50 shopping voucher and a £100 bursary.
Adviza's Lee Teideman, project manager, said: "Reach Up is a short programme that can have long-lasting impact.
"It provides young people—especially those not currently in work or education—with an opportunity to engage with organisations that can inspire their next step in education, employment and training.
"It can be a stepping stone to a better career or future."
If you would like to participate in the programme or refer a young person, email allanpotter@adviza.org.uk | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 5,234 |
Over the years, LG has really impressed us with their latest and greatest and this year their "OLED Canyon" was pretty impressive indeed. Last year they had a full tunnel of OLED panels, and this year they had a full canyon to walk through that showcased their ability to bend their panels both concave (traditional) and convex (outward). This looked very impressive and shows that LG is thinking about practical applications for large displays as well as showing off some pretty tight curves that they can do with their displays as well. The results are almost breathtaking and while some may argue that VR gives you the same effect, the fact is that this canyon experience is a bit better than 4K - closer to 800K worth of craziness makes it incredible.
LG was also showing off their very nice Thunderbolt 3 displays that are capable of daisy-chaining multiple 4K displays. The 32UK950 pictured below is capable of the HDR600 specification - making it far superior to many companies' HDR10 specs that just meet minimum requirements. While the image below looks pretty vivid and incredible, you really have to see it first-hand to appreciate the beauty that is the 32UK950 series.
LG also showcased their gaming displays capable of 240Hz, but without the full HDR600 specification. LG does support both Freesync and G-Sync in their gaming displays.
There was much more to see and take in at LG, but these are the main highlights I think you'll enjoy! | {
"redpajama_set_name": "RedPajamaC4"
} | 5,173 |
Q: urllib.urlretrieve encoding is not kept
I'm using Python 3.4.
When I use urllib.request.urlretrieve(link, filename="file.html") on a utf-8 file, the resulting file.html is not properly encoded. How do I make sure the file is encoded using utf-8?
How to implement the .decode(utf-8) in this case?
EDIT
This is the original part of page:
« Écoute, mon peuple, je parle ; Moi, Dieu, je suis ton Dieu ! Je ne t'accuse pas pour tes sacrifices ; tes holocaustes sont toujours devant moi. « Je ne prendrai pas un seul taureau de ton domaine, pas un bélier de tes enclos. Tout le gibier des forêts m'appartient et le bétail des hauts pâturages. « Si j'ai faim, irai-je te le dire ? Le monde et sa richesse m'appartiennent. Vais-je manger la chair des taureaux et boire le sang des béliers ? « Qu'as-tu à réciter mes lois, à garder mon alliance à la bouche, toi qui n'aimes pas les reproches et rejettes loin de toi mes paroles ? »
And this is what I get in the saved file:
� �coute, mon peuple, je parle ;�Moi, Dieu, je suis ton Dieu !�Je ne t'accuse pas pour tes sacrifices ; tes holocaustes sont toujours devant moi.�� Je ne prendrai pas un seul taureau de ton domaine, pas un b�lier de tes enclos.�Tout le gibier des for�ts m'appartient et le b�tail des hauts p�turages. � Si j'ai faim, irai-je te le dire ? Le monde et sa richesse m'appartiennent.�Vais-je manger la chair des taureaux et boire le sang des b�liers ?�� Qu'as-tu � r�citer mes lois,�� garder mon alliance � la bouche,�toi qui n'aimes pas les reproches et rejettes loin de toi mes paroles ?��
I noticed that in certain parts of the page accented characters are not really utf-8 encoded but the browser shows it properly. For example instead of É there is É and when the file is downloaded this seems to cause problems.
A: You can unescape the HTML escape sequences in the file line by line using the method shown here.
import urllib.request
import html.parser

h = html.parser.HTMLParser()
with urllib.request.urlopen(link) as fin, open(
        "file.html", 'w', encoding='utf-8') as fout:
    for line in fin:
        fout.write(h.unescape(line.decode('utf-8')))
A: I advise using BeautifulSoup to handle this for you: it converts the loaded document implicitly to Unicode.
from bs4 import BeautifulSoup

markup = "<h1>Sacr\xc3\xa9 bleu!</h1>"
soup = BeautifulSoup(markup)
soup.h1
# <h1>Sacré bleu!</h1>
soup.h1.string
# u'Sacr\xe9 bleu!'
BeautifulSoup documentation: here
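As a side note, a minimal standard-library sketch of the same idea: `html.unescape` (available as a module-level function since Python 3.4) decodes HTML character references such as `&Eacute;` directly, without needing an `HTMLParser` instance.

```python
import html

# html.unescape converts named and numeric HTML character references
# back to the actual Unicode characters.
print(html.unescape("&Eacute;coute"))  # Écoute
print(html.unescape("&#201;coute"))   # Écoute
```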
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 1,741 |
We will be partnering with Gorham Parks & Recreation to offer a rugby program through Challenger Sports. Challenger Sports is proud to present Rugby clinics to communities across the United States and Canada. Our Rookie Rugby programs are non-contact. Players practice and play a form of flag rugby. It is a fun, safe, team game that develops a range of ball handling, running and evasion skills. Participants learn the importance of teamwork and respect for opponents, coaches and referees, which are key elements of all athletic endeavors. Our Rookie Rugby programs are co-ed. | {
"redpajama_set_name": "RedPajamaC4"
} | 5,487 |
\section{Introduction}
The theoretical results of atomic parity non-conservation (PNC) when
combined with the experimental results is an important probe of
physics beyond the standard model of particle physics \cite{khriplovich-91}.
There are two sources of PNC in atoms, nuclear spin-independent (NSI) and
nuclear spin-dependent (NSD). The NSI-PNC is well studied and experimentally
observed in several atoms. The most precise measurement till date is in the
case of atomic Cs \cite{wood-97}. The same experiment also indicated a
signature of NSD-PNC effects. The most dominant source of which is the nuclear
anapole moment (NAM), a parity odd nuclear electromagnetic moment arising from
parity violating interaction within the nucleus
\cite{flambaum-80,flambaum-84,zeldovich-58}. However, there are two other
contributions to NSD-PNC, these are the NSD electron-nucleus $Z$ exchange
interaction and the combined effect of hyperfine interaction and NSI
electron-nucleus $Z$ exchange interaction.
The parameters describing nucleon-nucleon coupling, effect of NAM is subsumed
into it, extracted from the Cs PNC experiment do not concur with the nuclear
data \cite{haxton-01}. This certainly calls for further investigation
of the NSD-PNC effects in other atomic systems as well. An example of
an alternative experiment is the proposal to measure the PNC in Ba$^+$ ion,
suggested by Fortson \cite{fortson-93}, which is in progress at
Seattle \cite{sherman-05,sherman-08}. This experiment could lead to an
unambiguous observation of NAM in the $6s\;^2S_{1/2}-5d\;^2D_{5/2}$
transition, as the NSI-PNC alone does not contribute to this transition.
It is important to note that the major difficulty in a clear
observation of NAM is the large NSI signal, which overwhelms the NSD
signature. The Ra$^+$ ion has also been suggested and is considered to be an
important candidate for the PNC measurement \cite{wansbeek-08,versolato-10}.
Apart from the Ba$^+$ and Ra$^+$ ions, which are one-valence systems, the other
promising candidate for PNC measurement, the NAM in particular, is atomic
Yb. An enhanced PNC effect has already been reported
\cite{tsigutkin-09,tsigutkin-10} in neutral Yb,
the $6s^2\;^1S_0-6s5d\;^3D_2$ transition, and further refinement
of the experiment is in progress at Berkeley. The
$6s\;^2S_{1/2}-5d\;^2D_{3/2}$ transition in Yb$^+$, has also been suggested
to reveal the NAM signature and is being investigated at Los Alamos
\cite{torgerson-10,das-99}.
The atomic theory results using reliable and accurate many-body methods are
key to estimating the expected values of the PNC transition amplitudes and
extracting NAM. For the theoretical calculations, the relativistic
coupled-cluster (RCC) theory \cite{coester-58,coester-60} can be of great
significance, as it is one of the most reliable many-body theories
for incorporating electron correlation in atomic calculations.
The RCC has been used extensively in atomic structure calculations
\cite{eliav-96,pal-07,sahoo-09,nataraj-08,wansbeek-08,pal-09,porsev-10}
of properties like transition energies, hyperfine structure constants,
electromagnetic transition amplitudes, intrinsic electric dipole moment and
PNC in atoms. Apart from atomic physics, it has also been used with great
success in nuclear \cite{hagen-08}, molecular \cite{isaev-04} and the
condensed matter \cite{bishop-09} physics.
In this work, we employ perturbed relativistic coupled-cluster (PRCC) theory
to calculate NSI and NSD-PNC amplitudes of the
$[4f^{14}]6s\;^2S_{1/2}-[4f^{14}]5d\;^2D_{3/2}$ transition in the case of
$^{171}$Yb$^+$ ion. This is timely, as there are only a few previous theoretical
results: Sahoo {\em et al} \cite{sahoo-11} and Dzuba {\em et al} \cite{dzuba-11}
for NSI-PNC, and Dzuba {\em et al} \cite{dzuba-11} and
Porsev {\em et al} \cite{porsev-12} for NSD-PNC.
The NSI-PNC results from Ref. \cite{sahoo-11} calculated using RCC method
differ substantially from Ref. \cite{dzuba-11} where the
correlation-potential-method with sum-over-state approach is employed to
calculate NSI and NSD-PNC. The NSD-PNC results reported in
Ref. \cite{porsev-12} are based on RPA and, in general, is in agreement with
the results reported in Ref. \cite{dzuba-11}. However, the latter is based on
the sum-over-state approach, at the level of PNC matrix elements. The
PRCC method \cite{chattopadhyay-12,latha-09,mani-11} employed in present work
is different from the sum-over-states approach. It accounts for all the singly
and doubly excited intermediate states. There are two sets of the cluster
amplitudes in the PRCC, and the summation over states in the first order
time-independent perturbation is incorporated in one set of the cluster
amplitudes.
The paper is organized as follows. In Section. \ref{method}, we provide
a brief description of the theoretical methods. The unperturbed RCC equations
for closed-shell and one-valence systems are given to serve as an easy
reference. The perturbed RCC is then discussed in detail and the PRCC equations
are derived. The expression for E1PNC using PRCC wave function and some
leading order diagrams are also discussed. Results from the work and
uncertainty estimates are presented and discussed in
Section. \ref{results}.
\section{Theoretical methods}
\label{method}
In the absence of the PNC interaction, the atomic states are of definite parity,
and we consider these as the eigenstates of the no-virtual-pair Dirac-Coulomb
Hamiltonian \cite{sucher-80}
\begin{eqnarray}
H^{\rm DC}& =& \Lambda _+ \sum_{i=1}^N\left [c\bm{\alpha}_i\cdot \mathbf{p}_i
+ (\beta_i-1)c^2 - V_N(r_i)\right ] \nonumber \\
& & +\sum_{i<j}\frac{1}{r_{ij}} \Lambda_+,
\label{dchamil}
\end{eqnarray}
where $\bm{\alpha}_i$ and $\beta$ are the Dirac matrices, $\mathbf{p}$ is the
linear momentum, $V_N(r)$ is the nuclear Coulomb potential and the last term
is the electron-electron Coulomb interaction. The operator $\Lambda_+$
projects on the positive energy eigenstates to avoid the negative energy
continuum solutions. The Hamiltonian $H^{\rm DC}$ satisfies the
eigenvalue equation
\begin{equation}
H^{\rm DC}|\Psi_v \rangle = E_v |\Psi_v \rangle,
\label{hdc_eqn}
\end{equation}
where $|\Psi_v \rangle$ is the exact atomic state of the one-valence system
and $E_v$ is the corresponding energy. Hereafter, for compact notation, we
use $H$ to represent $H^{\rm DC}$. In the present work, we use
RCC theory with the single and doubles (CCSD) excitation approximation to solve
Eq. (\ref{hdc_eqn}). In RCC, $|\Psi_v \rangle$ is expressed in terms of
the closed-shell and one-valence cluster operators, $T^{(0)}$ and $S^{(0)}$
respectively, as
\begin{equation}
|\Psi_v\rangle = e^{T^{(0)}} \left [ 1 + S^{(0)} \right ] |\Phi_v\rangle,
\label{psi_unptrb}
\end{equation}
where superscript $(0)$ represents the unperturbed RCC operators. The
one-valence Dirac-Fock (DF) reference state $|\Phi_v\rangle$ is
obtained by adding an electron to the closed-shell reference state,
$|\Phi_v \rangle = a^\dagger_v|\Phi_0\rangle$.
In the CCSD approximation, $T^{(0)} = T^{(0)}_1 + T^{(0)}_2$
and $S^{(0)} = S^{(0)}_1 + S^{(0)}_2$. In the second-quantized
representation,
\begin{subequations}
\begin{eqnarray}
T_1 &=& \sum_{a, p}t_a^p a_p^{\dagger}a_a, \text{ and }
T_2 = \frac{1}{2!}\sum_{a, b, p, q}t_{ab}^{pq}
a_p^{\dagger}a_q^{\dagger}a_ba_a, \\
S_1 &=& \sum_{p}s_v^p a_p^{\dagger}a_v, \text{ and }
S_2 = \sum_{a, p, q}s_{va}^{pq}
a_p^{\dagger}a_q^{\dagger}a_aa_v.
\end{eqnarray}
\end{subequations}
Here, $t_{\cdots}^{\cdots}$ and $s_{\cdots}^{\cdots}$ are the cluster
amplitudes. The indices $abc\ldots$ ($pqr\ldots$) represent core (virtual)
states and $vwx\ldots$ represent valence states. The operators $T_1$ ($S_1$)
and $T_2$ ($S_2$) generate single and double replacements when operating on the
closed-shell (open-shell) reference states. The diagrammatic representation of
these operators is shown in Fig. \ref{ts_fig}.
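As a rough illustrative aside (not part of the formalism; the orbital counts below are hypothetical, not those of the Yb$^+$ basis), the number of independent singles and doubles amplitudes implied by these operators can be estimated as follows.

```python
# Illustrative count of CCSD cluster amplitudes t_a^p and t_ab^pq
# for n_c core (occupied) and n_v virtual orbitals.

def n_singles(n_c, n_v):
    # one amplitude t_a^p per (core, virtual) pair
    return n_c * n_v

def n_doubles(n_c, n_v):
    # antisymmetry in (a,b) and (p,q) leaves C(n_c,2) * C(n_v,2) amplitudes
    return (n_c * (n_c - 1) // 2) * (n_v * (n_v - 1) // 2)

print(n_singles(10, 100))  # 1000
print(n_doubles(10, 100))  # 222750
```

The rapid growth of the doubles space is what makes the CCSD approximation the practical truncation point for heavy systems.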
\begin{figure}[h]
\begin{center}
\includegraphics[width = 7.0 cm]{ts1v.pdf}
\caption{Diagrammatic representation of the single and double excitation
unperturbed cluster operators in closed shell and one-valence
sectors.}
\label{ts_fig}
\end{center}
\end{figure}
The open-shell cluster operators are then the solutions of nonlinear
equations \cite{mani-10}
\begin{subequations}
\label{s0_eqn}
\begin{eqnarray}
\langle \Phi_v^p|\bar H_N \! +\! \{\contraction[0.5ex]
{\bar}{H}{_N}{S} \bar H_N S^{(0)}\} |\Phi_v\rangle
&=&E_v^{\rm att}\langle\Phi_v^p|S^{(0)}_1|\Phi_v\rangle ,
\label{s01_eqn} \\
\langle \Phi_{va}^{pq}|\bar H_N +\{\contraction[0.5ex]
{\bar}{H}{_N}{S}\bar H_N S^{(0)}\} |\Phi_v\rangle
&=& E_v^{\rm att}\langle\Phi_{va}^{pq}|S^{(0)}_2|\Phi_v\rangle,
\label{s02_eqn}
\end{eqnarray}
\end{subequations}
where $\bar H_{\rm N}=e^{-T^{(0)}}H_{\rm N}e^{T^{(0)}} $ is the
similarity transformed Hamiltonian,
$H_{\rm N} = H -\langle\Phi_0|H|\Phi_0\rangle$ is the normal ordered
Hamiltonian and $E_v^{\rm att}$ is the attachment energy of the
valence electron. The operators $T^{(0)}$ are the solutions of a similar
set of nonlinear coupled equations
\begin{subequations}
\label{t0_eqn}
\begin{eqnarray}
\langle\Phi^p_a|\bar H_{\rm N}|\Phi_0\rangle = 0,
\label{t01_eqn} \\
\langle\Phi^{pq}_{ab}|\bar H_{\rm N}|\Phi_0\rangle = 0.
\label{t02_eqn}
\end{eqnarray}
\end{subequations}
The details on the derivation of these equations are given in
our previous work \cite{mani-09}.
In the presence of the PNC interaction, atomic states mix with opposite-parity
states, and the total atomic Hamiltonian is
\begin{equation}
H_{\rm a} = H^{\rm DC} + \lambda H_{\rm PNC}.
\label{total_H}
\end{equation}
Here, $\lambda$ is the perturbation parameter and $H_{\rm PNC}$
represents a general PNC interaction Hamiltonian. It has two components,
the NSI and the NSD interaction. These are
\begin{subequations}
\begin{eqnarray}
H_{\rm PNC}^{\rm NSI}&=&\frac{G_{\rm F}Q_W}{2 \sqrt{2}} \sum_i
\gamma_5\rho_{\rm{N}}(r_i),
\label{hpnc_nsi} \\
H_{\rm PNC}^{\rm NSD}&=&\frac{G_{\rm F}\mu'_W}{\sqrt{2} I}\sum_i
\bm{\alpha}_i\cdot \mathbf{I}\rho_{\rm{N}}(r_i),
\label{hpnc_nsd}
\end{eqnarray}
\end{subequations}
where $G_F$ ($=2.22\times10^{-14}$ a.u.) is the Fermi coupling
constant, $Q_W$ and $\mu'_W$ are respectively the weak nuclear charge and
the weak nuclear moment of the nucleus expressed in terms of neutron and
proton numbers, $\bm{\alpha}$ and $\gamma_5$ are the Dirac matrices,
$\rho_{\rm N}(r)$ is the normalized nuclear density and $I$ is the
nuclear spin. Compared to the NSI-PNC, the NSD-PNC requires two important
considerations because of the nuclear spin operator $\mathbf{I}$. First, the
cluster operators in the electron space are rank one operators, and second,
the atomic states in the one-valence sector are eigenstates of total angular
momentum $\mathbf{F} = \mathbf{I} + \mathbf{J}$.
\begin{figure}[h]
\begin{center}
\includegraphics[width = 7.0 cm]{pts1v.pdf}
\caption{Diagrammatic representation of the single and double excitation
NSD-perturbed cluster operators in closed-shell and one-valence
sectors. The extra line in the $T_2^{(1)} $ and $S_2^{(1)}$ is to
indicate the multipole structure of the operators.}
\label{pts_nsd_fig}
\end{center}
\end{figure}
Similar to the unperturbed eigenvalue equation, Eq. (\ref{hdc_eqn}),
we may write the perturbed eigenvalue equation, satisfied by the total
atomic Hamiltonian, as
\begin{equation}
H_{\rm a} |\widetilde{\Psi}_v \rangle =
\widetilde{E}_v |\widetilde{\Psi}_v \rangle,
\label{ht_eqn}
\end{equation}
where $|\widetilde{\Psi}_v \rangle$ is the perturbed atomic state
and $\widetilde{E}_v$ is the corresponding energy. To the first-order
in $\lambda $,
$|\widetilde{\Psi}_v \rangle = |\Psi_v\rangle +
\lambda |\bar{\Psi}^{1}_v\rangle$ and
$\widetilde{E}_v = E_v + \lambda E^{1}_v$, where the bar in
$|\bar{\Psi}^{1}_v\rangle$ denotes that its parity is opposite to that of $|\Psi_v\rangle$.
From here on, to derive the PRCC equations we consider the NSD-PNC interaction
Hamiltonian. Using Eq. (\ref{hpnc_nsd}), we can rewrite Eq. (\ref{ht_eqn}) as
\begin{equation}
\left ( H^{\rm DC} + \lambda {\mathbf{H}}_{\rm elec}^{\rm NSD}\cdot\mathbf{I} \right) |
\widetilde{\Psi}_v \rangle = E_v| \widetilde{\Psi}_v \rangle.
\label{ht_elc_eqn}
\end{equation}
Here, $\mathbf{H}_{\rm elec}^{\rm NSD} =( G_{\rm F}\mu'_W/\sqrt{2})\sum_i
\bm{\alpha}_i\rho_{\rm{N}}(r_i)$ is the electronic part of $H_{\rm PNC}^{\rm NSD}$.
While writing the above equation we have used
$E^1_v = \langle \Psi_v|H_{\rm PNC}^{\rm NSD}|\Psi_v\rangle = 0$,
since $H_{\rm PNC}^{\rm NSD}$ is an odd-parity operator and connects
opposite-parity states only. In the PRCC theory, the perturbed wave function
is expressed as
\begin{equation}
| \widetilde{\Psi}_v \rangle = e^{T^{(0)}}\left[ 1
+ \lambda \mathbf{T}^{(1)} \cdot\mathbf{I} \right] \left[ 1
+ S^{(0)} +\lambda \mathbf{S}^{(1)} \cdot\mathbf{I} \right] |\Phi_v \rangle,
\label{psi_ptrb}
\end{equation}
where $\mathbf{T}^{(1)}$ and $\mathbf{S}^{(1)}$ are the closed-shell and one-valence PRCC operators,
respectively. The superscript $(1)$ is used to indicate the perturbation.
The diagrammatic representation of these cluster operators are shown in
Fig. \ref{pts_nsd_fig}.
Using Eq. (\ref{psi_ptrb}) in Eq. (\ref{ht_elc_eqn}), we can rewrite
the eigenvalue equation as
\begin{eqnarray}
&&\left( H + \lambda{\mathbf{H}}_{\rm elec}^{\rm NSD} \cdot\mathbf{I} \right)
e^{T^{(0)}} \left[ 1 + \lambda \mathbf{T}^{(1)} \cdot\mathbf{I} \right]
\left[ 1 + S^{(0)} \right . \nonumber \\
&& \left . + \lambda\mathbf{S}^{(1)}\cdot\mathbf{I} \right] |\Phi_v \rangle
= E_v e^{T^{(0)}} \left[ 1 + \lambda \mathbf{T}^{(1)}\cdot\mathbf{I} \right]
\left[ 1 + S^{(0)} \right . \nonumber \\
&& \left. + \lambda \mathbf{S}^{(1)}\cdot\mathbf{I} \right] |\Phi_v \rangle.
\end{eqnarray}
To derive the PRCC equations, we operate on the above equation with $e^{-T^{(0)}}$
and retain the terms linear in $\lambda$. In addition, for further
simplification, we use the normal-ordered form of the Hamiltonian
$H_{\rm N} = H - \langle\Phi_v|H|\Phi_v\rangle$. After this sequence of
operations, the eigenvalue equation is modified to
\begin{eqnarray}
\left[ \bar H_{\rm N}\mathbf{S}^{(1)} + \bar H_{\rm N}\mathbf{T}^{(1)} ( 1 + S^{(0)} ) +
\bar {\mathbf{H}}_{\rm elec}^{\rm NSD} ( 1 + S^{(0)} ) \right]
|\Phi_v \rangle \nonumber \\
=\left[ \Delta E_v \mathbf{S}^{(1)} + \Delta E_v \mathbf{T}^{(1)} ( 1 + S^{(0)} ) \right]|\Phi_v
\rangle,
\label{deltae1v}
\end{eqnarray}
where $\Delta E_v = E_v - \langle\Phi_v|H|\Phi_v\rangle$ is the correlation
energy of the one-valence system. Like $\bar{H}_{\rm N}$ introduced earlier,
$\bar {\mathbf{H}}_{\rm elec}^{\rm NSD}
=e^{-T^{(0)}}\mathbf{H}_{\rm elec}^{\rm NSD}e^{T^{(0)}}$ is the similarity
transformed NSD-PNC interaction Hamiltonian in the electronic space.
The PRCC equations of $\mathbf{S}^{(1)}$ can now be derived by projecting
Eq. (\ref{deltae1v}) with the excited determinants $\langle\Phi^p_v|$ and
$\langle\Phi^{pq}_{va}|$ as
\begin{widetext}
\begin{subequations}
\begin{eqnarray}
\langle \Phi^p_v |\{ \contraction[0.5ex]{}{H}{_{\rm N}}{S}\bar{H}_{\rm N}
\mathbf{S}^{(1)} \} + \{ \contraction{}{H}{_{\rm N}}{S}\bar{H}_{\rm N}
\mathbf{T}^{(1)} \} + \{ \contraction[0.5ex]{}{H}{_{\rm N}}{T}
\contraction[0.8ex]{}{V}{_{\rm N}T^{(1)}}{S}\bar{H}_{\rm N}
\mathbf{T}^{(1)}S^{(0)}\} + \bar{\mathbf{H}}_{\rm elec}^{\rm NSD}
+ \{ \contraction[0.5ex]{}{H}{_{\rm elec}^{\rm NSD}}{S}
\bar{\mathbf{H}}_{\rm elec}^{\rm NSD}{S}^{(0)} \}|\Phi_v \rangle &=&
E_v^{\rm att} \langle \Phi^p_v | \mathbf{S}^{(1)}_1|\Phi_v \rangle, \\
\langle \Phi^{pq}_{vb}|\{ \contraction[0.5ex]{}{H}{_{\rm N}}{S}\bar{H}_{\rm N}
\mathbf{S}^{(1)}\}+\{ \contraction[0.5ex]{}{H}{_{\rm N}}{S}\bar{H}_{\rm N}
\mathbf{T}^{(1)} \} + \{ \contraction[0.5ex]{}{H}{_{\rm N}}{T}
\contraction[0.8ex]{}{V}{_{\rm N}T^{(1)}}{S}\bar{H}_{\rm N}
\mathbf{T}^{(1)}S^{(0)}\} + \bar{\mathbf{H}}_{\rm elec}^{\rm NSD}
+ \{ \contraction[0.5ex]{}{H}{_{\rm elec}^{\rm NSD}}{S}
\bar{\mathbf{H}}_{\rm elec}^{\rm NSD}{S}^{(0)} \}|\Phi_v \rangle &=&
E_v^{\rm att} \langle \Phi^{pq}_{vb} | \mathbf{S}^{(1)}_2|\Phi_v \rangle.
\label{ccsptrb1v2}
\end{eqnarray}
\label{prcc_eqn}
\end{subequations}
\end{widetext}
While deriving the equations we have used the relations
$ \langle \Phi^p_v | \mathbf{T}^{(1)} |\Phi_v \rangle = 0$ and
$\langle \Phi^p_v| \mathbf{T}^{(1)} S| \Phi_v \rangle = 0$. These follow since $\mathbf{T}^{(1)}$ is an
operator of the closed-shell sector and does not contribute to the PRCC equations of
$\mathbf{S}^{(1)}_1$ and $\mathbf{S}^{(1)}_2$. The closed-shell operators $\mathbf{T}^{(1)}$ are the solutions
of a similar set of coupled equations \cite{mani-09}
\begin{subequations}
\label{pcceq}
\begin{eqnarray}
\langle \Phi^p_a |\{ \contraction{}{H}{_{\rm N}}{T}
\bar{\mathbf{H}}_{\rm N}\mathbf{T}^{(1)} \} |\Phi_0\rangle &=&
-\langle \Phi^p_a | \bar {\mathbf{H}}_{\rm elec}^{\rm NSD}
- \Delta E_0 \mathbf{T}^{(1)} |\Phi_0 \rangle, \;\;\;\;\;\;\;\;
\label{pcceq1} \\
\langle \Phi^{pq}_{ab} | \{\contraction{}{H}{_{\rm N}}{T}
\bar{\mathbf{H}}_{\rm N}\mathbf{T}^{(1)} \} |\Phi_0 \rangle &=&
-\langle \Phi^{pq}_{ab} | \bar {\mathbf{H}}_{\rm elec}^{\rm NSD}
- \Delta E_0 \mathbf{T}^{(1)} |\Phi_0 \rangle .
\label{pcceq2}
\end{eqnarray}
\end{subequations}
These equations can be derived from the closed-shell perturbed eigenvalue
equation. We can also derive a similar set of PRCC equations for the NSI-PNC
interaction Hamiltonian. One major difference is that the cluster operators are
then rank-zero operators.
After solving the RCC and PRCC equations, we can use the atomic states
for the property calculations. The RCC expressions and the diagrams
contributing to the hyperfine structure (HFS) constants and the E1 transition
amplitudes are derived and discussed in our previous work \cite{mani-10}. In
the present work, we use the same expressions and diagrams to compute
HFS constants and E1 transition amplitudes. The PNC induced electric dipole
transition amplitude, using PRCC wave function, is
\begin{equation}
{\rm E1PNC} = \langle \widetilde{\Psi}_w|\!| \mathbf{D} |\!|
\widetilde{\Psi}_v \rangle,
\end{equation}
where $\mathbf{D}$ is the dipole operator. This expression, unlike the
conventional sum-over-states approach, implicitly accounts for all possible
intermediate states. From Eq. (\ref{psi_ptrb}), for the NSD-PNC interaction,
the transition amplitude is
\begin{eqnarray}
E1_{\rm PNC}^{\rm NSD} && = \langle \Phi_w |\!| {e^{T^{(0)}}}^\dagger
\left[ 1 + \lambda \mathbf{T}^{(1)}\cdot\mathbf{I} \right]^\dagger \left[ 1
+ S^{(0)} + \lambda \mathbf{S}^{(1)}\cdot\mathbf{I} \right]^\dagger
\nonumber \\
&& \mathbf{D} e^{T^{(0)}} \left[ 1 + \lambda \mathbf{T}^{(1)}\cdot\mathbf{I} \right]
\left[ 1 + S^{(0)} + \lambda \mathbf{S}^{(1)}\cdot\mathbf{I}\right] |\!|
\Phi_v \rangle.
\end{eqnarray}
Considering terms linear in $\lambda$ and retaining only those up to second
order in the cluster amplitudes, the electronic component
$E1_{\rm elec}^{\rm NSD}$, corresponding to $H_{\rm elec}^{\rm NSD}$,
is then given as
\begin{eqnarray}
E1_{\rm elec}^{\rm NSD}& \approx & \langle\Phi_w|\!|
\mathbf{D} \mathbf{T}^{(1)} + {T^{(0)}}^\dagger \mathbf{D} \mathbf{T}^{(1)} +
{\mathbf{T}^{(1)}}^\dagger \mathbf{D} T^{(0)}
\nonumber \\
&& +{\mathbf{T}^{(1)}}^\dagger \mathbf{D} + \mathbf{D} \mathbf{T}^{(1)} S^{(0)}
+ {\mathbf{T}^{(1)}}^\dagger {S^{(0)}}^\dagger \mathbf{D}
\nonumber \\
&& +{S^{(0)}}^\dagger \mathbf{D} \mathbf{T}^{(1)}
+ {\mathbf{T}^{(1)}}^\dagger \mathbf{D} S^{(0)} + \mathbf{D} \mathbf{S}^{(1)}
+{\mathbf{S}^{(1)}}^\dagger \mathbf{D} \nonumber \\
&& + {S^{(0)}}^\dagger \mathbf{D}
\mathbf{S}^{(1)} + {\mathbf{S}^{(1)}}^\dagger \mathbf{D} S^{(0)} |\!|\Phi_v\rangle.
\label{e1pnccc1v}
\end{eqnarray}
To calculate ${\rm E1PNC}$, we use diagrammatic analysis to identify the
Goldstone diagrams from these terms. However, we exclude the structural
radiation diagrams, arising from the terms involving two-body cluster
operators, for example, $\mathbf{T}^{(1)}_2\mathbf{D}T^{(0)}_2$. The selected diagrams
from the leading order and next-to-leading order terms are shown in
Fig. \ref{e1pnc_cc_fig}.
\begin{figure}[h]
\begin{center}
\includegraphics[width = 8.0cm]{e1pnc_cc1v.pdf}
\caption{Some of the leading order PRCC diagrams which contribute to the
$E1_{\rm elec}^{\rm PNC}$ of one-valence atoms.}
\label{e1pnc_cc_fig}
\end{center}
\end{figure}
\section{Results and discussions}
\label{results}
\subsection{Single-particle basis functions}
For all the calculations we use Gaussian type orbitals (GTOs) as the
single-particle wave functions with the $V^{N-2}$ potential. As mentioned earlier, to
incorporate the relativistic effects we use the Dirac-Coulomb atomic
Hamiltonian. For the nuclear potential we consider the finite size Fermi
density distribution
\begin{equation}
\rho_{\rm nuc}(r) = \frac{\rho_0}{1 + e^{(r-c)/a} },
\end{equation}
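As a purely numerical illustration of this distribution (the values of $c$, $a$, and $\rho_0$ below are placeholders, not the parameters fitted to $^{171}$Yb):

```python
import math

def fermi_density(r, c, a, rho0=1.0):
    """Two-parameter Fermi distribution rho(r) = rho0 / (1 + exp((r - c)/a))."""
    return rho0 / (1.0 + math.exp((r - c) / a))

# At r = c the density is rho0/2, i.e. c is the half-charge radius.
c, a = 5.3, 0.52  # illustrative values in fm
print(fermi_density(c, c, a))  # 0.5
```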
In the distribution above, $a = t/(4\ln 3)$, where $t$ is the skin thickness,
and the parameter $c$ is the half-charge radius, that is,
$\rho_{\rm nuc}(c)=\rho_0/2$. The orbitals
are of the form
\begin{equation}
\psi_{n\kappa m}(\bm{r})=\frac{1}{r}
\left(\begin{array}{r}
P_{n\kappa}(r)\chi_{\kappa m}(\bm{r}/r)\\
iQ_{n\kappa}(r)\chi_{-\kappa m}(\bm{r}/r)
\end{array}\right),
\label{spin-orbital}
\end{equation}
where $P_{n\kappa}(r)$ and $Q_{n\kappa}(r)$ are the large and small component
radial wave functions, $\kappa$ is the relativistic total angular momentum
quantum number and $\chi_{\kappa m}(\bm{r}/r)$ are the spinor-spherical
harmonics. The radial components are then defined as linear combinations of
Gaussian type functions \cite{mohanty-89,chaudhuri-99}
\begin{eqnarray}
P_{n\kappa}(r) = \sum_p C^L_{\kappa p} g^L_{\kappa p}(r), \nonumber \\
Q_{n\kappa}(r) = \sum_p C^S_{\kappa p} g^S_{\kappa p}(r).
\end{eqnarray}
The index $p=1, 2, \ldots, m$, where $m$ is the number of basis functions and
$C_{\kappa p}^{\cdots}$ are the coefficients of the linear combination. For the
large component we choose
\begin{equation}
g^L_{\kappa p}(r) = C^L_{m_{\kappa i}} r^{n_\kappa} e^{-\alpha_p r^2},
\end{equation}
where $n_\kappa$ is an integer and $C^L_{m_{\kappa i}}$ is the normalization
constant. The small components are derived from the large components using the
kinetic balance condition. The exponents $\alpha_p$ follow the general relation
\begin{equation}
\alpha_p = \alpha_0 \beta^{p-1}.
\label{param_gto}
\end{equation}
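The even-tempered exponents above are straightforward to generate; the values of $\alpha_0$ and $\beta$ in this sketch are illustrative, not the optimized parameters used in this work.

```python
def even_tempered_exponents(alpha0, beta, m):
    # alpha_p = alpha0 * beta**(p-1), for p = 1, ..., m
    return [alpha0 * beta ** (p - 1) for p in range(1, m + 1)]

exps = even_tempered_exponents(0.002, 2.0, 5)
print(exps)  # [0.002, 0.004, 0.008, 0.016, 0.032]
```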
The parameters $\alpha_0$ and $\beta$ are optimized such that the single
particle energies of the core and valence orbitals are in good agreement with
the numerical results, obtained from GRASP92 \cite{parpia-96}. In
Table. \ref{grasp_e_tab}, we compare the energy of the valence orbitals
from the GTO with the GRASP92 data.
\begin{table}[h]
\begin{center}
\caption{The valence orbital and SCF energies of Gaussian type orbitals (GTO)
are compared with the GRASP92 data.}
\begin{ruledtabular}
\begin{tabular}{ccc}
Orbitals & GTO & GRASP92 \\
\hline
$6s\;^2S_{1/2}$ & $-0.413668$ & $-0.413665$ \\
$6p\;^2P_{1/2}$ & $-0.301112$ & $-0.301113$ \\
$6p\;^2P_{3/2}$ & $-0.288305$ & $-0.288307$ \\
$5d\;^2D_{3/2}$ & $-0.303070$ & $-0.303071$ \\
$5d\;^2D_{5/2}$ & $-0.300885$ & $-0.300886$ \\
$E_{\rm SCF}$ & $-14067.0622$ & $-14067.0676$
\end{tabular}
\end{ruledtabular}
\label{grasp_e_tab}
\end{center}
\end{table}
\subsection{Excitation energies, hyperfine structure constants
and E1 transition amplitudes}
The excitation energies, hyperfine structure constants and the E1 transition
amplitudes from our calculations are listed in the
Tables. \ref{ee_tab}, \ref{hfs_tab} and \ref{e1_tab}, respectively.
These results are obtained using a fairly large basis of
$177$ active GTOs, which consists of $19$, $17$, $17$, $17$, $15$ and $13$
orbitals in the $s$, $p$, $d$, $f$, $g$ and $h$ symmetries, respectively.
To arrive at this basis set we start with a moderate size of $100$
active orbitals with the combination $12s$, $10p$, $10d$, $10f$, $8g$ and
$6h$, and perform seven sets of calculations by adding one orbital to each
symmetry in every successive set. The \% change in the HFS
constants and E1 transition amplitudes with respect to number of
active orbitals are shown in Fig. \ref{hfs_fig}. As seen in the
figure, the E1 transition amplitudes converge and there is no observable
change in the amplitudes beyond $155$ orbitals. On the other hand, for the HFS
constants we observe a slower convergence pattern. It is evident from the
figure that the HFS results are close to convergence. The maximum uncertainty
is about 0.5\%, in the case of $5d\;^2D_{5/2}$; for the states
$6s\;^2S_{1/2}$, $6p\;^2P_{1/2}$, $6p\;^2P_{3/2}$ and $5d\;^2D_{3/2}$ the
uncertainties are smaller, 0.3\%, 0.3\%, 0.1\% and 0.05\%, respectively.
\begin{figure}[h]
\begin{center}
\includegraphics[width = 6.0cm, angle = -90]{orbitals_vs_hfse1.pdf}
\caption{The convergence (\% change) of the hyperfine structure constants
and the E1 transition amplitudes with respect to the number of
active orbitals.}
\label{hfs_fig}
\end{center}
\end{figure}
\begin{table}[h]
\begin{center}
\caption{Excitation energy for some of the low lying excitations in
$^{171}$Yb$^+$. The values are in cm$^{-1}$.}
\begin{ruledtabular}
\begin{tabular}{cccc}
Level & This work & Other works & Exp. \cite{nist} \\
\hline
$5d_{3/2}$ & $23983$ & $23926^{\rm a}$ & $22961$ \\
& & $21238^{\rm b}$ & \\
& & $22711^{\rm c}$ & \\
& & $22820^{\rm d}$ & \\
$5d_{5/2}$ & $25576$ & $22449^{\rm b}$ & $24333$ \\
& & $24178^{\rm c}$ & \\
& & $24261^{\rm d}$ & \\
$6p_{1/2}$ & $27985$ & $28749^{\rm a}$ & $27062$ \\
& & $28048^{\rm b}$ & \\
& & $27945^{\rm c}$ & \\
& & $27945^{\rm d}$ & \\
& & $28109(1000)^{\rm e}$ & \\
$6p_{3/2}$ & $31757$ & $32376^{\rm a}$ & $30392$ \\
& & $31411^{\rm b}$ & \\
& & $31403^{\rm c}$ & \\
& & $31481^{\rm d}$ & \\
& & $31604(800)^{\rm e}$ & \\
\end{tabular}
\end{ruledtabular}
\begin{tabbing}
$^{\rm a}$ Reference \cite{dzuba-11}. \\
$^{\rm b}$ Reference \cite{safronova-09}.\\
$^{\rm c}$ Reference \cite{porsev-12}, MBPT + corrections.\\
$^{\rm d}$ Reference \cite{porsev-12}, all-order. \\
$^{\rm e}$ Reference \cite{sahoo-11}.
\end{tabbing}
\label{ee_tab}
\end{center}
\end{table}
The excitation energies from our calculations are listed in
Table. \ref{ee_tab}. As described in Sec. \ref{method}, these are calculated
using RCC with the CCSD approximation. Except for the $6p\;^2P_{3/2}$
excitation energy, our results are better or on par with the previous
theoretical results when compared with the experimental data. The all-order
results reported in Ref. \cite{porsev-12} are closer to the experimental
data than the other theoretical results, including the present work.
For the $6p\;^2P_{3/2}$, our result is close to the RCC result of
Sahoo and collaborators \cite{sahoo-11}. Among the other three results for this
level, our result is closest to that of Ref. \cite{porsev-12}.
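For reference when comparing with Table \ref{ee_tab}, excitation energies computed in atomic units are converted to cm$^{-1}$ with the standard hartree-to-wavenumber factor; a minimal sketch (the energy value used below is illustrative, not one of our computed results):

```python
HARTREE_TO_CM = 219474.6313632  # 1 hartree in cm^-1

def hartree_to_wavenumber(e_hartree):
    # convert an energy in atomic units to cm^-1
    return e_hartree * HARTREE_TO_CM

# e.g. an excitation energy of 0.1275 hartree (illustrative value)
print(round(hartree_to_wavenumber(0.1275)))  # 27983
```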
The dominant excitations which contribute to the denominator of the
NSI-PNC matrix are $6s\;^2S_{1/2}-6p\;^2P_{1/2}$ and
$6p\;^2P_{1/2}-5d\;^2D_{3/2}$. However for the NSD-PNC, these are
$6s\;^2S_{1/2}-6p\;^2P_{1/2}$, $6p\;^2P_{1/2}-5d\;^2D_{3/2}$,
$6s\;^2S_{1/2}-6p\;^2P_{3/2}$ and $6p\;^2P_{3/2}-5d\;^2D_{3/2}$. The accuracies
achieved for these in the present work are 3.4\%, 4.5\%, 2.4\% and 4.6\%,
respectively. We have incorporated these errors in the total uncertainty
estimates for the PNC results.
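One common way to fold such channel-wise errors into a single total is addition in quadrature; the sketch below illustrates this combination rule for the four NSD channels (the quadrature rule here is our illustrative assumption, not necessarily the exact procedure followed in the uncertainty analysis).

```python
import math

def combine_in_quadrature(percent_errors):
    # total relative error assuming independent contributions
    return math.sqrt(sum(e * e for e in percent_errors))

nsd_channel_errors = [3.4, 4.5, 2.4, 4.6]  # % accuracies quoted in the text
print(round(combine_in_quadrature(nsd_channel_errors), 1))  # 7.7
```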
\begin{table}[h]
\begin{center}
\caption{Magnetic dipole hyperfine structure constants of $^{171}$Yb$^+$ in
the unit MHz.}
\begin{ruledtabular}
\begin{tabular}{ccccc}
State & This work & Other works & Exp \\
\hline
$6s_{1/2}$ & $13488.314$& $13217^{\rm a},13172^{\rm b},$
& $12645(2)^{\rm e}$ \\
& & $13091^{\rm c},13332(1000)^{\rm d},$ \\
& & $12730(2)^{\rm e}$ & \\
$6p_{1/2}$ & $2348.036$ & $2533^{\rm a},2350^{\rm b},$
& $2104.9(1.3)^{\rm e}$ \\
& & $2371^{\rm c},2516(400)^{\rm d},$ \\
& & $2317^{\rm e}$ & \\
$6p_{3/2}$ & $313.522$ & $388^{\rm a},311.5^{\rm b},330^{\rm c},$
& $877(20)^{\rm f}$ \\
& & $322(20)^{\rm d},391^{\rm e}$& \\
$5d_{3/2}$ & $421.131$ & $291^{\rm a},489^{\rm c},
447(20)^{\rm d},400.5^{\rm g},$
& $430(43)^{\rm h}$ \\
$5d_{5/2}$ & $-68.567$ & $-96^{\rm c},-48(15)^{\rm d},-12.6^{\rm g}$
& $-63.6(7)^{\rm i}$\\
\end{tabular}
\end{ruledtabular}
\begin{flushleft}
$^{\rm a}$ Reference~\cite{dzuba-11},
$^{\rm b}$ Reference~\cite{safronova-09},
$^{\rm c}$ Reference~\cite{porsev-12},\\
$^{\rm d}$ Reference~\cite{sahoo-11},
$^{\rm e}$ Reference~\cite{martensson-94},
$^{\rm f}$ Reference~\cite{berends-92},\\
$^{\rm g}$ Reference~\cite{itano-06},
$^{\rm h}$ Reference~\cite{engelke-96},
$^{\rm i}$ Reference~\cite{roberts-99}.
\end{flushleft}
\label{hfs_tab}
\end{center}
\end{table}
In Table~\ref{hfs_tab} we present the HFS constants obtained
from the present calculations and compare them with other theoretical and
experimental results. As evident from the table, our results for the
$6p\;^2P_{1/2}$, $5d\;^2D_{3/2}$ and $5d\;^2D_{5/2}$ states are in better
agreement with the experimental data than the other theoretical results.
However, for the $6s\;^2S_{1/2}$ state, like the other theoretical results,
ours is also larger than the experimental result. Among the theoretical
results, our result for this state is closest to that of Ref. \cite{sahoo-11}.
The reason is that the method employed and the type of orbitals used in the
two works are similar. For the $6p\;^2P_{3/2}$ state there is a large
discrepancy between the theoretical results and the experimental data.
However, it must be emphasized that the experimental data come from a
relatively old measurement. Our result lies between the third-order MBPT
result of Ref. \cite{safronova-09} and the RCC result of Ref. \cite{sahoo-11}.
The impact of the electron correlation effects is discernible in
Table~\ref{hfscompo_tab}, where we list the contributions from various
RCC terms. The RCC terms in the table are based on the expression in our
previous work \cite{mani-10}. The contribution listed as ``Other'' corresponds
to the terms $S_2^\dagger H_{\rm hfs} T + {\rm c.c.}$ and
$S_2^\dagger H_{\rm hfs}TS_1+{\rm c.c.}$.
As expected, the dominant contribution is from the DF term.
It contributes approximately 72\%, 66\%, 58\%, 69\% and 110\% for
the $6s\;^2S_{1/2}$, $6p\;^2P_{1/2}$, $6p\;^2P_{3/2}$,
$5d\;^2D_{3/2}$ and $5d\;^2D_{5/2}$ states, respectively.
Our DF values of $9716.7$ and $1548.2$ for the $6s\;^2S_{1/2}$ and
$6p\;^2P_{1/2}$ states, respectively, are on the higher side of the values,
$9577$ and $1542$, reported by Safronova and collaborators in their recent
work \cite{porsev-12}. On the other hand, for the $6p\;^2P_{3/2}$,
$5d\;^2D_{3/2}$ and $5d\;^2D_{5/2}$ states our results of $182.5$, $289.7$
and $110.2$ show a close match with the values $183$, $290$ and $111$ from
Ref. \cite{porsev-12}. The next two leading-order contributions are from
the terms $S_1^\dagger \tilde{H}_{\rm hfs} +{\rm c.c.}$ and
$S_2^\dagger \tilde{H}_{\rm hfs} +{\rm c.c.}$. Unlike the other states,
$5d\;^2D_{5/2}$ shows a different correlation pattern: the contribution from
$S_2^\dagger \tilde{H}_{\rm hfs} +{\rm c.c.}$ is about $-321\%$ of the total
value, whereas $S_1^\dagger \tilde{H}_{\rm hfs} +{\rm c.c.}$ contributes
44\% of the total value. Despite the large cancellations, our total
result compares well with the experiment.
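The DF percentages quoted above can be cross-checked against the totals in Table \ref{hfs_tab} and the DF entries in Table \ref{hfscompo_tab}. The sketch below (Python; all values transcribed from those tables) reproduces the first four figures; the $5d\;^2D_{5/2}$ case is omitted since its large cancellations make the quoted fraction convention-dependent.

```python
# (DF-term value, total value) of each HFS constant (this work), in MHz,
# transcribed from the tables in the text.
hfs = {
    "6s1/2": (9716.682, 13488.314),
    "6p1/2": (1548.208, 2348.036),
    "6p3/2": (182.531, 313.522),
    "5d3/2": (289.667, 421.131),
}

# Percentage contribution of the DF term relative to the total.
df_percent = {state: 100.0 * df / total for state, (df, total) in hfs.items()}

for state, pct in df_percent.items():
    print(f"{state}: DF contributes {pct:.0f}% of the total")
```

The printed fractions agree with the approximate 72\%, 66\%, 58\% and 69\% stated in the text.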
\begin{table}[h]
\begin{center}
\caption{The electric dipole transition amplitudes of $^{171}$Yb$^+$.}
\begin{ruledtabular}
\begin{tabular}{cccc}
Transition & This work & Other works & Exp. \\
\hline
$6p_{1/2}\longleftarrow 6s_{1/2}$ & $2.748$ & $2.72^{\rm a},2.73^{\rm b}$,
& $2.47(3)^{\rm f}$ \\
& & $2.75^{\rm c},2.64^{\rm d},
2.72(1)^{\rm e}$ \\
$6p_{3/2}\longleftarrow 6s_{1/2}$ & $3.901$ & $3.84^{\rm a},3.84^{\rm b}$,
& $3.36(3)^{\rm g}$ \\
& & $3.83^{\rm c},3.71^{\rm d},
3.83(1)^{\rm e}$ \\
$5d_{3/2}\longleftarrow 6p_{1/2}$ & $3.138$ & $3.09^{\rm a},3.78^{\rm b}$,
& $2.97(4)^{\rm f}$ \\
& & $3.06^{\rm c},2.98^{\rm d},
3.06(2)^{\rm e}$ \\
$5d_{3/2}\longleftarrow 6p_{3/2}$ & $1.369$ & $1.36^{\rm a},1.55^{\rm b}$
& $-$ \\
& & $1.35^{\rm c},1.32^{\rm d},
1.35(2)^{\rm e}$ \\
$5d_{5/2}\longleftarrow 6p_{3/2}$ & $4.307$ & $4.77^{\rm b},4.23^{\rm c}$
& $-$ \\
& & $4.23(3)^{\rm e}$
\end{tabular}
\end{ruledtabular}
\begin{flushleft}
$^{\rm a}$ Reference~\cite{dzuba-11}. \\
$^{\rm b}$ Reference~\cite{safronova-09}. \\
$^{\rm c}$ Reference~\cite{porsev-12}, MBPT + corrections. \\
$^{\rm d}$ Reference~\cite{porsev-12}, all-order. \\
$^{\rm e}$ Reference~\cite{sahoo-11}. \\
$^{\rm f}$ Reference~\cite{olmschenk-07,olmschenk-09}. \\
$^{\rm g}$ Reference~\cite{pininngton-97}.
\end{flushleft}
\label{e1_tab}
\end{center}
\end{table}
The E1 transition amplitudes are presented in Table~\ref{e1_tab}.
For comparison, the results from other theoretical and experimental works
are also listed. Like the HFS constants, the transition amplitudes are
calculated using the RCC wave functions. We have used similar expressions and
diagrams as for the HFS, except for one key difference: the hyperfine operator
is replaced by the dipole operator. The experimental results are available
only for the $6s\;^2S_{1/2}-6p\;^2P_{1/2}$, $6s\;^2S_{1/2}-6p\;^2P_{3/2}$ and
$6p\;^2P_{1/2}-5d\;^2D_{3/2}$ transitions. Among all the theoretical results,
the results from the recent all-order work \cite{porsev-12} are closest to the
experimental data. All other results, including ours, are on the higher side
of the experimental values. The component-wise contributions are listed in
Table~\ref{hfscompo_tab}. As in the case of the HFS constants, the DF term has
the dominant contribution. It contributes approximately 118\%, 116\%,
123\%, 124\% and 121\%, respectively, for the transitions listed in the table.
A close agreement is observed between the DF data from our calculation and
Ref. \cite{porsev-12}.
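The DF fractions for the E1 amplitudes can be checked the same way, dividing the DF entries of Table \ref{hfscompo_tab} by the totals of Table \ref{e1_tab}; this sketch (Python, values transcribed from the tables) reproduces the quoted 118\%, 116\%, 123\%, 124\% and 121\%.

```python
# (DF value, total value) of each E1 transition amplitude (this work),
# transcribed from the tables in the text.
e1 = {
    "6p1/2 <- 6s1/2": (3.242, 2.748),
    "6p3/2 <- 6s1/2": (4.543, 3.901),
    "5d3/2 <- 6p1/2": (3.861, 3.138),
    "5d3/2 <- 6p3/2": (1.697, 1.369),
    "5d5/2 <- 6p3/2": (5.200, 4.307),
}

# DF contribution as a percentage of the total amplitude.
df_pct = {t: 100.0 * df / total for t, (df, total) in e1.items()}
```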
\begin{table*}[ht]
\caption{Magnetic dipole hyperfine constants and E1 transition amplitudes:
         contributions from the different terms in RCC. The operator $O$
         here represents the hyperfine interaction Hamiltonian $H_{\rm hfs}$
         for the HFS constants and the dipole operator $D$ for the E1
         transition amplitudes.}
\label{tab-hfs-comp}
\begin{ruledtabular}
\begin{tabular}{cccccccccc}
 State/Transition &\multicolumn{9}{c}{Coupled-cluster terms} \\
\hline \\
& DF & $\tilde O$ - DF & $S^\dagger_1\tilde O$
& $S^\dagger_2\tilde O$
& $S^\dagger_2\tilde O S_1$
& $S^\dagger_1\tilde O S_1$
& $S^\dagger_2\tilde O S_2$ & Other & Norm \\
& & & $+ c.c$ & $+ c.c$& $+ c.c.$ & & & & \\
\hline \\
$6s_{1/2}$& $9716.682$ & $-427.318$ & $2756.071$ & $1145.559$ & $125.344$
& $204.287$ & $243.111$ & $-49.371$ & $-225.956$ \\
$6p_{1/2}$& $1548.208$ & $-49.951$ & $528.950$ & $245.862$ & $28.191$
& $46.824$ & $23.105$ & $22.266$ & $-45.402$ \\
$6p_{3/2}$& $182.531$ & $-5.430$ & $56.715$ & $53.520$ & $5.802$
& $4.570$ & $18.932$ & $1.897$ & $-5.012$ \\
$5d_{3/2}$& $289.667$ & $7.017$ & $82.212$ & $4.875$ & $3.863$
& $5.924$ & $30.457$ & $4.316$ & $-7.197$ \\
$5d_{5/2}$& $110.234$ & $4.187$ & $29.781$ & $-220.089$ & $-16.443$
& $2.032$ & $19.468$ & $1.186$ & $1.076$ \\
\hline \\
$6p_{1/2}\longleftarrow6s_{1/2}$& $3.242$ & $0.001$ & $-0.175$ & $-0.311$ & $-0.010$
& $0.019$ & $0.029$ & $0.006$ & $-0.052$ \\
$6p_{3/2}\longleftarrow6s_{1/2}$& $4.543$ & $0.004$ & $-0.254$ & $-0.378$ & $-0.012$
& $0.023$ & $0.037$ & $0.006$ & $-0.067$ \\
$5d_{3/2}\longleftarrow6p_{1/2}$& $3.861$ & $0.005$ & $-0.437$ & $-0.287$ & $-0.003$
& $0.032$ & $0.034$ & $-0.004$ & $-0.068$ \\
$5d_{3/2}\longleftarrow6p_{3/2}$& $1.697$ & $0.002$ & $-0.206$ & $-0.121$
& $-0.000$ & $0.012$ & $0.013$ & $-0.001$ & $-0.027$ \\
$5d_{5/2}\longleftarrow6p_{3/2}$& $5.200$ & $0.010$ & $-0.566$ & $-0.326$ & $-0.001$
& $0.034$ & $0.036$ & $0.005$ & $-0.079$
\end{tabular}
\end{ruledtabular}
\label{hfscompo_tab}
\end{table*}
\subsection{NSI-E1PNC}
For the calculation of NSI-E1PNC, we use Eq. (\ref{e1pnccc1v}),
derived for NSD-E1PNC in the electronic space. The important
difference in this case is that, as mentioned earlier, the PRCC cluster
operators are rank-zero operators. In terms of diagrams, the ones with
dominant contributions are derived from Fig. \ref{e1pnc_cc_fig} with the
NSD-perturbed operators replaced by the NSI-perturbed ones. In
Table~\ref{nsi_comp_tab}, we list the contributions from the different terms
in PRCC. Among all the terms, the largest contribution, about 117\% of the
total value, is from $DS^{(1)}_1$. The reason for this, as evident from
Table~\ref{nsi_orbital_tab}, is the large $H_{\rm PNC}$ mixing between the
$6s\;^2S_{1/2}$ and $np\;^2P_{1/2}$ orbitals.
This large contribution from $DS^{(1)}_1$ is consistent with the pattern of
correlation reported in Ref. \cite{sahoo-11}. The next leading-order
contributions are $DS^{(1)}_2 + {\rm H.c.}$ and ${T^{(1)}_1}^\dagger D$. The
former involves one core and one virtual orbital, and the latter connects
${T^{(1)}_1}^\dagger$ and $D$ through a core orbital. These contribute about
$-17\%$ and 15\%, respectively. The terms
${S^{(0)}_2}^\dagger DS^{(1)}_1 +{\rm H.c.}$ and
${S^{(0)}_1}^\dagger DS^{(1)}_1+{\rm c.c.}$
are the third and fourth leading-order terms, contributing about $-7\%$ each.
The contribution from normalization is $-2.8\%$. Small but not insignificant
contributions of 2\% and $-1.8\%$ are also observed from the terms
${T^{(0)}_2}^\dagger DT^{(1)}_1+{\rm c.c.}$ and ${S^{(1)}_1}^\dagger D$,
respectively.
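The term-wise percentages quoted above follow from dividing the entries of Table \ref{nsi_comp_tab} by the total PRCC value of $7.626$ from Table \ref{e1pnc_tab}. A minimal Python check, with values transcribed from the tables (`dag` in the keys stands for the Hermitian conjugate):

```python
# Total NSI-E1PNC (PRCC), in units of i e a0 x 10^-11 (-Q_W/N).
total = 7.626

# Leading PRCC terms from the component table.
terms = {
    "D S1":         8.950,   # D S^(1)_1
    "S1dag D":     -0.139,   # S^(1)dag_1 D
    "T1dag D":      1.179,
    "D S2 + c.c.": -1.278,
    "T2dag D T1":   0.168,
    "S1dag D S1":  -0.511,   # S^(0)dag_1 D S^(1)_1 + c.c.
    "S2dag D S1":  -0.529,
    "Norm":        -0.215,
}

pct = {name: 100.0 * value / total for name, value in terms.items()}
```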
To examine the correlation pattern more closely, we pick the leading-order
terms $DS^{(1)}_1$ and ${T^{(1)}_1}^\dagger D$, and for these we calculate
the E1PNC contributions from various intermediate $np_{1/2}$ states. The
dominant contributions are tabulated in
Table~\ref{nsi_orbital_tab}. The same analysis, but at the DF level,
is presented in Table~\ref{nsi_orbital_df}. As we see in both tables, the
dominant contribution is from the $6p\;^2P_{1/2}$ state, contributing about
117\% of the total value. The reason for this is the large $H_{\rm PNC}$-induced
mixing with the energetically closer $6s\;^2S_{1/2}$ state.
The PRCC value is about 42\% larger than the Dirac-Fock contribution. This
can be attributed to the large amplitude of $S^{(1)}_1$, and hence to
the correlation effects incorporated using PRCC. The next dominant
contribution among the core orbitals is from $5p_{1/2}$. It contributes to
${T^{(1)}_1}^\dagger D$ through the $H_{\rm PNC}$-perturbed $6s\;^2S_{1/2}$.
In this case as well, the PRCC contribution is larger than the DF one.
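The quoted 42\% enhancement is simply the relative increase of the PRCC $6p\;^2P_{1/2}$ contribution over its DF counterpart; a one-line check with the values from Tables \ref{nsi_orbital_tab} and \ref{nsi_orbital_df}:

```python
prcc = 8.918  # 6p1/2 contribution via D S^(1)_1 in PRCC
df   = 6.288  # same contribution at the Dirac-Fock level

# Percent increase of the PRCC value over the DF value.
enhancement = 100.0 * (prcc - df) / df
```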
The total NSI-E1PNC result from our calculation is presented in
Table~\ref{e1pnc_tab}, where we have also listed the DF contribution. The
other two theoretical results are based on calculations with the
correlation-potential method \cite{dzuba-11} and RCCSD(T) \cite{sahoo-11}. The
E1PNC results from these two works differ from each other substantially: the
CCSD(T) result of Ref. \cite{sahoo-11} is about 26\% larger than
that of Ref. \cite{dzuba-11}. Our DF value is marginally on the higher
side of the value reported in Ref. \cite{sahoo-11}. However, the
total result lies between those of Refs. \cite{dzuba-11} and \cite{sahoo-11},
but closer to the coupled-cluster result of Ref. \cite{sahoo-11}.
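The quoted 26\% appears to measure the gap relative to the CCSD(T) value itself; relative to the correlation-potential value the same gap is about 35\%. A quick check with the totals from Table \ref{e1pnc_tab} (which value serves as the reference point is our reading, not stated explicitly in the text):

```python
ccsdt = 8.470  # NSI-E1PNC, RCCSD(T), Ref. sahoo-11
cpm   = 6.262  # NSI-E1PNC, correlation-potential method, Ref. dzuba-11

rel_to_ccsdt = 100.0 * (ccsdt - cpm) / ccsdt  # gap relative to CCSD(T)
rel_to_cpm   = 100.0 * (ccsdt - cpm) / cpm    # gap relative to CPM
```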
\begin{table*}
\begin{center}
\caption{Component-wise contributions to the NSI and NSD E1PNC from various
         terms in the PRCC. The NSI and NSD contributions are listed in units
         of $iea_0\times10^{-11}(-Q_W/N)$ and $iea_0 \mu'_W \times10^{-12}$,
         respectively.}
\begin{ruledtabular}
\begin{tabular}{ccccccccccccccc}
\multicolumn{2}{c}{Transition}& $D S^{(1)}_1$ & ${S^{(1)}}^\dagger_1 D$ & $D T^{(1)}_1$ &
${T^{(1)}}^\dagger_1 D$ & $D S^{(1)}_2$ & ${T^{(0)}}^\dagger_1 D T^{(1)}_1$ &
${T^{(0)}}^\dagger_2 D T^{(1)}_1$ & ${S^{(0)}}^\dagger_1 D S^{(1)}_1$ &
${S^{(0)}}^\dagger_2 D S^{(1)}_1$ &
${T^{(1)}}^\dagger_1 D S^{(0)}_2$ & Other & Norm \\
& & & & & & $+$c.c.&$+$c.c.&$+$c.c.&$+$c.c.&$+$c.c.&$+$c.c.& & & \\
\hline \\
& & & & & & & NSI-PNC & & & & & & \\
\hline
& & $8.950$&$-0.139$&$-0.005$&$1.179$&$-1.278$&$-0.025$&$0.168$&$-0.511$ &
$-0.529$ & $0.028$ & $0.003$ & $-0.215$ \\
\\
$F_w$ & $F_v$ & & & & & & NSD-PNC & & & & & & & \\
\hline
1 & 0 & $6.689$&$-3.301$&$0.030$&$1.118$&$-0.683$&$0.001$&$-0.250$&$-0.239$ &
$-0.279$ & $-0.008$ &$-0.099$&$-0.083$ \\
1 & 1 & $1.431$&$-2.539$&$-0.002$&$0.221$&$-1.221$&$0.000$&$-0.061$&$0.026$ &
$0.038$ & $0.008$ &$-0.021$&$0.059$ \\
2 & 1 & $-3.590$&$0.952$&$-0.020$&$-0.608$&$-0.114$&$0.000$&$0.130$&$0.163$ &
$0.194$ & $0.009$ &$0.053$&$0.079$ \\
\end{tabular}
\end{ruledtabular}
\label{nsi_comp_tab}
\end{center}
\end{table*}
\begin{table}
\begin{center}
\caption{The dominant NSI-E1PNC contributions from the intermediate odd-parity
         states in the PRCC. The listed E1PNC values are in units
         of $iea_0\times10^{-11}(-Q_W/N)$.}
\begin{ruledtabular}
\begin{tabular}{cccccccc}
\multicolumn{4}{c}{$D S^{(1)}_1$} &\multicolumn{4}{c}{${T^{(1)}}^\dagger_1D$}\\
\hline
D & $S^{(1)}_1$ & E1PNC & state & D & $ {T^{(1)}}^\dagger_1$ & E1PNC & state \\
\hline
$-3.861$&$100.787$&$8.918$ &$6p_{1/2}$&$0.003$ &$-1.184$&$0.0$ &$2p_{1/2}$ \\
$-0.217$&$-29.596$&$-0.147$&$7p_{1/2}$&$-0.010$&$2.753$ &$-0.001$ &$3p_{1/2}$ \\
$0.047$ &$16.932$ &$-0.018$&$8p_{1/2}$&$-0.008$&$7.623$ &$-0.002$ &$4p_{1/2}$ \\
$-0.009$&$-21.986$&$-0.005$&$9p_{1/2}$&$1.290$&$39.984$ &$1.182$&$5p_{1/2}$ \\
$0.106$ &$-29.329$&$0.071$ &$10p_{1/2}$& & & \\
$-0.161$&$23.995$ &$0.088$ &$11p_{1/2}$& & & \\
$-0.096$ & $15.017$ & $0.033$& $12p_{1/2}$ & & & \\
\end{tabular}
\end{ruledtabular}
\label{nsi_orbital_tab}
\end{center}
\end{table}
\begin{table*}
\begin{center}
\caption{The dominant Dirac-Fock contributions from the intermediate
         odd-parity states.
         The listed NSI-E1PNC and NSD-E1PNC values are in units of
         $iea_0\times10^{-11}(-Q_W/N)$ and $iea_0\times10^{-12} \mu'_W$,
         respectively.}
\begin{ruledtabular}
\begin{tabular}{cccccccc}
\multicolumn{4}{c}{$D H_{\rm PNC}$} &\multicolumn{4}{c}{$H_{\rm PNC} D$}\\
\hline
D & $ H_{\rm PNC}$ & E1PNC & state & D & $ H_{\rm PNC} $ & E1PNC & state \\
\hline
\\
& & & NSI-PNC & & & \\
\hline
$-3.861$&$71.073$&$6.288$ &$6p_{1/2}$&$0.003$ &$-1.178$&$0.0$ &$2p_{1/2}$ \\
$-0.217$&$-18.657$&$-0.092$&$7p_{1/2}$&$-0.010$&$2.657$&$-0.001$ &$3p_{1/2}$\\
$0.047$ &$10.547$ &$-0.011$&$8p_{1/2}$&$-0.008$&$6.738$&$-0.001$ &$4p_{1/2}$\\
$-0.009$&$-13.620$&$-0.003$&$9p_{1/2}$&$1.290$&$23.438$&$0.693$&$5p_{1/2}$ \\
\\
& & & NSD-PNC & & & \\
\hline
$-3.861$&$-118.154$&$6.210$ &$6p_{1/2}$&$0.003$ &$1.957$&$0.0$ &$2p_{1/2}$\\
$-0.217$&$31.016$&$-0.092$&$7p_{1/2}$&$-0.010$&$-4.417$&$-0.001$ &$3p_{1/2}$\\
$0.047$&$-17.534$&$-0.011$&$8p_{1/2}$&$-0.008$&$-11.202$&$-0.001$&$4p_{1/2}$\\
$-0.009$&$22.642$&$-0.003$&$9p_{1/2}$&$1.290$&$-38.965$&$0.684$&$5p_{1/2}$ \\
\end{tabular}
\end{ruledtabular}
\label{nsi_orbital_df}
\end{center}
\end{table*}
\subsection{NSD-E1PNC}
For the NSD-PNC, the dominant contributions from the various PRCC terms in
Eq. (\ref{e1pnccc1v}) are listed in Table~\ref{nsi_comp_tab}. For the hyperfine
transitions $F_v=0\rightarrow F_w=1$ and
$F_v=1\rightarrow F_w=2$, as in the NSI-PNC case, $DS^{(1)}_1$ is the
leading-order term. It contributes about 231\% and $-130\%$, respectively.
For the transition $F_v = 1 \rightarrow F_w = 1$, however,
${S^{(1)}}^\dagger_1D$ is the dominant term, contributing about $-123\%$
of the total value. The same trend is reported in Ref. \cite{dzuba-11},
where the contributions are about 252\%, $-226\%$ and $-157\%$, respectively,
for the $F_v=0\rightarrow F_w=1$, $F_v=1\rightarrow F_w=1$ and
$F_v=1\rightarrow F_w=2$ transitions. The next leading-order term,
${S^{(1)}}^\dagger_1D$, contributes about $-114\%$ and 35\%, respectively,
to the $F_v=0\rightarrow F_w=1$ and $F_v=1\rightarrow F_w=2$ transitions.
For the $F_v=1\rightarrow F_w=1$ transition, however, the second
leading-order term is $DS^{(1)}_1$; it contributes about 69\%. The next two
leading-order terms are ${T^{(1)}}^\dagger_1D$ and $DS^{(1)}_2 + {\rm H.c.}$.
The contributions from these terms, in the sequence of transitions listed in
Table~\ref{nsi_comp_tab}, are 39\%, 11\% and $-22\%$, and $-24\%$, $-59\%$
and $-4\%$, respectively. Non-negligible contributions are also observed from
the terms ${T^{(0)}_2}^\dagger DT^{(1)}_1+{\rm c.c.}$,
${S^{(0)}_1}^\dagger DS^{(1)}_1+{\rm c.c.}$
and ${S^{(0)}_2}^\dagger DS^{(1)}_1+{\rm c.c.}$.
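The NSD percentages quoted above are reproduced by dividing each entry of Table \ref{nsi_comp_tab} by the magnitude of the corresponding PRCC total from Table \ref{e1pnc_tab}, keeping the sign of the term; this sign convention is our inference, but it matches the figures in the text. A Python sketch with transcribed values:

```python
# PRCC totals and leading PRCC terms for the three NSD hyperfine
# transitions, in units of i e a0 x 10^-12 mu'_W.
totals = {"0->1": 2.896,  "1->1": -2.061, "1->2": -2.753}
ds1    = {"0->1": 6.689,  "1->1": 1.431,  "1->2": -3.590}  # D S^(1)_1
s1d    = {"0->1": -3.301, "1->1": -2.539, "1->2": 0.952}   # S^(1)dag_1 D

def pct(value, total):
    # Contribution quoted with the sign of the term, relative to |total|
    # (assumed convention; it reproduces the percentages in the text).
    return 100.0 * value / abs(total)
```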
Unlike the NSI-PNC, in the NSD-PNC the $np_{3/2}$ states also contribute to
the E1PNC matrix element. In Table~\ref{nsd_orbital_tab}, we list the dominant
contributions from the odd-parity $np_{1/2}$ and $np_{3/2}$ states in the PRCC
calculations for the $F_v=0\rightarrow F_w=1$ transition, specifically
the contributions from $DS^{(1)}_1$, ${S^{(1)}}^\dagger_1D$ and
${T^{(1)}}^\dagger_1D$. At the DF level we present the contributions from
the $np_{1/2}$ states in Table~\ref{nsi_orbital_df}. As we see in these
tables, both at the DF and PRCC levels, the dominant contribution is from the
$6p\;^2P_{1/2}$ state. Its total contribution in the PRCC
calculations is about 150\%, which can be attributed to the 230\% and $-79\%$
contributions from $DS^{(1)}_1$ and ${S^{(1)}}^\dagger_1D$, respectively. The
large contribution through $DS^{(1)}_1$ is due
to the strong $H_{\rm PNC}$ mixing with $6s\;^2S_{1/2}$. That is, however,
not the case with $5d\;^2D_{3/2}$, which contributes through
${S^{(1)}}^\dagger_1D$. At the DF level, $6p\;^2P_{1/2}$ contributes only
through mixing with $6s\;^2S_{1/2}$, and the contribution is about 214\%.
The $6p\;^2P_{3/2}$ state is the third most dominant contributor,
contributing about $-40\%$ through the $H_{\rm PNC}$-perturbed $5d\;^2D_{3/2}$.
The other higher-energy orbitals have negligible contributions.
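The $6p\;^2P_{1/2}$ figures above can be checked from Table \ref{nsd_orbital_tab}: its $DS^{(1)}_1$ and ${S^{(1)}}^\dagger_1D$ entries, divided by the magnitude of the PRCC total for the $F_v=0\rightarrow F_w=1$ transition, give the 230\%, $-79\%$ and net 150\% quoted in the text (Python; contributions are taken relative to the magnitude of the total, the convention that reproduces the text's figures):

```python
total  = 2.896   # PRCC NSD-E1PNC, F_v = 0 -> F_w = 1
ds1_6p = 6.653   # 6p1/2 contribution via D S^(1)_1
s1d_6p = -2.298  # 6p1/2 contribution via S^(1)dag_1 D

pct_ds1   = 100.0 * ds1_6p / abs(total)
pct_s1d   = 100.0 * s1d_6p / abs(total)
pct_total = 100.0 * (ds1_6p + s1d_6p) / abs(total)
```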
The total NSD-E1PNC results are given in Table~\ref{e1pnc_tab}. For
comparison we have also listed the DF contributions. Our DF results of $6.915$,
$1.632$ and $-3.643$ for the three hyperfine transitions listed in
Table~\ref{e1pnc_tab} compare well with the results $6.90$,
$1.70$ and $-3.70$ reported in Ref. \cite{porsev-12}. Our total results,
however, are on the higher side of
the random-phase-approximation (RPA) based results from Ref. \cite{porsev-12}
for all hyperfine transitions. The other theoretical NSD-PNC data available
for comparison are from Dzuba \textit{et al.} \cite{dzuba-11}, who use the
correlation-potential method. Our results for the transitions
$F_v=0\rightarrow F_w=1$ and $F_v=1\rightarrow F_w=2$ are in good agreement
with their results. For the transition $F_v=1\rightarrow F_w=1$, however, our
result is higher than their value.
\begin{table*}
\begin{center}
\caption{The dominant NSD-E1PNC contributions from the intermediate odd-parity
         states in the PRCC. The E1PNC values are in units of
         $iea_0\times10^{-12} \mu'_W$. The contributions listed for the core
         $np_{1/2}$ and $np_{3/2}$ orbitals are from the terms
         ${T^{(1)}}^\dagger_1 D$ and $DT^{(1)}$, respectively.}
\begin{ruledtabular}
\begin{tabular}{ccccccccccc}
\multicolumn{3}{c}{$D S^{(1)}_1$}&\multicolumn{3}{c}{${S^{(1)}}^\dagger_1D$}
& Orbital & \multicolumn{3}{c}{${T^{(1)}}^\dagger_1 D$/ $DT^{(1)}$} & Orbital \\
\hline
D & $S^{(1)}_1$ & E1PNC & D & $ {S^{(1)}}^\dagger_1$ & E1PNC & &
D & ${T^{(1)}}^\dagger_1$ & E1PNC & \\
\hline
$-3.861$ & $-126.570$ & $6.653$ & $3.242$ & $78.106$ & $-2.298$ & $6p_{1/2}$ &
$0.003$ & $1.965$ & $0.000$ & $2p_{1/2}$ \\
$-0.217$ & $37.839$ & $-0.112$ & $-0.093$ & $-9.111$ & $-0.008$ & $7p_{1/2}$ &
$-0.010$ & $-4.550$ & $-0.001$ & $3p_{1/2}$ \\
$0.047$ & $-21.776$ & $-0.013$ & $0.011$ & $4.704$ & $-0.001$ & $8p_{1/2}$ &
$-0.008$ & $-12.416$ & $-0.001$ & $4p_{1/2}$ \\
$-0.009$ & $28.408$ & $-0.004$ & $0.013$ & $-5.743$ & $0.001$ & $9p_{1/2}$ &
$1.290$ & $-63.798$ & $1.120$ & $5p_{1/2}$ \\
$0.106$ & $38.534$ & $0.056$ & $0.075$ & $-6.647$ & $0.005$ & $10p_{1/2}$ &
& & & \\
$-0.161$ & $-32.883$ & $0.072$ & $-0.081$ & $4.431$ & $0.003$ & $11p_{1/2}$ &
& & & \\
$-0.096$ & $-22.179$ & $0.029$ & $-0.053$ & $1.876$ & $0.001$ & $12p_{1/2}$ &
& & & \\
\\
\hline
$1.697$ & $-8.244$ & $0.000$ & $-4.543$ & $30.184$ & $-0.984$ & $6p_{3/2}$ &
$-0.001$ & $-0.001$ & $0.000$ & $2p_{3/2}$ \\
$0.024$ & $2.418$ & $0.000$ & $0.358$ & $-5.101$ & $-0.013$ & $7p_{3/2}$ &
$0.006$ & $0.002$ & $0.000$ & $3p_{3/2}$ \\
$0.008$ & $-1.376$ & $0.000$ & $-0.140$ & $2.786$ & $-0.003$ & $8p_{3/2}$ &
$0.046$ & $0.109$ & $0.000$ & $4p_{3/2}$ \\
$-0.028$ & $1.845$ & $0.000$ & $0.136$ & $-3.670$ & $-0.004$ & $9p_{3/2}$ &
$0.749$ & $-3.964$ & $0.021$ & $5p_{3/2}$ \\
$0.075$ & $-2.226$ & $0.000$ & $-0.050$ & $4.445$ & $-0.002$ & $10p_{3/2}$ &
& & & \\
$0.080$ & $-1.527$ & $0.000$ & $0.022$ & $3.370$ & $0.001$ & $11p_{3/2}$ &
& & & \\
$0.043$ & $-0.791$ & $0.000$ & $0.035$ & $1.675$ & $0.000$ & $12p_{3/2}$ &
& & & \\
\end{tabular}
\end{ruledtabular}
\label{nsd_orbital_tab}
\end{center}
\end{table*}
\begin{table*}
\begin{center}
\caption{The total NSI (in units of $iea_0\times10^{-11}(-Q_W/N)$) and
         NSD (in units of $iea_0 \mu'_W \times10^{-12}$) E1PNC results
         compared with previous theoretical results.}
\begin{ruledtabular}
\begin{tabular}{cccc}
Transition & \multicolumn{2}{c}{This work} & Other works \\
\hline
& DF & PRCC & \\
\hline
NSI-PNC & & & \\
\hline
$\langle 5d_{3/2}|\leftarrow \langle 6s_{1/2}|$
& $7.002$ & $7.626$ &
$6.262(20)^{\rm a},8.470^{\rm b}$ \\
\\
NSD-PNC & & & \\
\hline
$\langle 5d_{3/2},F_w=1|\leftarrow \langle 6s_{1/2},F_v=0|$
& $6.915$ &$2.896$ &
$3.1(1.9)^{\rm a},2.6^{\rm c}$ \\
$\langle 5d_{3/2},F_w=1|\leftarrow \langle 6s_{1/2},F_v=1|$
& $1.632$ &$-2.061$&
$-1.3(4)^{\rm a},-1.5^{\rm c}$ \\
$\langle 5d_{3/2},F_w=2|\leftarrow \langle 6s_{1/2},F_v=1|$
& $-3.643$ &$-2.753$&
$-2.6(1.3)^{\rm a},-2.2^{\rm c}$
\end{tabular}
\end{ruledtabular}
\begin{tabbing}
$^{\rm a}$ Reference~\cite{dzuba-11}.
$^{\rm b}$ Reference~\cite{sahoo-11}.
$^{\rm c}$ Reference~\cite{porsev-12}.
\end{tabbing}
\label{e1pnc_tab}
\end{center}
\end{table*}
\subsection{Uncertainty estimates}
To estimate the uncertainty of our E1PNC results, we resort to an analysis
based on the sum-over-states approach. In this method, the net uncertainty
associated with an intermediate state is
\begin{equation}
  \Delta = \delta E^{\rm exci} + \delta E1 + \delta H_{\rm PNC},
\end{equation}
where $\delta E^{\rm exci}$ and $\delta E1$ are the deviations of the
excitation energy and the E1 matrix element from the experimental data. These
are calculated based on the results presented in Tables~\ref{ee_tab} and
\ref{e1_tab}, respectively. For $\delta H_{\rm PNC}$, the uncertainty
associated with the $H_{\rm PNC}$ matrix element, we resort to the deviation of
$\sqrt{A_iA_f}$ from the experimental data, where $A_i$ and $A_f$ represent
the magnetic dipole hyperfine constants of the initial and final states of the
E1PNC transition.
As discussed earlier, the dominant contribution to the NSI-PNC is from the
$6p\;^2P_{1/2}$ state, which contributes through $DS^{(1)}_1$, i.e. through
the $H_{\rm PNC}$ matrix element
$\langle 6p\;^2P_{1/2}|H_{\rm PNC}|6s\;^2S_{1/2}\rangle$,
the $E1$ matrix element $\langle 5d\;^2D_{3/2}|D|6p\;^2P_{1/2}\rangle$ and the
energy denominator $E_{6s_{1/2}}-E_{6p_{1/2}}$.
The uncertainty associated with the $H_{\rm PNC}$ matrix element, obtained
using our RCC results for the hyperfine constants, is 9.1\%.
The relative uncertainties of the $E1$ matrix element and the energy
denominator are calculated as 5.7\% and 3.4\%, respectively.
Combining these, the net uncertainty in the NSI-PNC result is 18.2\%.
For the NSD-PNC as well, $6p\;^2P_{1/2}$ is the dominant contributing
state. In this case, however, unlike the NSI-PNC, apart from $DS^{(1)}_1$ the
contribution through ${S^{(1)}}^\dagger_1D$ is not negligible. The matrix
elements involved in this case are
$\langle 5d\;^2D_{3/2}|H_{\rm PNC}|6p\;^2P_{1/2}\rangle$
and $\langle 6p\;^2P_{1/2}|D|6s\;^2S_{1/2}\rangle$, and the energy denominator
is $E_{6p_{1/2}}-E_{5d_{3/2}}$. Using a similar analysis, we get 4.52\%,
11.26\% and 2.41\%, respectively, for $\delta H_{\rm PNC}$, $\delta E1$
and $\delta E$. Combining these, we get 18.19\% as the net uncertainty
associated with the term ${S^{(1)}}^\dagger_1D$. The rms relative uncertainty
in the NSD-PNC results is then 18.17\%.
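Both net uncertainties quoted above follow from adding the three relative uncertainties linearly, consistent with the expression for $\Delta$; a minimal Python check with the numbers from the text:

```python
# Linear (worst-case) combination of relative uncertainties, in percent,
# following Delta = dE_exci + dE1 + dH_PNC from the text.
nsi_unc = 9.1 + 5.7 + 3.4      # NSI-PNC: H_PNC, E1, energy denominator
nsd_unc = 4.52 + 11.26 + 2.41  # NSD-PNC, term S^(1)dag_1 D
```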
\section{Conclusions}
In this work, we present the NSI and NSD-PNC transition amplitudes for the
$[4f^{14}]6s\;^2S_{1/2} - [4f^{14}]5d\;^2D_{3/2}$ transition in the
$^{171}$Yb$^+$ ion. To estimate the uncertainty of the PNC results,
we also calculate excitation energies, hyperfine structure constants and
E1 transition amplitudes for some of the important low-lying states using
RCC theory. The E1PNC results are computed using PRCC theory, which is
formulated based on RCC theory and incorporates the electron
correlation effects arising from a class of diagrams to all orders in the
presence of the PNC interaction as a perturbation.
Our results for the excitation energies, hyperfine structure constants and E1
transition amplitudes are in good agreement with the previous experimental
data, and in some cases better than the other theoretical results. Our
NSI-PNC result lies between the results of the two previous studies reported
in Refs. \cite{dzuba-11} and \cite{sahoo-11}. The NSD-PNC DF results from our
work are in excellent agreement with the results reported in
Ref. \cite{porsev-12} for all hyperfine transitions. The total NSD-PNC result
for the $F_v=0\rightarrow F_w=1$ hyperfine transition lies between the results
of Refs. \cite{dzuba-11} and \cite{porsev-12}. For the remaining
two, our results are slightly on the higher side. The upper
bound to the theoretical uncertainty associated with the E1PNC results
is about 20\%.
\begin{acknowledgments}
The author wishes to thank S. Chattopadhyay for useful discussions. The
results presented in this paper are based on computations using the HPC
cluster at the Physical Research Laboratory, Ahmedabad.
\end{acknowledgments}
The Clough Pond Association welcomes any interested individual or family to join by becoming a member. Organizations and businesses can help the Association by becoming a Sponsor. Your support strengthens our efforts to maintain and improve the quality of Clough Pond.
We have two different membership plans and three different sponsorship plans: Single Membership, Family Membership, Standard Sponsor, Premium Sponsor, and Partnership.
Be eligible to serve on the Board of Directors or as an Officer of the Association.
Traditionally we have two (2) meetings each summer, one in June and one in August, but other meetings are possible. Those two (2) meetings take the form of potluck suppers followed by a business meeting. Cost is $10 per year, and all funds received from this option go into the general operating budget of the Clough Pond Association.
Meetings are as described above. Cost is $20 per year and all funds received from this option go into the general operating budget of the Clough Pond Association.
Cost is $100 per year and all funds received from this option go into the general operating budget of the Clough Pond Association.
Cost is $250 per year and all funds received from this option go into the general operating budget of the Clough Pond Association.
Partnership: At this time the Partnership level Sponsor is reserved for our two major contributors; the Town of Loudon by way of the Loudon Conservation Commission and the New Hampshire Lakes Association. Those two organizations are major supporters of the Clough Pond Association and our arrangement with each of those organizations is unique. We will entertain other candidates for Partnerships upon request.
To become a Sponsor or to discuss any of these options further contact Tom Edwards, Secretary, Clough Pond Association.
To become a member fill out the Sponsorship Agreement and send it along with your dues to Jean Cote, 131 Clough Pond Road, Canterbury, NH 03224.
http://www.research-projects.uzh.ch/p16258.htm

# Cattaneo

### Current research project

Title: Homotopy quantum symmetries, monoidal categories and formality

Summary: The main objectives are:
(I) to develop a theory of homotopy quantum groups. This can be understood as the natural theory that should sit at the intersection of four important disciplines of mathematics and physics: monoidal categories, homotopy theory, quantum groups and higher categories. This fact gives a clear multidisciplinary aspect to the project.
(II) to prove the formality of $\mathcal{L}_\infty^{cois}$, the $\mathcal{L}_\infty$ algebra governing simultaneous deformations of a Poisson manifold and its coisotropic submanifolds. This is the key step in solving the problem of quantization of symmetries from the point of view of deformation quantization, giving interdisciplinary applications.

This project will be hosted at the MIT (outgoing host) and at the University of Zurich (return host).

Project leadership and contacts: Prof. Alberto Cattaneo (Project Leader), alberto.cattaneo@math.uzh.ch
Funding source(s): EU
In collaboration with: Yael Fregier, Massachusetts Institute of Technology, Department of Mathematics, Building 2, 77 Massachusetts Avenue, Cambridge, MA 02139-4307, USA
Duration of Project: Jan 2012 to Dec 2014
Video: Super Monkey Ball Banana Mania "SEGA Legends" trailer
To say that there have been plenty of announcements for Super Monkey Ball Banana Mania would be an understatement. Every day, there has been some kind of announcement about content that you can experience in the game. And today, it was confirmed that you can play as the SEGA Game Gear, Saturn, and Dreamcast.
A trailer was uploaded to show gameplay of them. In the video description, it is stated that "these three console legends are included as part of the Digital Deluxe Edition or available together as the SEGA Legends Pack for $4.99 USD (or equivalent) for Nintendo Switch, PlayStation 4, PlayStation 5, Steam, Xbox One, and Xbox Series S|X when Super Monkey Ball Banana Mania launches on October 5th". The trailer can be seen down below.
# Math Help - True/False Probability Problems

1. ## True/False Probability Problems

Can you confirm that these answers are correct? Thank you.
6. True
7. False
8. False

2. = 1
2/3 + 2/3 - 4/9 = 4/3 - 4/9 = 12/9 - 4/9 = 9/9 = 1

3. Hello, krzyrice!

Here are the first two . . . I'm working on the third one.

Let $A$ and $B$ be two events in a finite sample space. True or false?

6. If $P(A) + P(B) = 1$, then $A$ and $B$ must be complementary. . . True

We have: $P(B) = 1 - P(A)$

Hence: $B = A'$ . . . They are complementary.

7. For all $A$ and $B$, $P(A \cap B) + P(A \cup B) = P(A) + P(B)$ . . True

This is just a rearrangement of the well-known formula:

$P(A \cup B) = P(A) + P(B) - P(A \cap B)$

4. Sorry, Plato, I must disagree . . .

The problem gave: $P(A) + P(B) = 1$

There was no mention of an "or", as in $A \cup B$.

You have: $P(A) = P(1,2,3,4) = \frac{2}{3}$ and $P(B) = P(3,4,5,6) = \frac{2}{3}$,

and so: $P(A) + P(B) \neq 1$

5. Okay, Plato, you win!

I had to come up with a truly embarrassing example to convince me.

Event A: draw a card from a deck and get a Heart. $P(A) = \frac{13}{52} = \frac{1}{4}$

Event B: flip two coins and get at least one Head. $P(B) = \frac{3}{4}$

Hence, $P(A) + P(B) = 1$ . . . but $A$ and $B$ are not complementary.

Well, duh!

6. Thanks for the help guys.
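The identity in post #7, and the counterexample debated in posts #4 and #5, can both be checked by brute-force enumeration over a small uniform sample space. A quick sketch (helper name is mine), using the single-die events from post #4:

```python
from fractions import Fraction

def prob(event, space):
    """Probability of an event (a set of outcomes) under a uniform finite sample space."""
    return Fraction(len(event & space), len(space))

space = {1, 2, 3, 4, 5, 6}   # one fair die
A = {1, 2, 3, 4}             # P(A) = 2/3
B = {3, 4, 5, 6}             # P(B) = 2/3

# Post #7: P(A ∩ B) + P(A ∪ B) = P(A) + P(B) holds for all events A, B.
lhs = prob(A & B, space) + prob(A | B, space)
rhs = prob(A, space) + prob(B, space)
print(lhs == rhs)  # True

# Post #4's point: here P(A) + P(B) = 4/3 ≠ 1, so these events never enter claim #6 at all.
print(prob(A, space) + prob(B, space))  # 4/3
```

Note that the Heart/coin counterexample in post #5 lives on two different sample spaces, which is exactly why $P(A) + P(B) = 1$ alone cannot force complementarity.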
# What is the hypothesis space used by this AND gate Perceptron?

Per this post:

The hypothesis space used by a machine learning system is the set of all hypotheses that might possibly be returned by it.

Per this post, the Perceptron algorithm makes the prediction

$$\hat y = \begin{cases} 1 & wx+b \ge 0\\ 0 & wx+b < 0 \end{cases}$$

and we can conclude that the model achieving an AND gate, using the Perceptron algorithm, is

$$x_1 + x_2 - 1.5$$

In this case, what is the hypothesis space used by this AND gate Perceptron?

• As far as I understand, the expression $x_1 + x_2 - 1.5$, which is a model capable of mapping inputs to outputs, is a hypothesis. There might be more models that can perform this action; all such models together are called the hypothesis space. – Fatemeh Asgarinejad Jul 11 at 1:47

As far as I understand:

A hypothesis is a model which is capable of predicting outputs from inputs; hence $x_1 + x_2 - 1.5$ is a hypothesis, but not the only one. All models with the same capability are regarded as the hypothesis space.

We know that in an AND gate:

| x1 | x2 | output |
|----|----|--------|
| 0  | 0  | 0      |
| 0  | 1  | 0      |
| 1  | 0  | 0      |
| 1  | 1  | 1      |

and we have $w \cdot x + b = w_1 \cdot x_1 + w_2 \cdot x_2 + b$, based on which either $0$ or $1$ turns out as the output.

Trying all the inputs in this expression:

$w_1 \cdot 0 + w_2 \cdot 0 + b \le 0$ (because the output should be 0), so $b < 0$

$w_1 \cdot 0 + w_2 \cdot 1 + b \le 0$ ---> $w_2 + b \le 0$, so $w_2 \le |b|$

$w_1 \cdot 1 + w_2 \cdot 0 + b \le 0$ ---> $w_1 + b \le 0$, so $w_1 \le |b|$

$w_1 \cdot 1 + w_2 \cdot 1 + b > 0$ ---> $w_1 + w_2 + b > 0$

First, we initialize the weights and bias parameters and then, if needed, change them.

Here, since $b < 0$, we set it to $-1$. Since $w_1 \le |b|$, $w_2 \le |b|$ and the weights are not negative, we set them to 1. So we would have:

$w_1 \cdot 0 + w_2 \cdot 0 + b = -1 < 0$: right, it returns 0 because the value is negative.

$w_1 \cdot 0 + w_2 \cdot 1 + b = w_2 + b = 1 - 1 = 0$: wrong, it returns 1 while it should return 0.

$w_1 \cdot 1 + w_2 \cdot 0 + b = w_1 + b = 1 - 1 = 0$: wrong, it returns 1 while it should return 0.

$w_1 \cdot 1 + w_2 \cdot 1 + b = 1 + 1 - 1 = 1 > 0$: right, it returns 1.

So we set $b$ to a smaller value like $-1.5$ (Note); then all the expressions work appropriately. Hence $x_1 + x_2 - 1.5$ is a hypothesis for this problem.

Note: when using the perceptron algorithm, on reaching any point that does not follow the current model, weights and bias are updated as follows:

w = w + yx
b = b + y

Here, in the source that you referred to, maybe for simplicity they haven't done so and have just given a sample of a plausible model (hypothesis).

Other hypotheses should follow the previously mentioned rules; thereby, $x_1 + x_2 - 2$ can also be another hypothesis for this problem, etc.

• Thanks for your answer. In this specific case, what exactly is the hypothesis space? Is it something like $b_{min} < b < b_{max}$ and $w_{min} < w < w_{max}$? Then what are the values of $b_{min}, b_{max}, w_{min}, w_{max}$? – czlsws Jul 11 at 3:12
• Here the hypothesis space is all forms of $w \cdot x + b$ which are capable of conveying the inputs to the outputs correctly, and based on expressions like $b < 0$ and $w_1 \le |b|$ we see that the $b$s and $w$s are correlated. If we follow the perceptron algorithm, we would see that $b$ is increased by $y$ whenever the update condition holds; $y$ here is 0 or 1, hence $b$ becomes $b + n \cdot 1$ ($n$ is the number of required updates). For $w$ also, regarding $w = w + yx$, it can seemingly increase by at most 1, because $yx$ is always 0 except in the case $(1,1)$. – Fatemeh Asgarinejad Jul 11 at 3:19
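The hand-derivation above can also be checked mechanically. A minimal sketch (the weight and bias values follow the answer; the helper name is mine):

```python
def perceptron(x1, x2, w1=1.0, w2=1.0, b=-1.5):
    """Predict 1 if w·x + b >= 0, else 0 (the prediction rule from the question)."""
    return 1 if w1 * x1 + w2 * x2 + b >= 0 else 0

truth_table = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

# x1 + x2 - 1.5 reproduces the AND truth table ...
print(all(perceptron(*x) == y for x, y in truth_table.items()))  # True

# ... while b = -1 fails on (0, 1), exactly as worked out in the answer.
print(perceptron(0, 1, b=-1.0))  # 1, but AND(0, 1) should be 0

# x1 + x2 - 2 is another hypothesis satisfying the same constraints.
print(all(perceptron(*x, b=-2.0) == y for x, y in truth_table.items()))  # True
```

Any $(w_1, w_2, b)$ passing this check is a member of the hypothesis space restricted to correct AND classifiers.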
Zbigniew Kaczmarek (born 1 June 1962 in Lębork) is a former Polish football player. He played 30 times for Poland.
1962 births
Living people
People from Lębork
Polish footballers
Poland youth international footballers
Poland international footballers
Legia Warsaw players
AJ Auxerre players
En Avant Guingamp players
AC Ajaccio players
Polonia Gdańsk players
Lechia Gdańsk players
Ekstraklasa players
Ligue 1 players
Ligue 2 players
Polish expatriate footballers
Expatriate footballers in France
Sportspeople from Pomeranian Voivodeship
Association football midfielders
Wigry Suwałki managers
Olimpia Zambrów managers
Polish football managers
Canoe Slalom and Wildwater Canoeing, what is it? 2017 ICF CANOE SLALOM WORLD CHAMPIONSHIPS
What is Canoe Slalom?
Denis GARGAUD-CHANUT © FFCK / KMSP - J.CROSNIER
Canoe Slalom is an Olympic discipline which consists of completing a whitewater course of almost 400 metres in length as quickly as possible while respecting mandatory passages, marked by gates (18 to 25 at most). There are two types of gates:
Green gates: to be negotiated downstream
Red gates: to be negotiated upstream
Touched gates, and gates that are not negotiated, incur penalties that are added to the finishing time of the run (2 seconds for a touch, 50 seconds for a missed gate).
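The penalty arithmetic above can be sketched as a small helper (a hypothetical function; times in seconds):

```python
def slalom_score(run_time, touched_gates=0, missed_gates=0):
    """Final slalom result: raw run time plus 2 s per touched gate and 50 s per missed gate."""
    return run_time + 2 * touched_gates + 50 * missed_gates

# A 95-second run with one touched gate and one missed gate:
print(slalom_score(95, touched_gates=1, missed_gates=1))  # 147
```

A single missed gate is thus usually race-ending at elite level, where runs are decided by fractions of a second.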
International categories
Kayak single-seat men (K1M) and women (K1W)
Canoe single-seat men (C1M) and women (C1W)
Double-seat canoe men (C2M) and mixed
Team events with three boats also exist. Slalom is practised on more or less difficult whitewater courses depending on the level of the competition, with a qualifying phase of two runs followed by a semi-final and a final.
What is Extreme Canoe Slalom?
© Eric Traversié
This is a new event, part of the Canoe Slalom discipline, currently emerging at the international level.
It consists of a knockout tournament between four boats per round on a course that lasts between 45 and 60 seconds.
After a spectacular start, in which the athletes are positioned on a 3-to-5-metre ramp above the water, they cover a stretch consisting of 5 to 7 obstacles as quickly as possible. Following several qualification rounds, the winners of each round advance to the final, after which the fastest of these takes the title of world champion. Boats used for such events are a hybrid of those used in slalom and intensive courses.
What is Wildwater Canoeing?
Tony DEBRAY - Louis LAPOINTE © FFCK / KMSP - J.CROSNIER
The rule is simple: go as quickly as possible from an upstream point to another point downstream on the river. It is a race against the clock, practised on whitewater, where it is essential to choose one's line well according to the currents and the natural obstacles formed by rocks. The races mainly take place on natural rivers; however, with the emergence of sprint events, competitions are now also held at the same venues as canoe slalom. There are two race formats: the classic race, lasting 12 to 25 minutes, and the sprint race, lasting 30 seconds to 2 minutes 30.
Long-Term Study Shows No Major Adjacent Degeneration in Most Patients After Fusion for Scoliosis
Young patients who undergo spinal fusion for scoliosis are likely to be doing well 10 years after surgery, according to study results published online in Spine.
The findings contrast beliefs that the surgery would cause damage to the spine just below the fused discs due to increased stress at uninstrumented caudal intervertebral discs and accelerated degeneration, according to a press release from Hospital for Special Surgery (HSS).
"Fusion for adolescent idiopathic scoliosis using the newer generation spine implants appears to spare junctional disc degeneration and allows patients 10 years out to have a relatively normal, pain-free lifestyle," Daniel Green, MD, lead author and pediatric orthopedic surgeon at HSS, stated in the release.
The investigators performed a retrospective chart and imaging review of patients aged 21 years and younger with idiopathic scoliosis who underwent posterior fusion and segmental instrumentation between 1991 and 1997. Surgery had to approach the spine from the back rather than the front or side, and patients had to have fusion of the spine in the lower back, between vertebrae T12 and L3.
The investigators studied 90 discs below the fused level in 20 patients with an average follow-up of 11.8 years. The study noted a distal level of fixation at L1 on average, with the major curve averaging 55°±11° preoperatively and 25°±10° at follow-up.
According to the study findings, follow-up MRI revealed new disc pathology in 85% of patients. One patient displayed significant degenerative disc disease at the junctional level; most pathology was seen at the L5-S1 disc. Average Pfirrmann grade at uninstrumented levels deteriorated from 1.1 preoperatively to 1.8 at follow-up, with average degenerative scores increasing in the L5-S1 disc space from 1.2 preoperatively to 2.3 postoperatively.
Three of the patients with severe disc disease were taking NSAIDs for pain, but the study pointed out that none were taking narcotics.
"We wanted to see how the patients were doing 10 years down the road, specifically focusing on the part of the spine that did not have surgery," Green stated in the release. "The standard belief was that the area of the spine just below the surgery would wear out, because of the increased stress that the surgery or the fusion would put on that part of the spine."
"That is not what we found," he added. "We found that the area of the spine adjacent to the fusion was pretty healthy and did not show any major degeneration 10 years later. While mild degenerative changes were noted in almost every patient, the severe changes that we were concerned we might find were not there at all."
Though the findings pointed to an accelerated rate of L5-S1 disc degeneration, the authors found good functional scores and maintenance of correction during the 10-year follow-up.
"There is a lot of research and investment being done looking for new technologies that do not use fusion," Green stated. "This study would suggest that there is a challenge for those trying to do that because the patients doing fusion are doing well."
Q: I verify that autocomplete works well but no results appear I am verifying whether autocomplete works well or not. I send the keys, but it does not select the required element. Finally, I want to print the URL of the page that appears after finding the required element and clicking on it. I receive only this result:
Ran 1 test in 33.110s
OK
Process finished with exit code 0
Message:
def test_autocomplet(self):
    try:
        driver = webdriver.Chrome()
        self.driver = webdriver.Chrome()
        url = self.driver.get("http://automationpractice.com/index.php")
        self.driver.maximize_window()
        Serach_text_box = self.driver.find_element_by_id("search_query_top")
        Serach_text_box.send_keys("Printed")
        Serach_text_box.send_keys(Keys.ARROW_DOWN)
        five_option = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.XPATH, "//*[contains(text(),'Dress')]")))
        five_option.send_keys(Keys.ENTER)
        print self.driver.current_url
        self.assertEqual("http://automationpractice.com/index.php?id_product=3&controller=product", self.driver.current_url, "This Test case is fallied")
    except NoSuchElementException as e:
        print (e)
    except AssertionError as e:
        print (e)
    except TimeoutException as e:
        print (e)
I want to know if any thing in the code is wrong and why he does not select and click on the required element and print the URL of the next page that appear after click on the required element.
I would be thanksfull for any help.
A: Here is the code I used to test this page.
One thing to note first: your test creates two drivers (driver = webdriver.Chrome() and self.driver = webdriver.Chrome()) and then mixes them. The elements are found with self.driver, while WebDriverWait polls driver, so it waits on a browser window that never loaded the page; use a single driver instance.
To select an item in the menu I can use ARROW_DOWN, but it doesn't give information about the selected item.
The second method is to search for
//div[@class='ac_results']//li[contains(text(),'Dress')]
or at least
//li[contains(text(),'Dress')]
or alternatively
//div[@class='ac_results']//li
to access an item in the menu. Then I can get its full text (.text) or only the highlighted part (.//strong)
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import NoSuchElementException, TimeoutException
import time
try:
    #driver = webdriver.Chrome()
    driver = webdriver.Firefox()
    url = driver.get("http://automationpractice.com/index.php")
    #driver.maximize_window()
    search_text_box = driver.find_element_by_id("search_query_top")
    search_text_box.send_keys("Printed")
    time.sleep(1)  # page display (and update) autocompletion when you make little longer delay

    # --- select using arrow key ---
    # move selection on list and accept it
    #search_text_box.send_keys(Keys.ARROW_DOWN)
    #search_text_box.send_keys(Keys.ARROW_DOWN)
    #search_text_box.send_keys(Keys.ARROW_DOWN)
    #search_text_box.send_keys(Keys.ENTER)

    # OR

    # --- select using tag `<li>` and `text()` in autocompletion ---
    # click on first matching item on list
    #one_option = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.XPATH, "//li[contains(text(),'Dress')]")))
    one_option = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.XPATH, "//div[@class='ac_results']//li[contains(text(),'Dress')]")))
    print(' tag:', one_option.tag_name)
    print('text:', one_option.text)
    print('bold:', one_option.find_element_by_xpath('.//strong').text)
    one_option.click()

    # OR

    # --- get all elements in autocompletion using `<li>` tag ---
    # get many matching items and use [index] to click on some item on list
    #all_options = driver.find_elements_by_xpath("//li[contains(text(),'Dress')]")
    #for option in all_options:
    #    print(option.tag_name, ':', option.text)
    #all_options[1].click()

    print(' current:', driver.current_url)
    print('expected:', "http://automationpractice.com/index.php?id_product=3&controller=product")
    print('the same:', driver.current_url == "http://automationpractice.com/index.php?id_product=3&controller=product")
    assert "http://automationpractice.com/index.php?id_product=3&controller=product" == driver.current_url, "This Test case is fallied"
    #assertEqual("http://automationpractice.com/index.php?id_product=3&controller=product", self.driver.current_url, "This Test case is fallied")

except NoSuchElementException as e:
    print('NoSuchElementException:', e)
except TimeoutException as e:
    print('TimeoutException:', e)
except AssertionError as e:
    print('AssertionError:', e)
\section{Introduction}
The development of a new drug is a long-term and costly process. It entails the identification of bioactive compounds against predefined biomolecular targets as one of its initial and most important steps. With the advancements in high-throughput screening technology, simultaneous screening of tens of thousands of compounds is quite achievable. However, it is still not possible to analyze the entirety of the chemical and biomolecular spaces due to their huge sizes ~\cite{rifaioglu2019recent}, which usually prevents the discovery of the best candidate molecules. The majority of the identified "non-ideal" drug candidates fail in later stages of the development process, such as clinical trials, due to high toxicity or low efficacy, which is the primary reason for the low success rates lately observed in drug development ~\cite{paul2010improve}. \\
The structural diversity of small-molecule drugs discovered so far is relatively low. Consequently, they can only target biomolecules within a limited structural framework \cite{bhisetti2022artificial}. This is also partly valid for bioactive molecule datasets presented in open-access repositories such as ChEMBL \cite{mendez2019chembl} and PubChem \cite{kim2023pubchem}. Thus, there is a need for truly novel, i.e. structurally diverse, small molecule drug candidates to target understudied proteins in the human proteome, including their clinically significant variants \cite{elton2019deep}. Within the enormous theoretical space of possible small molecules, the size of which is estimated to be around $10^{60}$, molecules that can effectively and specifically target each druggable biomolecule may exist \cite{grant2021novo}. The main challenge here is identifying the correct molecular structures within this unexplored space. For this, an approach called "de novo drug design" is used, the purpose of which is to design new candidate molecules without using a starting structural template, especially to target biomolecules that could not be effectively targeted by the currently available structures \cite{mouchlis2021advances}. \\
To address problems associated with conventional drug design, such as long development durations, high costs, and a high number of unknown variables regarding the efficacy and safety of the designed compounds, AI-driven methods, e.g., deep generative modeling, are starting to penetrate the field of drug design. One of the first generative modeling architectures to be used in de novo molecule design was variational autoencoders (VAE) ~\cite{kingma2013auto}. In a VAE-based molecule generation method developed by Gomez-Bombarelli et al., the encoding network transforms the discrete SMILES expressions of molecules into real-valued continuous vectors, while the decoder reconstructs SMILES from this continuous space. The predictive network added to the system guides the decoder by predicting properties such as drug-likeness and synthetic accessibility of the representations in the latent space \cite{gomez2018automatic}. Another generative modeling architecture called Generative Adversarial Networks - GAN \cite{goodfellow2020generative}, which was originally developed for image analysis, has been employed to design de novo molecules. GANs are trained via a battle between generator and discriminator networks in a zero-sum-game, where each agent tries to beat the other one by performing better at each move. The model called MolGAN uses a multilayer perceptron-based generator and graph convolutional discriminator to handle the molecule generation process \cite{de2018molgan}. This method was one of the first studies to implement GANs for de novo drug design. With the aim of rendering the generation process more efficient, a following study set the training objective as predicting the masked node and edge labels on molecular graphs, which enhanced the generation of novel molecules \cite{mahmood2021masked}.\\
Deep generative models have also been used to design molecules with desired properties. This has mostly been achieved by conditioning the model training and/or the prediction procedure(s). Most of the models developed so far have utilized condition vectors as a tool for property injection into the generative process. In many cases, this was done to condition the generated molecules to have drug-like properties. VAEs ~\cite{ mitton2021graph,richards2022conditional,nemoto2023novo}, GANs ~\cite{kadurin2017drugan, de2018molgan, xie2023helixgan} and sequence based (language) models \cite{arus2019randomized,blaschke2020reinvent, wang2023petrans,bagal2021molgpt} have been used for molecule generation tasks, in this regard. Reinforcement learning (RL) has also been used for this purpose, with reward-penalty functions guiding models towards desired molecular characteristics in the respective latent space. \cite{blaschke2020reinvent, abbasi2022designing,perron2022deep}. This approach results in optimized molecule production; however, obtaining drug-like de novo molecules is not sufficient to yield desired activities against biomolecular targets. One of the fundamental objectives in drug design is to come up with small molecules that will selectively interact with the desired target. Although there are a few recent studies that present prototype models \cite{liu2022generating, wang2022relation, gebauer2022inverse, shi2022pocket2drug, uludougan2022exploiting, rozenberg2022semi, li2022generative, zhang2023universal}, AI-driven target-specific drug design is a highly novel and under-studied field with a great potential to contribute to rational drug design. Incorporating protein features into the process of molecule generation is the most sensible way of designing targeted molecules, which is the approach adopted in conventional structure-based drug design. 
However, achieving this task in AI-driven de novo design is difficult, mainly due to the extremely high complexity of the interactions between small molecules and target proteins. \\
In this study, we propose DrugGEN, a new de novo drug design system, an end-to-end framework, that generates target-specific small molecules using GANs, transformers ~\cite{vaswani2017attention} and graph representation learning \cite{kipf2016semi}. DrugGEN is composed of two serially connected GANs, in which graph transformer encoder and decoder modules learn the representation of both small molecule ligands and target proteins, and generate novel structures for the given target, in the format of molecular graphs. The first GAN module of DrugGEN (GAN1) aims to learn the distributions of fundamental properties (e.g., physicochemical and topological properties) of real molecules from the given data to generate new drug-like small molecules that are valid and stable. The generator network of the second GAN (GAN2-generator) takes the de novo molecules generated by GAN1 and processes them together with protein feature graphs. The output of GAN2-generator are compared with the known (real) bioactive ligands (inhibitors) of the selected target protein in GAN2-discriminator, to learn the structural distribution of those real inhibitors. This approach is essential for transforming de novo generated molecules into ligands that interact with the selected target. Different variations of the DrugGEN model were constructed and evaluated, in terms of both the generation efficiency and the properties of the output molecules. With the aim of evaluating DrugGEN in a use-case, we generated de novo inhibitors for the AKT1 protein, which is critically important to develop effective treatments against certain types of cancer \cite{mroweh2021targeting}.
\section{Methods}
\subsection{Data}
To train our deep generative models, three different types of data (i.e., compounds, proteins, and bioactivities) were retrieved from different data sources. The compound dataset, which includes atomic, physicochemical, and structural properties of drug and drug candidate molecules, was used as the input of our models (for both the GAN1 and GAN2 modules) as our "real" samples. The compound dataset we utilized in this study was retrieved from ChEMBL \cite{mendez2019chembl}, which is a chemistry database containing curated high-quality data regarding drug-like small molecules and their experimentally measured activities on biological targets. We employed ChEMBL v29 which is composed of a total of 1,914,648 small molecules. The heavy atom distribution histogram of the ChEMBL dataset is given in Figure S1, which is used to determine the threshold for the maximum number of heavy atoms in the compounds in our model. Based on the median value and standard deviation of this distribution, we created the ChEMBL compound dataset composed of 1,588,865 small molecules with a maximum number of 45 heavy atoms.\\
We utilized biological assemblies \cite{krissinel2007inference} obtained from the Protein Data Bank (PDB) \cite{burley2019rcsb} as our protein dataset. There are 57,925 biological assembly models in PDB, in total. Here, we only obtained the models that belong to our target protein, namely RAC-alpha serine/threonine-protein kinase (gene name: AKT1), a member of the non-specific serine/threonine protein kinase class (EC number: 2.7.11.1). The human protein kinase AKT mainly has two domains, which are kinase and pleckstrin homology (PH) (Figure S2) \cite{du2005regulation}. We constructed the AKT1 protein feature vector (see section 2.2) using the kinase domain structure (PDB id: "4GV1" \cite{addie2013discovery}) since the main ligand binding region lies within this domain. \\
The third and final data type to be used in DrugGEN system training is experimental bioactivities, which is based on quantitative measurements of physical interactions between drug-like compounds and their target proteins. The bioactivity data was retrieved from the ChEMBL database. We applied various filters for standardization, such as target type: "single protein", assay type: "binding assay", standard type: "=" and pChEMBL value: "not null" (i.e., curated activity data points). Then, bioactivity data belonging to the AKT1 target protein were selected from the filtered bioactivity dataset. The finalized dataset contains ligand interactions of the human AKT1 (CHEMBL4282) protein with a pChEMBL value equal to or greater than 6 (i.e., IC50 <= 1 µM) as well as SMILES notations of these ligands. This activity dataset was extended by including drug molecules from the DrugBank database \cite{wishart2018drugbank} that are known to interact with human AKT1 protein. With the filtering of molecules with sizes exceeding 45 heavy atoms, a total of ~1,600 bioactivity data points, which also means ~1,600 small molecule ligands, were obtained for the training of AKT1-specific generative models.
\subsection{Featurization}
DrugGEN utilizes graph representations of input molecules, each composed of two parts; an annotation matrix (contains information about the atom types) and an adjacency matrix (contains information about the presence of atomic bonds/interactions and their types). The annotation and adjacency matrices of the compounds were created using the RDKit \cite{landrum2013rdkit} library based on the SMILES notations of the molecules. The annotation matrix of the compounds in the ChEMBL dataset is a matrix with the size 45*13, based on 12 types of atoms (atom types: C, O, N, F, K, S, B, P, Br, Ca, Cl, As) and one for the null (i.e., no atoms) case. The number of rows of the matrix: 45, defines the maximum length (the number of heavy atoms) of the molecule to be generated, while the number of columns: 13, defines the atom types. The extra (13th) column was included for the cases where no atoms are to be included in that position in the molecule (i.e., the "null" atom). The adjacency matrix is a 45*45*5 dimensional matrix that displays whether there are covalent bonds between the atoms of the molecule (0th: no bond, 1st: single, 2nd: double, 3rd: triple, and 4th: aromatic). \\
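As an illustration of this encoding, the sketch below builds the two matrices for a hand-coded toy molecule. This is a simplified stand-in for the paper's RDKit-based pipeline, not the authors' code: an explicit atom list and bond list replace SMILES parsing, and only the matrix layout (45 x 13 annotation, 45 x 45 x 5 adjacency, channel 0 = "no bond", last column = "null" atom) follows the description above.

```python
import numpy as np

MAX_ATOMS = 45
ATOM_TYPES = ['C', 'O', 'N', 'F', 'K', 'S', 'B', 'P', 'Br', 'Ca', 'Cl', 'As']  # + 1 "null" column
BOND_CHANNELS = 5  # 0: no bond, 1: single, 2: double, 3: triple, 4: aromatic

def featurize(atoms, bonds):
    """Build the (45 x 13) annotation and (45 x 45 x 5) adjacency matrices.

    atoms: list of element symbols; bonds: list of (i, j, bond_channel) tuples.
    """
    annot = np.zeros((MAX_ATOMS, len(ATOM_TYPES) + 1))
    adj = np.zeros((MAX_ATOMS, MAX_ATOMS, BOND_CHANNELS))
    adj[:, :, 0] = 1.0                       # default everything to the "no bond" channel
    for idx, symbol in enumerate(atoms):
        annot[idx, ATOM_TYPES.index(symbol)] = 1.0
    annot[len(atoms):, -1] = 1.0             # pad the remaining rows with the "null" atom
    for i, j, channel in bonds:
        adj[i, j, 0] = adj[j, i, 0] = 0.0    # clear "no bond", set the bond type symmetrically
        adj[i, j, channel] = adj[j, i, channel] = 1.0
    return annot, adj

# Toy example: an ethanol-like heavy-atom skeleton C-C-O with two single bonds.
annot, adj = featurize(['C', 'C', 'O'], [(0, 1, 1), (1, 2, 1)])
print(annot.shape, adj.shape)  # (45, 13) (45, 45, 5)
```

Summing the adjacency tensor over its last axis always yields 1 per atom pair, which is the one-hot property the graph transformer consumes.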
Since proteins are much larger in size and more complex compared to small molecules, presenting the structural information of a whole protein to the generative model would significantly increase the computational complexity and add noise to the system, which in turn would make it difficult to train an accurate model. In order to overcome this problem, we generated protein features by solely using functionally important regions of proteins, the binding sites/regions. To construct the binding sites of proteins, we employed the coordinates (on the 3-D plane) of protein-ligand complexes, obtained from PDB. In DrugGEN, target proteins are defined at the atomic level, with the aim of constructing their features at the same level as compounds. The atom types are standardized by converting the data from the PDB file format to the PDBQT file format, which contains reduced atom types. Also, hydrogen atoms have been added to proteins to mimic their active forms in nature. For these operations, protein and ligand processing scripts within the AutoDockTools4 \cite{morris2009autodock4} were used. To determine which atoms of the protein are to be included in the binding site feature vector, a cut-off distance between protein and ligand atoms was determined, using Euclidean distances. This value was selected as 9 Angstroms (A), based on the literature \cite{piana2012evaluating}. Thus, the atoms of a protein within a maximum distance of 9 A from all ligand atoms were recorded as its binding site. Figure S3 displays the constructed binding region of the AKT1 protein kinase domain structure. \\
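The cut-off selection can be sketched as follows, with synthetic coordinates in place of the processed PDBQT atoms; we read the criterion as "within 9 Å of at least one ligand atom", which is the usual binding-site convention, though the exact reading is an assumption here:

```python
import numpy as np

def binding_site_atoms(protein_xyz, ligand_xyz, cutoff=9.0):
    """Indices of protein atoms within `cutoff` Å of any ligand atom (Euclidean distance)."""
    # Pairwise distance matrix of shape (n_protein, n_ligand) via broadcasting.
    diff = protein_xyz[:, None, :] - ligand_xyz[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return np.where(dist.min(axis=1) <= cutoff)[0]

# Three protein atoms on the x-axis; one ligand atom at x = 1.
protein = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [20.0, 0.0, 0.0]])
ligand = np.array([[1.0, 0.0, 0.0]])
site = binding_site_atoms(protein, ligand)
print(site)  # [0 1]
```

The atom 20 Å away is excluded, while the two atoms at 1 Å and 4 Å from the ligand fall inside the 9 Å shell.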
In protein adjacency matrices, both covalent bonds and non-covalent interactions between atoms are included, with the aim of expressing the structure in a precise manner. The PDBeChem web service (https://www.ebi.ac.uk/pdbe-srv/pdbechem/) was used to define the covalent bonds between atoms in the binding site. The Python library Interfacea (https://github.com/JoaoRodrigues/interfacea/tree/master) was used to define the types of non-covalent interactions, both between atoms in the same residue and between inter-residue atoms. As a result, a 450*8-sized protein annotation matrix containing a total of 450 atoms belonging to 7 types (i.e., C: aliphatic carbon, N: non H-bonding nitrogen, OA: acceptor 2 H-bonds oxygen, A: aromatic carbon, SA: acceptor 2 H-bonds sulphur, NA: acceptor 1 H-bond Nitrogen, HD: donor 1 H-bond hydrogen) and 1 additional type to account for the absence of atoms (the null case), was constructed for AKT1. Within the adjacency matrix, there are 4 types of covalent bonds and 6 types of non-covalent bonds (i.e., covalent: single, double, triple, and aromatic; non-covalent: ionic, hydrogen bond, cation-pi, hydrophobic, pi-stacking, and t-stacking). Again, an extra dimension is used for the "no bond" case. The finalized adjacency matrix has the size of 450*450*11.
\subsection{Architecture of DrugGEN}
The DrugGEN model is built on the Generative Adversarial Network (GAN) ~\cite{goodfellow2020generative} architecture and takes inspiration from the StackGAN ~\cite{zhang2017stackgan} model to create a two-fold system. DrugGEN has 5 model variations, each with its unique sample generation routine; below, we define the default DrugGEN model (called DrugGEN-Prot) and its construction mechanism. The other model variations are defined in detail in Section 2.6. Figure 1 shows the overall workflow of the DrugGEN system. At the first stage (GAN1), given a random noise $z$, the generator $G_1$ (a graph transformer encoder) creates the annotation and adjacency matrices of a supposed molecule (Figure 1A). These matrices are then fed to the discriminator network $D_1$, together with real small molecules, to be assigned to the groups of "real" and "fake" (Figure 1B). \\
At the second stage (GAN2), the annotation and adjacency matrices of de novo molecules generated by $G_1$ are given to the second generator, $G_2$ (a graph transformer decoder), as $G_1(z)$ (i.e., the $Q$, $V$ and $A_m$ vectors in Figure 1C). In addition to $G_1(z)$, the target protein's annotation and adjacency matrices are also given to the model (the $K$ and $A_k$ vectors, respectively, in Figure 1C). As a result, the finalized de novo generated molecule is the output of the function $G_2(G_1(z), K, A_k)$. $D_{2}$ takes as input the real inhibitor molecules that are experimentally shown to inhibit the selected target protein, together with the output of $G_2$ (Figure 1D), and distinguishes them from each other. The details of each module are provided below.
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth,trim = 2cm 1.5cm 2cm 2cm,clip=True]{figure1.pdf}
\caption{\textbf{(A)} The generator (G1) of GAN1 consists of an MLP and a graph transformer encoder module. The generator encodes the given noise input into a new representation; \textbf{(B)} the MLP-based discriminator (D1) of GAN1 compares the generated de novo molecules to the real ones in the training dataset, scoring them for their assignment to the classes of "real" and "fake" molecules; \textbf{(C)} the generator (G2) of GAN2 makes use of the transformer decoder architecture to process target protein features and the GAN1-generated de novo molecules together. The output of the second generator (G2) is the modified molecules, based on the given protein features; \textbf{(D)} the second discriminator (D2) takes the modified de novo molecules and known inhibitors of the given target protein and scores them for their assignment to the classes of "real" and "fake" inhibitors.
}
\label{fig:img1}
\end{figure*}
\paragraph{GAN1 Generator:}
The generator module employs the transformer encoder architecture ~\cite{vaswani2017attention} and operates on graph-based data ~\cite{dwivedi2020generalization}. For this, both the annotation and adjacency matrices need to be processed in the same module. The annotation matrix contains the types and number of atoms in the molecule. The adjacency matrix represents the bonds between the atoms in the molecule, i.e., the edges in the graph. A molecular graph is defined as $G = (V, E)$, where $V$ is the set of nodes and $E$ is the set of edges representing connections between nodes. Each node is indexed as $v_i \in V$ with $i = 1, \dots, n$, and a connection between nodes $v_i$ and $v_j$ is denoted as $E_{i,j}$. Each node and edge label is described in Section 2.2. We define the adjacency matrix $A$ with $A_{i,j} \in \{0, \dots, 4\}$ (according to the type of edge -bond- between the respective vertices -atoms-, or the lack thereof). We define the annotation matrix $N$ with $N_i \in \{0, \dots, 12\}$ (according to the type of the vertex -atom-, or the lack thereof). We used a maximum of 45 heavy atoms per molecule, which sets the molecular graph size to 45*13. Details regarding the dimensions and content of the annotation and adjacency matrices are given in Section 2.2. \\
The input (composed of noise) is fed through individual MLPs for the annotation and adjacency matrices, each of which consists of four layers (i.e., input: 16, 2 hidden: 64 each, and output: 128 dimensions). In summary, the MLPs are utilized to create embeddings of the annotation and adjacency matrices with $d_k$ (default: 128) dimensions. Afterward, the input is fed to the transformer encoder module, which has a depth of 8 encoder layers with 8 multi-head attention heads each. Here, the input is first processed by layer normalization and then sent to the self-attention mechanism. In the classic transformer architecture, the $Q$, $K$ and $V$ variables are representations of the same input sequence. Attention is calculated as the scaled dot product of $Q$ and $K$; after that, the attention is multiplied by $V$ to form the final product \cite{vaswani2017attention}. In the graph transformer setting, $Q_{m_{1}}$, $K_{m_{1}}$ and $V_{m_{1}}$ are the variables representing the annotation matrix of the molecule. However, here, the attention weights are calculated as the multiplication of the adjacency matrix ($A_{m_{1}}$) of the molecules with the scaled dot product of $Q_{m_{1}}$ and $K_{m_{1}}$. Then, the attention weights are multiplied with $V_{m_{1}}$ to create the final representation of the annotation matrix. The new representation of the adjacency matrix is the concatenated version of the attention weights, as described in the studies by Dwivedi et al. (2020) and Vignac et al. (2022) \cite{dwivedi2020generalization,vignac2022digress}. For our default model, the output dimension size of the transformer is 128 for both the annotation and adjacency matrices. The calculation of the attention mechanism is formulated below:
\begin{equation}
Attention_{m_{1}}(Q_{m_{1}}, K_{m_{1}}, V_{m_{1}}) = softmax(\frac{Q_{m_{1}}K_{m_{1}}^{T}}{\sqrt{d_{k}}}A_{m_{1}})V_{m_{1}}
\end{equation}
In this equation $Q_{m_{1}}$, $K_{m_{1}}$, and $V_{m_{1}}$ denote the annotation matrix of the molecules while $A_{m_{1}}$ denotes their adjacency matrix. $d_k$ is the dimension of the transformer encoder module and it is used to scale the attention weights. \\
Multiplying the attention scores with the adjacency matrix injects the bond (edge) information into the attention weights. After the final products are created in the attention mechanism, both the annotation and adjacency matrices are forwarded to layer normalization. The normalized matrices are summed with the initial matrices (the ones before the attention mechanism) to create a residual connection. Finally, these matrices are fed to separate feedforward layers, which concludes the processing of the annotation and adjacency matrices.
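A single-head numpy sketch of the adjacency-modulated attention in Eqn. 1 follows. One plausible reading of the product with $A_{m_{1}}$ (used, e.g., in edge-feature graph transformers) is an elementwise multiplication of the score matrix with the adjacency weights; that interpretation, and the toy dimensions, are assumptions of this sketch rather than the exact DrugGEN implementation.

```python
# Sketch of Eqn. 1: Attention(Q, K, V) = softmax((QK^T / sqrt(d_k)) * A) V,
# where A carries the bond information into the attention weights.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_attention(Q, K, V, A):
    """Q, K, V: (n, d_k) node features; A: (n, n) adjacency weights."""
    d_k = Q.shape[-1]
    scores = (Q @ K.T) / np.sqrt(d_k)   # scaled dot product
    scores = scores * A                 # inject bond information (assumed elementwise)
    return softmax(scores) @ V

rng = np.random.default_rng(0)
n, d = 4, 8
Q, K, V = rng.normal(size=(3, n, d))
A = np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # chain graph
out = graph_attention(Q, K, V, A)
print(out.shape)  # (4, 8)
```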
\paragraph{GAN2 Generator:}
The second generative network modifies the molecules that were previously generated by GAN1, with the aim of generating binders for the given target protein. The $G_{2}$ module utilizes the transformer decoder architecture ~\cite{vaswani2017attention}. The transformer decoder module has a depth of 8 decoder layers and uses 8 multi-head attention heads for each. For our default model, both the input and output dimension sizes of the transformer decoder are 128. DrugGEN's graph transformer decoder network, $G_{2}$, takes both $G_{1}(z)$, the data generated by $G_1$, and the protein features as input (Figure 1C). Protein self-attention is calculated in the transformer decoder module as described in Equation 1 for molecules. Interactions between molecules and proteins are processed inside the multi-head attention module of the transformer decoder. Here, the molecule and protein features are combined by taking their scaled dot product, and thus, new molecular matrices are created. The attention is calculated as shown in the formula below:
\begin{equation}
\begin{split}
Attention_{m_{2}}(Q_{m_{2}}, K_{p}, V_{m_{2}}) = softmax(\frac{Q_{m_{2}}K_{p}^{T}}{\sqrt{d_{k}}}(A_p A_{m_{2}}))V_{m_{2}}
\end{split}
\end{equation}
In this equation, $Q_{m_{2}}$ and $V_{m_{2}}$ denote the annotation matrix of the molecules, while $K_{p}$ denotes the annotation matrix of the protein. Superscript T denotes the transpose function. $A_p$ and $A_{m_{2}}$ correspond to the representation of protein and molecule adjacency matrices, respectively. $d_k$ is the dimension of the transformer decoder module used to scale attention weights. Apart from the attention mechanism, further processing of the molecular matrices follows the same workflow as the transformer encoder in $G_1$. The output molecules of this module are the final products of DrugGEN and are forwarded to $D_2$.
\paragraph{GAN1 and GAN2 Discriminators:}
The purpose of the discriminator in GANs is to compare the synthetic (fake) data, $G(z)$, generated by the generator with the real data, $x$, and classify its input samples as fake or real. Both discriminators in DrugGEN (Figure 1B and 1D) are constructed using MLPs, and they take their input as flat, one-dimensional vectors. These vectors are created by concatenating the flattened versions of the annotation and adjacency matrices. The GAN1 and GAN2 discriminators do not share parameters; however, they have the same modularity and size. The sizes of the layers in both MLP discriminators are 256, 128, 64, 32, 16, and 1, respectively, from input to output. The last layer ends with a single neuron and a $\tanh$ activation function to map each sample to a value in $[-1, 1]$. A theoretically perfect discriminator should map a real molecule to 1 and a generated molecule to -1.
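The discriminator described above can be sketched as a numpy forward pass. The hidden-layer activation (ReLU) and the random weight initialization are assumptions, since the text only specifies the layer sizes and the $\tanh$ output.

```python
# Sketch of the MLP discriminator: flattened, concatenated annotation and
# adjacency matrices pass through layers 256-128-64-32-16-1, ending in tanh
# so outputs land in [-1, 1] (real ~ 1, fake ~ -1).
import numpy as np

LAYER_SIZES = [256, 128, 64, 32, 16, 1]

def init_mlp(input_dim, rng):
    dims = [input_dim] + LAYER_SIZES
    return [(rng.normal(scale=0.1, size=(i, o)), np.zeros(o))
            for i, o in zip(dims[:-1], dims[1:])]

def discriminator(x, params):
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)   # ReLU hidden layers (assumed)
    W, b = params[-1]
    return np.tanh(x @ W + b)            # maps scores into [-1, 1]

rng = np.random.default_rng(1)
flat_input = rng.normal(size=(2, 512))   # batch of 2 flattened graphs (toy size)
params = init_mlp(512, rng)
scores = discriminator(flat_input, params)
print(scores.shape)  # (2, 1)
```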
\subsection{Loss function}
DrugGEN utilizes the WGAN loss in model training ~\cite{arjovsky2017wasserstein}. Since DrugGEN is composed of two GANs, losses of these two networks are combined with each other. We reformulated the WGAN loss for end-to-end training of a two-stage GAN system, and the formula below is obtained:
\begin{equation}
\begin{split}
L = (\mathbb{E}_{x\sim{p_{r}(x)}}[D_1(x)] - &\mathbb{E}_{z\sim{p_{g}(z)}}[D_1(G_1(z))])\\ + & (\mathbb{E}_{\tilde{x}\sim{p_{r}(\tilde{x})}}[D_2(\tilde{x})]\\ - &\mathbb{E}_{z\sim{p_{g}(z)}}[D_2(G_2(G_1(z), K_p, A_p))])
\end{split}
\end{equation}
where $x$ denotes the real molecules used by the first discriminator of DrugGEN, obtained from ChEMBL and DrugBank; $\tilde{x}$ denotes the real molecules, which interact with the selected target protein, used by the second discriminator of DrugGEN; $z$ denotes the noise distribution, the input of the first generator of DrugGEN; $K_p$ denotes the annotation matrix and $A_p$ the adjacency matrix of the protein; $p_{r}$ denotes the real data distribution and $p_{g}$ the generated data distribution. It has been shown in the literature that using a gradient penalty (GP) improves the performance of WGAN ~\cite{gulrajani2017improved}. For this reason, we utilized GP, whose loss term is formulated as:
\begin{equation}
L_{GP} = \lambda \mathbb{E}_{\hat{x}\sim{p_{\hat{x}}(\hat{x})}}[(|| \nabla_{\hat{x}} \tilde{D}({\hat{x}})||_2 - 1)^2]
\end{equation}
where $\lambda$ denotes a penalty coefficient; $\hat{x}$ denotes data coming from: (i) $x$ (GAN1's real data), (ii) $\tilde{x}$ (GAN2's real data), and (iii) generated samples. $p_{\hat{x}}(\hat{x})$ refers to sampling uniformly along straight lines between pairs of points from the data distribution $p_r$ and generator distribution $p_g$ ~\cite{gulrajani2017improved}. Also, $\tilde{D}$ denotes the aggregation of $D_1$ and $D_2$ as $D_1$ + $D_2$. By combining Eqn. 3 and Eqn. 4, we obtained our finalized loss function as:\\
\begin{equation}
L_{total} = L + L_{GP}
\end{equation}
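The gradient-penalty term of Eqn. 4 can be sketched as follows. Following WGAN-GP, points $\hat{x}$ are sampled uniformly along straight lines between real and generated samples. To keep the sketch autograd-free, the critic here is linear, $D(x) = x \cdot w$, whose gradient with respect to $x$ is simply $w$; a real implementation would differentiate through the full discriminator network.

```python
# Sketch of the WGAN gradient penalty: lambda * E[(||grad D(x_hat)|| - 1)^2],
# with x_hat interpolated between real and fake samples.
import numpy as np

def gradient_penalty(real, fake, w, lam=10.0, rng=None):
    rng = rng or np.random.default_rng(0)
    eps = rng.uniform(size=(real.shape[0], 1))       # one epsilon per sample
    x_hat = eps * real + (1.0 - eps) * fake          # points on the straight lines
    grad = np.broadcast_to(w, x_hat.shape)           # gradient of the linear critic
    norms = np.linalg.norm(grad, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)

rng = np.random.default_rng(2)
real, fake = rng.normal(size=(2, 8, 16))
w = np.full(16, 0.25)                                # ||w|| = 1, so the penalty is 0
print(gradient_penalty(real, fake, w))
```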
\subsection{The Training Scheme and Hyperparameters}
DrugGEN was trained with the ChEMBL compounds dataset (used as the real molecules input of the model). The ChEMBL dataset was randomly split into train and test partitions with a 90\% to 10\% ratio. The training procedure was carried out via two alternative routes, in different runs. In the first route, model training starts with a "warm-up" session with only GAN1, which continues for several epochs, and then the GAN2 training is activated. Here, DrugGEN training starts with $D_1$ and continues with $G_1$; after that, the model trains $D_2$ and $G_2$ consecutively. The second route trains GAN1 and GAN2 together from scratch, in which training starts with $D_1$ and $D_2$ and continues with $G_1$ and $G_2$. The default DrugGEN model uses a learning rate of 0.00001 for $G_1$, $G_2$, $D_1$, and $D_2$. The batch size was 128 and the model was run for 50 epochs in total (according to our observations, loss values did not change significantly after 50 epochs). The Adam optimizer was utilized with $\beta_1$: 0.9 and $\beta_2$: 0.999. Training each of the models reported below took approximately 2 days using 10 Intel Gold CPUs and a single NVIDIA A5000 GPU. $G_1$ of DrugGEN consisted of ~37 million parameters, while $G_2$ consisted of ~640 million parameters. Both discriminators, on the other hand, had ~2.7 million parameters for the default model.
\subsection{Model Variations}
With the aim of generating target-based drug-candidate de novo molecules using the DrugGEN system, we implemented several different models with slight variations in terms of architectural design and input data. All models presented below were tested with respect to their generative performance. \\
\textbf{DrugGEN-Prot (the default model)} is the one shown in Figure 1 and explained in sections 2.3 and 2.4. It incorporates protein features to the transformer decoder module of GAN2 (together with the de novo molecules generated by GAN1) to direct the target centric molecule design. The model employs end-to-end training and computes a single finalized loss by combining the losses of both discriminators. \\
\textbf{DrugGEN-CrossLoss} is composed of only one GAN (i.e., GAN1 of the default model), and is implemented with the aim of shifting the distribution of the input data to the distribution of real inhibitors of the selected target within a simpler system. In this model, the input of the GAN1 generator is the real molecules (i.e., ChEMBL dataset) instead of the random noise (to ease the learning process) and the GAN1 discriminator compares the de novo generated molecules with the real inhibitors of the given target protein. \\
\textbf{DrugGEN-Ligand} is composed of two GANs, similar to DrugGEN-Prot (and utilizes the same training routine and hyperparameters); however, it incorporates AKT1 inhibitor molecule features as the input of the GAN2-generator's transformer decoder instead of the protein features. The objective of the transformer decoder module of this model is to generate molecules that are structurally similar to AKT1 inhibitors. \\
\textbf{DrugGEN-RL} utilizes the same general architecture as DrugGEN-Ligand and is constructed with the aim of designing structurally diverse de novo molecules by avoiding the use of molecular scaffolds already present in the training set. DrugGEN-RL is inspired by the paradigm of reinforcement learning (RL). Here, the objective of the RL module is to decrease the Tanimoto similarity between the scaffolds (computed with the Bemis-Murcko approach \cite{bemis1999properties}) of generated and training set molecules (i.e., ChEMBL molecules for GAN1, and real AKT1 inhibitors for GAN2) by defining the similarity between them as an additional (penalty) term in the loss function.\\
\textbf{DrugGEN-NoTarget} is our base model, which is composed of only one GAN (i.e., GAN1 of the default model). This model only focuses on learning the chemical properties of real molecules from the ChEMBL training dataset, as a result, there is no target-specific generation. DrugGEN-NoTarget uses the same hyperparameters as the default model.
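The scaffold-similarity penalty used by DrugGEN-RL can be sketched as follows. In the actual model, Bemis-Murcko scaffolds and their fingerprints would come from a cheminformatics toolkit such as RDKit; here, fingerprints are abstracted as Python sets of "on" bits so the Tanimoto arithmetic itself is visible. The max-similarity aggregation and the weight are illustrative assumptions, not DrugGEN's exact formulation.

```python
# Sketch of a scaffold-similarity penalty: the extra loss term grows with
# the maximum Tanimoto similarity between a generated molecule's scaffold
# fingerprint and any training-set scaffold fingerprint.
def tanimoto(a: set, b: set) -> float:
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def scaffold_penalty(generated_fp: set, training_fps: list, weight: float = 1.0) -> float:
    """Penalize reuse of training-set scaffolds (higher similarity -> higher loss)."""
    return weight * max(tanimoto(generated_fp, fp) for fp in training_fps)

train = [{1, 2, 3, 4}, {5, 6, 7, 8}]
novel = {9, 10, 11}          # shares no bits with the training scaffolds
copied = {1, 2, 3, 4}        # identical to a training scaffold
print(scaffold_penalty(novel, train))   # 0.0
print(scaffold_penalty(copied, train))  # 1.0
```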
\subsection{Performance Metrics}
The performance of the models was evaluated using several molecular generation metrics presented in the MOSES benchmark platform \cite{polykovskiy2020molecular}, including validity, uniqueness, internal diversity (IntDiv), and novelty, to assess the efficiency of the generative capabilities of the models. Validity is calculated as the percentage of the generated data that can be parsed by the SMILES conversion function of the RDKit \cite{landrum2013rdkit} Python package. Uniqueness checks the dissimilarity of each molecule against the other molecules in the same batch. IntDiv measures the mean dissimilarity (based on Tanimoto similarity over Morgan fingerprints) between a molecule and the other molecules in the same batch. Novelty is the ratio of the generated molecules that are not present in the real (training) dataset to all generated molecules. Higher values of validity, uniqueness, IntDiv and novelty indicate better performance. Quantitative estimate of drug-likeness (QED), partition coefficient (logP), synthetic accessibility (SA), similarity to nearest neighbor (SNN), and MOSES filters measure the fitness of the generated molecules to be considered as drug candidates. Calculation details of these metrics can be found in Polykovskiy et al. (2020) and Landrum (2013) \cite{polykovskiy2020molecular,landrum2013rdkit}.
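The uniqueness and novelty metrics above reduce to set arithmetic once molecules are mapped to canonical SMILES strings; in the actual benchmark, that canonicalization (and the validity check) is done with RDKit, which this sketch abstracts away.

```python
# Sketch of the uniqueness and novelty metrics over canonical SMILES strings.
def uniqueness(generated: list) -> float:
    """Fraction of generated molecules that are distinct."""
    return len(set(generated)) / len(generated)

def novelty(generated: list, training: set) -> float:
    """Fraction of generated molecules absent from the training set."""
    return sum(1 for m in generated if m not in training) / len(generated)

gen = ["CCO", "CCO", "c1ccccc1", "CCN"]
train = {"CCO", "CCC"}
print(uniqueness(gen))        # 3 distinct out of 4 = 0.75
print(novelty(gen, train))    # 2 of 4 not in the training set = 0.5
```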
\vspace{-0.3cm}
\subsection{Molecule Filtering and Selection Procedure}
With the aim of identifying the best novel candidates for targeting the AKT1 protein, the de novo molecules generated by DrugGEN models were filtered according to the following operations: 1) based on the Tanimoto similarity (calculated on Morgan fingerprints) against both the ChEMBL dataset molecules and real AKT1 inhibitors, molecules that have a Tanimoto similarity higher than 70\% to the training sets are eliminated; 2) using Lipinski's \cite{lipinski2012experimental} and Veber's rules \cite{veber2002molecular}; and 3) applying the PAINS (pan-assay interference compounds) \cite{baell2010new} filter, which identifies and eliminates false positive molecules in biological screening assays, such as redox cyclers, toxoflavins, and polyhydroxylated natural phytochemicals. \\
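The rule-based part of this filtering can be sketched as below. The descriptors are assumed to be precomputed (in practice, a toolkit such as RDKit would supply them); the thresholds are the standard Lipinski and Veber cut-offs. The PAINS substructure screen is omitted here, since it requires a substructure-matching engine.

```python
# Sketch of drug-likeness filtering with Lipinski's rule of five and
# Veber's rules, applied to precomputed molecular descriptors.
def passes_lipinski(mw, logp, hbd, hba):
    """MW <= 500, logP <= 5, H-bond donors <= 5, H-bond acceptors <= 10."""
    return mw <= 500 and logp <= 5 and hbd <= 5 and hba <= 10

def passes_veber(rotatable_bonds, tpsa):
    """Rotatable bonds <= 10 and topological polar surface area <= 140."""
    return rotatable_bonds <= 10 and tpsa <= 140

def drug_like(desc):
    return (passes_lipinski(desc["mw"], desc["logp"], desc["hbd"], desc["hba"])
            and passes_veber(desc["rot"], desc["tpsa"]))

ok = {"mw": 320.0, "logp": 2.1, "hbd": 2, "hba": 5, "rot": 4, "tpsa": 75.0}
too_big = {"mw": 720.0, "logp": 6.3, "hbd": 6, "hba": 12, "rot": 14, "tpsa": 190.0}
print(drug_like(ok), drug_like(too_big))  # True False
```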
The molecules that remained after the filtering operations were analyzed via molecular docking. For the docking study, the crystal structure of AKT1 (PDB code "4GV1" \cite{addie2013discovery}) was prepared with the Protein Preparation Wizard \cite{madhavi2013protein} program in Schrödinger Suite 2021 \cite{Maestro} using the OPLS2005 force field. Missing hydrogen atoms were added, and water molecules were removed. The pH was set as 7.4 ± 1.0 for atom typing. The binding site of AKT1 was defined as ALA-177, LYS-179, LYS-182, ALA-212, GLU-228, ALA-230, GLU-234, GLU-278, THR-291, ASP-292, detected with PLIP 2.5.4 \cite{adasme2021plip} and cross-checked with the binding data published in the literature \cite{addie2013discovery}. These findings were integrated and used for grid generation. The Glide software \cite{friesner2006extra} was used to find the best binding pose for each ligand. The van der Waals radius scaling factor was set to 1.0 and the partial charge cut-off value was set to 0.25. The docking calculations were made in Standard Precision mode (GlideScore SP). Results were visualized with PyMOL \cite{PyMOL}. \\
In parallel to the docking analysis, our filtered de novo molecules were also subjected to deep learning-based drug-target interaction prediction against the AKT1 protein using our previously developed system, DEEPScreen. DEEPScreen employs readily available 2-D image-based structural Kekulé representations (300-by-300 pixels) of compounds as input and processes them via deep convolutional neural networks, which classify them as active or inactive against the target of interest \cite{rifaioglu2020deepscreen}. For this, we first trained an AKT1 target model using the experimental bioactivity data of this protein in ChEMBLv30 as our training dataset, which was composed of 1338 active and 1666 inactive molecules (activity threshold: pChEMBL value of 7). We randomly split the compound dataset into train, validation and test folds (80\%, 10\% and 10\% of the data, respectively). We optimized hyper-parameters with respect to the scoring metrics on the validation fold and measured the overall performance of the model on the independent hold-out test fold. The test performance of the model was precision: 0.91, recall: 0.92, F1-score: 0.92, MCC: 0.85, which was considered satisfactory. Afterwards, the 2-D structural images of the de novo molecules were generated using the same parameters and run through the trained AKT1 model in prediction mode. Details regarding the DEEPScreen system and its training can be obtained from \cite{rifaioglu2020deepscreen}.
\section{Results and Discussion}
The DrugGEN model is a two-fold generative adversarial network that utilizes transformer architecture to design target-specific molecules. The performance of DrugGEN in designing de novo molecules was assessed using well-known benchmarking metrics. Additionally, target-specific properties of the generated molecules were evaluated through further in silico experiments, such as molecular docking and deep learning-based drug-target interaction prediction. We finally explored de novo molecules in comparison to real molecules via t-SNE based embedding and visualization in 2-D. \\
Hepatocellular carcinoma (HCC) is the most prevalent form of liver cancer, accounting for 75-80\% of cases. It is also the third leading cause of cancer-related deaths, with nearly 830,000 deaths worldwide \cite{sung2021global}. The PI3K/AKT/mTOR pathway is one of the most important signaling pathways related to HCC. It regulates various fundamental cellular processes, such as survival, cell growth, and metabolism. Systemic treatment became the only option for advanced-stage HCC patients when the first FDA-approved multikinase inhibitor (MKI), Sorafenib, prolonged the survival of patients by 2-3 months. Due to the high toxicity and low response rate of these treatment options, combinatorial treatment strategies involving antiangiogenic agents and immune checkpoint inhibitors (ICIs), such as atezolizumab + bevacizumab, showed improved patient survival over Sorafenib \cite{zhang2022recent}. Unfortunately, the available drugs are unable to effectively improve the overall HCC survival rate. Recent studies have uncovered several proteins that may serve as potential targets for the treatment of liver cancer \cite{luo2021ythdf1}. After evaluating potential kinases known to have a role in the development of HCC on the KinMap platform \cite{eid2017kinmap}, and considering the availability of in vitro studies and kinase activity assays, we identified the AKT kinases (RAC-alpha/beta/gamma serine/threonine-protein kinases) as promising targets for the treatment of HCC. Therefore, AKT1 targeting was selected as the use-case of DrugGEN.
\subsection{Performance Evaluation of DrugGEN Models}
In this analysis, the DrugGEN models (see Section 2.6) were compared with each other and with other models from the literature over various benchmarking metrics. For this, we generated approximately 10,000 de novo molecules from each of the fully trained DrugGEN models (50,000 in total) and subjected these molecules to MOSES benchmarking \cite{polykovskiy2020molecular}. In Table 1, we report the generative performance over the validity, uniqueness, novelty, internal diversity (where higher values are better) and FCD (lower is better) metrics. According to Table 1, DrugGEN displayed competitive results on the ChEMBL dataset against both baseline models (i.e., ORGAN and NAT GraphVAE) and more recent methods such as MolGPT, MGM, RELATION and MCMG (methods were selected based on the availability of models trained on the ChEMBL dataset, for a fair comparison). Overall, all DrugGEN models have a high efficiency on molecule generation tasks. The validity score of DrugGEN-Prot was low compared to the other DrugGEN models and the compared methods. We believe this is due to the high complexity of this model. Unlike DrugGEN-Prot, the remaining DrugGEN models do not utilize protein features (instead, the transformer decoder input is either real AKT1 inhibitors or ChEMBL molecules), which decreases the overall complexity and facilitates the learning process. On the other hand, DrugGEN-Prot has the highest uniqueness score among all DrugGEN models, which is also similar to the best methods included in this analysis. DrugGEN models do not suffer from low novelty, as opposed to ORGAN, MolGPT, and MGM. ORGAN relies on GANs composed of an RNN (as generator) and a CNN (as discriminator) to generate conditioned molecules \cite{guimaraes2017objective}. DrugGEN utilizes graph transformers, a novel architecture, inside GANs, which in turn yields higher novelty and validity scores. Models like MolGPT and MGM also utilize the transformer architecture.
However, the usage of transformers in generative modeling may result in lower novelty scores due to overfitting to the training data \cite{zuo2021taming}. DrugGEN probably does not suffer greatly from overfitting because it relies on probabilistic discrimination instead of a cross-entropy loss. The IntDiv metric indicates the diversity of structures among the generated samples. The DrugGEN-Prot model outperforms every other model for which IntDiv is reported, indicating its ability to learn different molecular structures from the training dataset and yield a diverse structural distribution during the generation process. The FCD score measures the proximity of the distribution of the generated molecules' physicochemical characteristics to that of the training dataset \cite{mahmood2021masked}. We measured the FCD scores of our models against real AKT1 inhibitors (not against the ChEMBL dataset). The only method that can be directly compared to DrugGEN over this metric is RELATION, since it also generates target-specific molecules. DrugGEN-Ligand and DrugGEN-RL have the two lowest FCD scores, indicating that these models capture the physicochemical properties of real AKT1 inhibitors better than the others.\\
\vspace{-0.5cm}
\begin{table}[h]
\caption{Molecule generation performance of DrugGEN and other methods: MCMG \cite{wang2021multi}, RELATION \cite{wang2022relation}, MGM \cite{mahmood2021masked}, MolGPT \cite{bagal2021molgpt}, ORGAN \cite{guimaraes2017objective}, and NAT GraphVAE \cite{kwon2019efficient}, calculated in terms of fundamental benchmarking metrics. All models are trained on the ChEMBL dataset.}
\centering
\vspace{0.5cm}
{\begin{tabular}{@{}llllll@{} }
\hline
Models & Val. $(\uparrow)$ & Uniq. $(\uparrow)$ & Nov. $(\uparrow)$ & IntDiv $(\uparrow)$ & FCD $(\downarrow)$ \\
\hline
MCMG & - & 0.105 & 0.889 & 0.622 & - \\
RELATION & 0.854 & \textbf{1.0} & \textbf{1.0} & 0.773 & 13.3 \\
MGM & 0.849 & \textbf{1.0} & 0.722 & - & 0.845\\
MolGPT & \textbf{0.994} & \textbf{1.0} & 0.797 & 0.857 & 0.067\\
ORGAN & 0.379 & 0.841 & 0.687 & - & -\\
NAT GraphVAE & 0.830 & 0.944 & \textbf{1.0} & - & 0.016 \\
\hline
DrugGEN-Prot & 0.484 & 0.939 & 0.992 & \textbf{0.887} & 16.65 \\
DrugGEN-CrossLoss & 0.820 & 0.790 & \textbf{1.0} & 0.878 & 18.38 \\
DrugGEN-Ligand & 0.859 & 0.881 & 0.981 & 0.877 & 5.382 \\
DrugGEN-RL & 0.867 & 0.873 & \textbf{1.0} & 0.830 & 6.068\\
DrugGEN-NoTarget & 0.820 & 0.857 & \textbf{1.0} & 0.885 & 17.27 \\
\hline
\end{tabular}}{}
\end{table}
In Table 2, we report the Wasserstein distance values (measured between generated molecules and real AKT1 inhibitors) based on the QED, SA, and logP metrics (where lower values are better) \cite{polykovskiy2020molecular}, together with the percentage of molecules that can pass the MOSES filters (where higher values are better) for the DrugGEN models. The QED values in Table 2 show that the molecules generated by all DrugGEN models have drug-like properties highly similar to those of real AKT1 inhibitors, many of which are actual drug candidates. The DrugGEN-Ligand and DrugGEN-RL models generate molecules with logP values and SA scores close to those of AKT1 inhibitors. It is possible that incorporating protein features into the process (in DrugGEN-Prot) makes it challenging to design synthetically accessible molecules, since this operation tries to shift the distribution of generated molecules toward the features of the given target protein, instead of the known inhibitors of that protein. MOSES filters eliminate structures that do not have drug-like patterns via a multi-level filtering operation \cite{polykovskiy2020molecular}. DrugGEN models (except DrugGEN-Prot and DrugGEN-NoTarget) perform well on this metric.
\subsection{Target-centric Assessment of Generated Molecules}
Based on the results in Tables 1 and 2, DrugGEN models (especially DrugGEN-Prot) generate highly diverse and novel molecules that are significantly different from known inhibitors of the selected target, which can be interpreted as a highly positive outcome if these molecules really interact with the target of interest. However, it is not possible to evaluate properties related to target activity using the abovementioned metrics. To evaluate this, we carried out further computational analyses, in which we first filtered the same ~50,000 de novo generated molecules that were subjected to the benchmarking analysis (according to the protocol explained in Section 2.8). After applying the Lipinski, Veber and PAINS filters (to ensure drug-like properties), ~43,000 of them remained in our dataset. Distributions of the physicochemical properties of both the de novo generated molecules and real AKT1 inhibitors are given in Figure S5. Afterwards, a molecular docking analysis was performed (see Section 2.8) on these filtered de novo molecules, using the AKT1 crystal structure (PDB id: "4GV1" \cite{addie2013discovery}) as the template. Figure 2A displays box plots of the docking scores (i.e., binding free energies - $\Delta$G) obtained from docking (with respect to the scores of the 100 molecules from each DrugGEN model with the best binding properties), where a lower binding free energy indicates a higher predicted activity. It is observed from Figure 2A that all five models were able to generate molecules with high potential, surpassing the score of the native ligand in the crystal complex structure "4GV1" \cite{addie2013discovery} (shown as the horizontal red dashed line). Among the models, the molecules generated by DrugGEN-CrossLoss and DrugGEN-Prot have the lowest binding free energies against the AKT1 protein.\\
We also carried out a deep learning-based drug-target interaction (DTI) prediction analysis using the DEEPScreen system \cite{rifaioglu2020deepscreen} against the AKT1 protein (see Section 2.8) on the same de novo molecule dataset as the one used in docking. DEEPScreen is entirely independent of the DrugGEN models, ensuring there is no bias in this analysis. Out of the ~43,000 molecules obtained from all DrugGEN models, DEEPScreen predicted ~18,000 of them as active against AKT1, and 301 of them received high confidence scores (0.83 or higher, where min: 0 and max: 1; the full confidence score histogram is given in Figure S4). These results indicate that DrugGEN can generate target-specific molecules with high potential, which is in accordance with the results of the docking analysis.\\
\vspace{-0.5cm}
\begin{table}[h]
\centering
\caption{Wasserstein distance-based scores (i.e., QED, logP, SA and filters) of DrugGEN models. These metrics generally indicate drug-likeness and are measured in terms of distances between generated molecules and real AKT1 inhibitors. Filters metric calculates the percentage of generated molecules that can pass MOSES filters \cite{polykovskiy2020molecular}.}
\vspace{0.5cm}
{\begin{tabular}{@{}lllll@{}}
\hline
Models & QED $(\downarrow)$ & logP $(\downarrow)$ & SA $(\downarrow)$ & Filters $(\uparrow)$ \\
\hline
DrugGEN-Prot & 0.044 & 0.554 & 1.048 & 51.6\% \\
DrugGEN-CrossLoss & 0.096 & 0.481 & 0.354 & 83.6\% \\
DrugGEN-Ligand & 0.034 & \textbf{0.170} & \textbf{0.070} & 90.6\% \\
DrugGEN-RL& \textbf{0.030} & 0.218 & 0.280 & \textbf{91.7\%} \\
DrugGEN-NoTarget & 0.094 & 0.520 & 0.476 & 78.3\% \\
\hline
\end{tabular}}{}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth,trim=0cm 2.5cm 0cm 2.5cm,clip=true]{tsne.pdf}
\caption{\textbf{(A)} Score box plots displaying the binding free energies measured in the docking analysis of de novo molecules generated by different DrugGEN models (against the AKT1 protein structure). Molecules generated by DrugGEN-CrossLoss showed the best average binding affinity, outperforming the score of the native ligand in the utilized AKT1 complex structure (shown by the red dashed line, PDB id: "4GV1" \cite{addie2013discovery}). Also, all models generated at least a few molecules with binding free energies lower than that of the native ligand; \textbf{(B)} 2-D visualization of t-SNE embeddings of de novo molecules generated by different DrugGEN models (each model is denoted with a distinct color).}
\label{fig:img2}
\end{figure}
To further explore the de novo molecules, we carried out a t-SNE embedding \cite{van2008visualizing} (of 10,000 randomly selected ChEMBL training molecules, 1,600 real AKT1 inhibitors, and the same 50,000 de novo DrugGEN molecules as in the previous analyses, 10,000 from each model) and visualized it in 2-D, as shown in Figure 2B (the t-SNE parameters were perplexity=50 and number of iterations=500). Individual visualizations for each DrugGEN model within the same overall t-SNE embedding are also given in Figure S6. In both figures, each dot corresponds to a molecule, colors indicate their source, and the Euclidean distances indicate structural similarities based on the Tanimoto coefficient applied to molecular fingerprints (i.e., ECFP \cite{rogers2010extended}). Randomly selected ChEMBL molecules spread around the embedding space due to their high structural diversity. On the other hand, it is possible to observe distinct clusters formed by models such as DrugGEN-Prot, which are within the space of real drug-like (ChEMBL) molecules, but still away from individual clusters formed by ChEMBL molecules. De novo molecules of DrugGEN-NoTarget and real AKT1 inhibitors are mostly far away from each other, which is an expected result since the generation process is not target-specific for this model. Similarly, molecules of the DrugGEN-Ligand and DrugGEN-RL models are also far away from real AKT1 inhibitors, which is also indicated by the low average docking performance of these two models (Figure 2A). Interestingly, these models managed to capture the physicochemical distribution of real AKT1 inhibitors (Table 2) but generated structurally dissimilar molecules. In fact, structural dissimilarity was the main aim behind the DrugGEN-RL model; as a result, this model can be considered successful in this regard.\\
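For binary fingerprints such as ECFP, the Tanimoto similarity underlying the distances mentioned above reduces to a set operation on the indices of the set bits. A minimal sketch (the toy bit indices below are illustrative, not real ECFP output, which a cheminformatics library would generate):

```python
# Tanimoto coefficient between two binary fingerprints, represented as
# sets of the indices of their set bits: |A & B| / |A | B|.

def tanimoto(fp_a, fp_b):
    a, b = set(fp_a), set(fp_b)
    union = len(a | b)
    return len(a & b) / union if union else 0.0

# Toy fingerprints: 3 shared bits out of 6 distinct bits overall.
fp1 = {3, 17, 42, 101, 256}
fp2 = {3, 17, 99, 256}

print(tanimoto(fp1, fp2))  # → 0.5
```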
As an overall evaluation, DrugGEN-CrossLoss and DrugGEN-Prot can be considered the most successful models in terms of target-specific generation: the former generated molecules with better docking scores thanks to utilizing real AKT1 inhibitors in the transformer decoder module (which conditions the generation process towards the physical and chemical features of known inhibitors), while the latter generated highly diverse molecules with topological complementarity to the AKT1 binding pocket thanks to utilizing target protein features (instead of its known inhibitors). Protein binding pocket graphs were significantly larger than molecular graphs, which increased the complexity of the model, and thus, of the learning process. On top of that, we only used the AKT1 protein during the training of DrugGEN-Prot, which probably limited the generalization capability of the model. These two factors are probably the main reasons behind the relatively lower generation scores of DrugGEN-Prot (Table 1).\\
Finally, we manually selected the 33 most promising generated molecules (from our drug-like de novo molecule dataset with satisfactory docking scores) via expert curation, and presented them as our best candidates to target AKT1 (Figure 3). We checked the structural similarity of these molecules to database records and found that they are completely novel at the threshold of 60\% Tanimoto similarity (compared to all molecules in the ChEMBL database). We showcase one molecule among the 33 (shown as Mol\_{10} in Figure 3), which can be denoted as a Pyrrolo[1,2-a]pyrimidin-4(1H)-one derivative. Figure 4 displays the reference crystal complex structure of AKT1 with its native ligand 0XZ (PDB id: "4GV1" \cite{addie2013discovery}), for which the binding free energy was measured as -8.781 kcal/mol using the exact same docking protocol (Figure 4A), together with the best docking pose of Mol\_{10} with a binding free energy of -9.686 kcal/mol (Figure 4B). The reference co-crystal complex structure "AKT1 - 0XZ" (Figure 4A) and the complex model obtained from the docking of the selected de novo molecule "AKT1 - Mol\_{10}" (Figure 4B) share the same three binding residues. Two of these residues, ALA-177 and LYS-179, form hydrophobic interactions with both ligands. Hydrophobic interactions are more prominent in the AKT1 - Mol\_{10} complex, whereas in the crystal structure hydrogen bonds are evident. This may contribute to the difference in binding affinities.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth,trim=0.5cm 3cm 0cm 4cm,clip=true]{mols.pdf}
\caption{Promising de novo molecules to effectively target AKT1 protein (generated by DrugGEN models), selected via expert curation from the dataset of molecules with sufficiently low binding free energies (< -9 kcal/mol) in the molecular docking experiment.
}
\label{fig:img3}
\end{figure}
\section{Conclusion}
In this study, we developed the DrugGEN system to automatically design target-specific drug candidate molecules. The main idea behind DrugGEN was to combine GANs and the graph transformer architecture to create a system that can design inhibitor candidates given the target protein. DrugGEN can be seen as an umbrella system that contains several models implemented to investigate target-centric generation capabilities. DrugGEN models perform similarly to SOTA models (or better in some cases) on performance metrics, which points to their high generation efficiency and capacity. In terms of physicochemical metrics such as QED, SA, and logP, we showed that DrugGEN models can generate de novo molecules with molecular characteristics similar to those of real inhibitors of the AKT1 protein. Further computational analyses were carried out to assess the target-specific characteristics of the de novo molecules, the results of which indicated their high potential in AKT1 targeting. With the intention of presenting a tool that the community can utilize, as well as for reproducibility-related purposes, we openly shared the code base, datasets, all results and trained models of DrugGEN in our repository at https://github.com/HUBioDataLab/DrugGEN. \\
As a next step, we plan to train DrugGEN models with (i) a larger target-centric dataset including additional proteins and/or their real inhibitors, and (ii) an increased number of parameters to optimize (i.e., larger models) to provide room for improvement; both of which would yield more successful learning in terms of molecular structural properties corresponding to given target characteristics. In further studies, selected de novo molecules will be subjected to chemical synthesis and subsequent in vitro cell-based experiments to validate AKT1 targeting and observe phenotypic effects on HCC cell lines. We also plan to improve the molecular generation process by incorporating high-level functional properties of real drugs and drug candidate molecules (along with their structural features, which are already utilized in the current version), in the context of heterogeneous biomedical knowledge graphs~\cite{dougan2021crossbar}, into the model training procedure. This architecture is intended to facilitate the understanding of the relationship between the structural and functional properties of small molecules and thereby enhance the design process.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth,trim=0cm 3cm 0cm 3cm,clip=true]{dock.pdf}
\caption{\textbf{(A)} AKT1 crystal complex structure with the cocrystallized ligand: 0XZ (PDB id: "4GV1" \cite{addie2013discovery}); \textbf{(B)} The best pose in the molecular docking of the showcase de novo generated (predicted) inhibitor of AKT1: the Pyrrolo[1,2-a]pyrimidin-4(1H)-one derivative (Mol\_{10} in Figure 3), to the structurally resolved binding site of AKT1.}
\label{fig:img4}
\end{figure}
\section*{Acknowledgments}
This project was supported by TUBITAK-BIDEB 2247-A National Leader Researchers Program under project number 120C123.
\section*{Author information \& contributions}
AU: Atabey Ünlü (atabeyunlu36@gmail.com), \\
EC: Elif Çevrim (candaselif@gmail.com), \\
AS: Ahmet Sarıgün (ahmet.sarigun@metu.edu.tr), \\
HC: Hayriye Çelikbilek (hayriye.celikbilek@gmail.com), \\
HAG: Heval Ataş Güvenilir (hevalatas@gmail.com), \\
AK: Altay Koyaş (altay.koyas@metu.edu.tr), \\
DCK: Deniz Cansen Kahraman (cansen@metu.edu.tr), \\
AO: Abdurrahman Olğaç (aolgac@gazi.edu.tr), \\
AR: Ahmet Rifaioğlu (ahmet.rifaioglu@uni-heidelberg.de), \\
TD: Tunca Doğan (tuncadogan@gmail.com).
TD conceptualized the study and designed the general methodology. EC and HAG prepared the datasets and handled the protein featurization process. AS, AU, ASR and TD determined the technical details of the fundamental model architecture. AU and AS prepared the original codebase, also designed and implemented initial models. AU designed, implemented, trained, tuned and evaluated numerous model variants and constructed the finalized DrugGEN models. DCK and AK selected the protein target by reviewing the literature. HC further evaluated the de novo generated molecules in the context of drug-target interaction prediction (via DEEPScreen). EC and AO conducted the molecular filtering operations and physics based (docking) experiments. AU, EC, AO, and TD evaluated and discussed findings. EC, AU, AS and TD visualized the results and prepared the figures in the manuscript. AU, EC, AS, HC, HAG, and TD wrote the manuscript. AU, EC, AS, and TD prepared the repository. TD, ASR and AO supervised the overall study. All authors approved the manuscript.
\bibliographystyle{unsrt}
package org.squeryl.dsl
import ast._
import boilerplate._
import fsm._
import org.squeryl.internals._
import org.squeryl._
import java.sql.{SQLException, ResultSet}
import collection.mutable.ArrayBuffer
import scala.runtime.NonLocalReturnControl
trait QueryDsl
extends DslFactory
with WhereState[Unconditioned]
with ComputeMeasuresSignaturesFromStartOrWhereState
with StartState
with QueryElements[Unconditioned]
with JoinSignatures
with FromSignatures {
outerQueryDsl =>
def using[A](session: Session)(a: =>A): A =
_using(session, a _)
private def _using[A](session: Session, a: ()=>A): A = {
val s = Session.currentSessionOption
try {
if(s != None) s.get.unbindFromCurrentThread
try {
session.bindToCurrentThread
val r = a()
r
}
finally {
session.unbindFromCurrentThread
session.cleanup
}
}
finally {
if(s != None) s.get.bindToCurrentThread
}
}
def transaction[A](s: Session)(a: =>A) =
_executeTransactionWithin(s, a _)
/**
* 'transaction' causes a new transaction to begin and commit after the block execution, or rollback
* if an exception occurs. Invoking a transaction always causes a new one to
* be created, even if called in the context of an existing transaction.
*/
def transaction[A](a: =>A): A =
if(! Session.hasCurrentSession)
_executeTransactionWithin(SessionFactory.newSession, a _)
else {
val s = Session.currentSession
val res =
try {
s.unbindFromCurrentThread
_executeTransactionWithin(SessionFactory.newSession, a _)
}
finally {
s.bindToCurrentThread
}
res
}
/**
* 'inTransaction' will create a new transaction if none is in progress and commit it upon
* completion or rollback on exceptions. If a transaction already exists, it has no
* effect, the block will execute in the context of the existing transaction. The
* commit/rollback is handled in this case by the parent transaction block.
*/
def inTransaction[A](a: =>A): A =
if(! Session.hasCurrentSession)
_executeTransactionWithin(SessionFactory.newSession, a _)
else {
a
}
private def _executeTransactionWithin[A](s: Session, a: ()=>A) = {
val c = s.connection
if(c.getAutoCommit)
c.setAutoCommit(false)
var txOk = false
try {
val res = _using(s, a)
txOk = true
res
}
catch {
case e:NonLocalReturnControl[_] =>
{
txOk = true
throw e
}
}
finally {
try {
if(txOk)
c.commit
else
c.rollback
}
catch {
case e:SQLException => {
Utils.close(c)
if(txOk) throw e // if an exception occurred before the commit/rollback we don't want to obscure the original exception
}
}
try{c.close}
catch {
case e:SQLException => {
if(txOk) throw e // if an exception occurred before the close we don't want to obscure the original exception
}
}
}
}
implicit def __thisDsl:QueryDsl = this
private class QueryElementsImpl[Cond](override val whereClause: Option[()=>LogicalBoolean])
extends QueryElements[Cond]
def where(b: =>LogicalBoolean): WhereState[Conditioned] =
new QueryElementsImpl[Conditioned](Some(b _))
def &[A](i: =>TypedExpressionNode[A]): A =
FieldReferenceLinker.pushExpressionOrCollectValue[A](i _)
implicit def singleColumnQuery2RightHandSideOfIn[A](q: Query[A]) =
new RightHandSideOfIn[A](q.copy(false).ast)
implicit def measureSingleColumnQuery2RightHandSideOfIn[A](q: Query[Measures[A]]) =
new RightHandSideOfIn[A](q.copy(false).ast)
implicit def measureOptionSingleColumnQuery2RightHandSideOfIn[A](q: Query[Measures[Option[A]]]) =
new RightHandSideOfIn[A](q.copy(false).ast)
implicit def groupSingleColumnQuery2RightHandSideOfIn[A](q: Query[Group[A]]) =
new RightHandSideOfIn[A](q.copy(false).ast)
implicit def groupOptionSingleColumnQuery2RightHandSideOfIn[A](q: Query[Group[Option[A]]]) =
new RightHandSideOfIn[A](q.copy(false).ast)
trait SingleRowQuery[R] {
self: Query[R] =>
}
trait SingleColumnQuery[T] {
self: Query[T] =>
}
trait ScalarQuery[T] extends Query[T] with SingleColumnQuery[T] with SingleRowQuery[T]
implicit def scalarQuery2Scalar[T](sq: ScalarQuery[T]) = sq.head
implicit def countQueryableToIntTypeQuery[R](q: Queryable[R]) = new CountSubQueryableQuery(q)
private def _countFunc = count
class CountSubQueryableQuery(q: Queryable[_]) extends Query[LongType] with ScalarQuery[LongType] {
private val _inner:Query[Measures[LongType]] =
from(q)(r => compute(_countFunc))
def iterator = _inner.map(m => m.measures).iterator
def Count: ScalarQuery[LongType] = this
def statement: String = _inner.statement
// Paginating a Count query makes no sense; perhaps an org.squeryl.internals.Utils.throwError() would be more appropriate here:
def page(offset:Int, length:Int) = this
def distinct = this
def forUpdate = _inner.forUpdate
def dumpAst = _inner.dumpAst
def ast = _inner.ast
protected[squeryl] def invokeYield(rsm: ResultSetMapper, rs: ResultSet) =
_inner.invokeYield(rsm, rs).measures
override private[squeryl] def copy(asRoot:Boolean) = new CountSubQueryableQuery(q)
def name = _inner.name
private[squeryl] def give(rsm: ResultSetMapper, rs: ResultSet) =
q.invokeYield(rsm, rs)
}
implicit def singleColComputeQuery2ScalarQuery[T](cq: Query[Measures[T]]): ScalarQuery[T] = new ScalarMeasureQuery[T](cq)
implicit def singleColComputeQuery2Scalar[T](cq: Query[Measures[T]]) = new ScalarMeasureQuery[T](cq).head
class ScalarMeasureQuery[T](q: Query[Measures[T]]) extends Query[T] with ScalarQuery[T] {
def iterator = q.map(m => m.measures).iterator
def distinct = this
def forUpdate = q.forUpdate
def dumpAst = q.dumpAst
// TODO: think about this: paginating a Count query makes no sense; perhaps an org.squeryl.internals.Utils.throwError() would be more appropriate here.
def page(offset:Int, length:Int) = this
def statement: String = q.statement
def ast = q.ast
protected[squeryl] def invokeYield(rsm: ResultSetMapper, rs: ResultSet) =
q.invokeYield(rsm, rs).measures
override private[squeryl] def copy(asRoot:Boolean) = new ScalarMeasureQuery(q)
def name = q.name
private[squeryl] def give(rsm: ResultSetMapper, rs: ResultSet) =
q.invokeYield(rsm, rs).measures
}
implicit def queryable2OptionalQueryable[A](q: Queryable[A]) = new OptionalQueryable[A](q)
implicit def view2QueryAll[A](v: View[A]) = from(v)(a=> select(a))
def update[A](t: Table[A])(s: A =>UpdateStatement):Int = t.update(s)
def manyToManyRelation[L <: KeyedEntity[_],R <: KeyedEntity[_],A <: KeyedEntity[_]](l: Table[L], r: Table[R]) = new ManyToManyRelationBuilder(l,r,None)
def manyToManyRelation[L <: KeyedEntity[_],R <: KeyedEntity[_],A <: KeyedEntity[_]](l: Table[L], r: Table[R], nameOfMiddleTable: String) = new ManyToManyRelationBuilder(l,r,Some(nameOfMiddleTable))
class ManyToManyRelationBuilder[L <: KeyedEntity[_], R <: KeyedEntity[_]](l: Table[L], r: Table[R], nameOverride: Option[String]) {
def via[A <: KeyedEntity[_]](f: (L,R,A)=>Pair[EqualityExpression,EqualityExpression])(implicit manifestA: Manifest[A], schema: Schema) = {
val m2m = new ManyToManyRelationImpl(l,r,manifestA.erasure.asInstanceOf[Class[A]], f, schema, nameOverride)
schema._addTable(m2m)
m2m
}
}
class ManyToManyRelationImpl[L <: KeyedEntity[_], R <: KeyedEntity[_], A <: KeyedEntity[_]](val leftTable: Table[L], val rightTable: Table[R], aClass: Class[A], f: (L,R,A)=>Pair[EqualityExpression,EqualityExpression], schema: Schema, nameOverride: Option[String])
extends Table[A](nameOverride.getOrElse(schema.tableNameFromClass(aClass)), aClass, schema, None) with ManyToManyRelation[L,R,A] {
thisTableOfA =>
def thisTable = thisTableOfA
schema._addRelation(this)
private val (_leftEqualityExpr, _rightEqualityExpr) = {
var e2: Option[Pair[EqualityExpression,EqualityExpression]] = None
from(leftTable, rightTable, thisTableOfA)((l,r,a) => {
e2 = Some(f(l,r,a))
select(None)
})
val e2_ = e2.get
// invert Pair[EqualityExpression,EqualityExpression] if it has been declared in reverse:
if(_viewReferedInExpression(leftTable, e2_._1)) {
assert(_viewReferedInExpression(rightTable, e2_._2))
e2_
}
else {
assert(_viewReferedInExpression(leftTable, e2_._2))
assert(_viewReferedInExpression(rightTable, e2_._1))
(e2_._2, e2_._1)
}
}
private def _viewReferedInExpression(v: View[_], ee: EqualityExpression) =
ee.filterDescendantsOfType[SelectElementReference[Any]].filter(
_.selectElement.origin.asInstanceOf[ViewExpressionNode[_]].view == v
).headOption != None
private val (leftPkFmd, leftFkFmd) = _splitEquality(_leftEqualityExpr, thisTable, false)
private val (rightPkFmd, rightFkFmd) = _splitEquality(_rightEqualityExpr, thisTable, false)
val leftForeignKeyDeclaration =
schema._createForeignKeyDeclaration(leftFkFmd.columnName, leftPkFmd.columnName)
val rightForeignKeyDeclaration =
schema._createForeignKeyDeclaration(rightFkFmd.columnName, rightPkFmd.columnName)
private def _associate[T <: KeyedEntity[_]](o: T, m2m: ManyToMany[T,A]): A = {
val aInst = m2m.assign(o)
try {
thisTableOfA.insertOrUpdate(aInst)
}
catch {
case e:SQLException =>
if(Session.currentSession.databaseAdapter.isNotNullConstraintViolation(e))
throw new SquerylException(
"the " + 'associate + " method created and inserted association object of type " +
posoMetaData.clasz.getName + " that has NOT NULL columns, please use the other signature of " + 'ManyToMany +
" that takes the association object as argument : associate(o,a) for association objects that have NOT NULL columns", e)
else
throw e
}
}
def left(leftSideMember: L): Query[R] with ManyToMany[R,A] = {
val q =
from(thisTableOfA, rightTable)((a,r) => {
val matchClause = f(leftSideMember, r, a)
outerQueryDsl.where(matchClause._1 and matchClause._2).select(r)
})
new DelegateQuery(q) with ManyToMany[R,A] {
private def _assignKeys(r: R, a: AnyRef): Unit = {
val leftPk = leftPkFmd.get(leftSideMember.asInstanceOf[AnyRef])
val rightPk = rightPkFmd.get(r.asInstanceOf[AnyRef])
leftFkFmd.set(a, leftPk)
rightFkFmd.set(a, rightPk)
}
def associationMap =
from(thisTableOfA, rightTable)((a,r) => {
val matchClause = f(leftSideMember, r, a)
outerQueryDsl.where(matchClause._1 and matchClause._2).select((r,a))
})
def assign(o: R, a: A) = {
_assignKeys(o, a.asInstanceOf[AnyRef])
a
}
def associate(o: R, a: A): A = {
assign(o, a)
thisTableOfA.insertOrUpdate(a)
a
}
def assign(o: R): A = {
val aInstAny = thisTableOfA._createInstanceOfRowObject
val aInst = aInstAny.asInstanceOf[A]
_assignKeys(o, aInstAny)
aInst
}
def associate(o: R): A =
_associate(o,this)
def dissociate(o: R) =
thisTableOfA.deleteWhere(a0 => _whereClauseForAssociations(a0) and _equalityForRightSide(a0, o)) > 0
def _whereClauseForAssociations(a0: A) = {
val leftPk = leftPkFmd.get(leftSideMember.asInstanceOf[AnyRef])
leftFkFmd.get(a0.asInstanceOf[AnyRef])
FieldReferenceLinker.createEqualityExpressionWithLastAccessedFieldReferenceAndConstant(leftPk)
}
def _equalityForRightSide(a0: A, r: R) = {
val rightPk = rightPkFmd.get(r.asInstanceOf[AnyRef])
rightFkFmd.get(a0.asInstanceOf[AnyRef])
FieldReferenceLinker.createEqualityExpressionWithLastAccessedFieldReferenceAndConstant(rightPk)
}
def dissociateAll =
thisTableOfA.deleteWhere(a0 => _whereClauseForAssociations(a0))
def associations =
thisTableOfA.where(a0 => _whereClauseForAssociations(a0))
}
}
def right(rightSideMember: R): Query[L] with ManyToMany[L,A] = {
val q =
from(thisTableOfA, leftTable)((a,l) => {
val matchClause = f(l, rightSideMember, a)
outerQueryDsl.where(matchClause._1 and matchClause._2).select(l)
})
new DelegateQuery(q) with ManyToMany[L,A] {
private def _assignKeys(l: L, a: AnyRef): Unit = {
val rightPk = rightPkFmd.get(rightSideMember.asInstanceOf[AnyRef])
val leftPk = leftPkFmd.get(l.asInstanceOf[AnyRef])
rightFkFmd.set(a, rightPk)
leftFkFmd.set(a, leftPk)
}
def associationMap =
from(thisTableOfA, leftTable)((a,l) => {
val matchClause = f(l, rightSideMember, a)
outerQueryDsl.where(matchClause._1 and matchClause._2).select((l, a))
})
def assign(o: L, a: A) = {
_assignKeys(o, a.asInstanceOf[AnyRef])
a
}
def associate(o: L, a: A): A = {
assign(o, a)
thisTableOfA.insertOrUpdate(a)
a
}
def assign(o: L): A = {
val aInstAny = thisTableOfA._createInstanceOfRowObject
val aInst = aInstAny.asInstanceOf[A]
_assignKeys(o, aInstAny)
aInst
}
def associate(o: L): A =
_associate(o,this)
def dissociate(o: L) =
thisTableOfA.deleteWhere(a0 => _whereClauseForAssociations(a0) and _leftEquality(o, a0)) > 0
def _leftEquality(l: L, a0: A) = {
val leftPk = leftPkFmd.get(l.asInstanceOf[AnyRef])
leftFkFmd.get(a0.asInstanceOf[AnyRef])
FieldReferenceLinker.createEqualityExpressionWithLastAccessedFieldReferenceAndConstant(leftPk)
}
def _whereClauseForAssociations(a0: A) = {
val rightPk = rightPkFmd.get(rightSideMember.asInstanceOf[AnyRef])
rightFkFmd.get(a0.asInstanceOf[AnyRef])
FieldReferenceLinker.createEqualityExpressionWithLastAccessedFieldReferenceAndConstant(rightPk)
}
def dissociateAll =
thisTableOfA.deleteWhere(a0 => _whereClauseForAssociations(a0))
def associations =
thisTableOfA.where(a0 => _whereClauseForAssociations(a0))
}
}
}
def oneToManyRelation[O <: KeyedEntity[_],M](ot: Table[O], mt: Table[M]) = new OneToManyRelationBuilder(ot,mt)
class OneToManyRelationBuilder[O <: KeyedEntity[_],M](ot: Table[O], mt: Table[M]) {
def via(f: (O,M)=>EqualityExpression)(implicit schema: Schema) =
new OneToManyRelationImpl(ot,mt,f, schema)
}
class OneToManyRelationImpl[O <: KeyedEntity[_],M](val leftTable: Table[O], val rightTable: Table[M], f: (O,M)=>EqualityExpression, schema: Schema)
extends OneToManyRelation[O,M] {
schema._addRelation(this)
private def _isSelfReference =
leftTable == rightTable
// we obtain the FieldMetaDatas from the 'via' function by creating an EqualityExpression AST and then extracting the FieldMetaDatas from it;
// the FieldMetaDatas will serve to set fields (primary and foreign keys) on the objects in the relation
private val (_leftPkFmd, _rightFkFmd) = {
var ee: Option[EqualityExpression] = None
//we create a query for the sole purpose of extracting the equality (inside the relation's 'via' clause)
from(leftTable,rightTable)((o,m) => {
ee = Some(f(o,m))
select(None)
})
val ee_ = ee.get // here we have the equality AST; it contains left and right SelectElementReference nodes
// that refer to FieldSelectElements, which in turn refer to the FieldMetaData
// now build the Tuple with the left and right FieldMetaData
_splitEquality(ee_, rightTable, _isSelfReference)
}
val foreignKeyDeclaration =
schema._createForeignKeyDeclaration(_rightFkFmd.columnName, _leftPkFmd.columnName)
def left(leftSide: O): OneToMany[M] = {
val q = from(rightTable)(m => where(f(leftSide, m)) select(m))
new DelegateQuery(q) with OneToMany[M] {
def deleteAll =
rightTable.deleteWhere(m => f(leftSide, m))
def assign(m: M) = {
val m0 = m.asInstanceOf[AnyRef]
val l0 = leftSide.asInstanceOf[AnyRef]
val v = _leftPkFmd.get(l0)
_rightFkFmd.set(m0, v)
m
}
def associate(m: M)(implicit ev: M <:< KeyedEntity[_]) = {
assign(m)
rightTable.insertOrUpdate(m)
}
}
}
def right(rightSide: M): ManyToOne[O] = {
val q = from(leftTable)(o => where(f(o,rightSide)) select(o))
new DelegateQuery(q) with ManyToOne[O] {
def assign(one: O) = {
val o = one.asInstanceOf[AnyRef]
val r = rightSide.asInstanceOf[AnyRef]
val v = _rightFkFmd.get(r)
_leftPkFmd.set(o, v)
one
}
def delete =
leftTable.deleteWhere(o => f(o, rightSide)) > 0
}
}
}
/**
* returns a (FieldMetaData, FieldMetaData) where ._1 is the id of the KeyedEntity on the left or right side,
* and where ._2 is the foreign key of the association object/table
*/
private def _splitEquality(ee: EqualityExpression, rightTable: Table[_], isSelfReference: Boolean) = {
if(isSelfReference)
assert(ee.right._fieldMetaData.isIdFieldOfKeyedEntity || ee.left._fieldMetaData.isIdFieldOfKeyedEntity)
if(ee.left._fieldMetaData.parentMetaData.clasz == rightTable.classOfT &&
(!isSelfReference || (isSelfReference && ee.right._fieldMetaData.isIdFieldOfKeyedEntity)) ) {
assert(ee.right._fieldMetaData.isIdFieldOfKeyedEntity)
(ee.right._fieldMetaData, ee.left._fieldMetaData)
}
else {
assert(ee.left._fieldMetaData.isIdFieldOfKeyedEntity)
(ee.left._fieldMetaData, ee.right._fieldMetaData)
}
}
// Composite key syntactic sugar :
def compositeKey[A1,A2](a1: A1, a2: A2) =
new CompositeKey2(a1, a2)
def compositeKey[A1,A2,A3](a1: A1, a2: A2, a3: A3) =
new CompositeKey3(a1, a2, a3)
def compositeKey[A1,A2,A3,A4](a1: A1, a2: A2, a3: A3, a4: A4) =
new CompositeKey4(a1, a2, a3, a4)
def compositeKey[A1,A2,A3,A4,A5](a1: A1, a2: A2, a3: A3, a4: A4, a5: A5) =
new CompositeKey5(a1, a2, a3, a4, a5)
def compositeKey[A1,A2,A3,A4,A5,A6](a1: A1, a2: A2, a3: A3, a4: A4, a5: A5, a6: A6) =
new CompositeKey6(a1, a2, a3, a4, a5, a6)
def compositeKey[A1,A2,A3,A4,A5,A6,A7](a1: A1, a2: A2, a3: A3, a4: A4, a5: A5, a6: A6, a7: A7) =
new CompositeKey7(a1, a2, a3, a4, a5, a6, a7)
def compositeKey[A1,A2,A3,A4,A5,A6,A7,A8](a1: A1, a2: A2, a3: A3, a4: A4, a5: A5, a6: A6, a7: A7, a8: A8) =
new CompositeKey8(a1, a2, a3, a4, a5, a6, a7, a8)
def compositeKey[A1,A2,A3,A4,A5,A6,A7,A8,A9](a1: A1, a2: A2, a3: A3, a4: A4, a5: A5, a6: A6, a7: A7, a8: A8, a9: A9) =
new CompositeKey9(a1, a2, a3, a4, a5, a6, a7, a8, a9)
// Tuple to composite key conversions :
implicit def t2te[A1,A2](t: (A1,A2)) = new CompositeKey2[A1,A2](t._1, t._2)
implicit def t3te[A1,A2,A3](t: (A1,A2,A3)) = new CompositeKey3[A1,A2,A3](t._1, t._2, t._3)
implicit def t4te[A1,A2,A3,A4](t: (A1,A2,A3,A4)) = new CompositeKey4[A1,A2,A3,A4](t._1, t._2, t._3, t._4)
implicit def t5te[A1,A2,A3,A4,A5](t: (A1,A2,A3,A4,A5)) = new CompositeKey5[A1,A2,A3,A4,A5](t._1, t._2, t._3, t._4, t._5)
implicit def t6te[A1,A2,A3,A4,A5,A6](t: (A1,A2,A3,A4,A5,A6)) = new CompositeKey6[A1,A2,A3,A4,A5,A6](t._1, t._2, t._3, t._4, t._5, t._6)
implicit def t7te[A1,A2,A3,A4,A5,A6,A7](t: (A1,A2,A3,A4,A5,A6,A7)) = new CompositeKey7[A1,A2,A3,A4,A5,A6,A7](t._1, t._2, t._3, t._4, t._5, t._6, t._7)
implicit def t8te[A1,A2,A3,A4,A5,A6,A7,A8](t: (A1,A2,A3,A4,A5,A6,A7,A8)) = new CompositeKey8[A1,A2,A3,A4,A5,A6,A7,A8](t._1, t._2, t._3, t._4, t._5, t._6, t._7, t._8)
implicit def t9te[A1,A2,A3,A4,A5,A6,A7,A8,A9](t: (A1,A2,A3,A4,A5,A6,A7,A8,A9)) = new CompositeKey9[A1,A2,A3,A4,A5,A6,A7,A8,A9](t._1, t._2, t._3, t._4, t._5, t._6, t._7, t._8, t._9)
// Case statements :
def caseOf[A](expr: NumericalExpression[A]) = new CaseOfNumericalExpressionMatchStart(expr)
def caseOf[A](expr: NonNumericalExpression[A]) = new CaseOfNonNumericalExpressionMatchStart(expr)
def caseOf = new CaseOfConditionChainStart
}
Digital document categorization based on logo spotting and recognition has raised great interest in the research community, because logos in documents are a source of information for categorizing documents at low cost. In this paper, we present an approach to improve the results of our method for logo spotting and recognition based on key point matching, presented in our previous paper. First, key points from both the query document images and a given set of logos (logo gallery) are extracted and described by SIFT, and are matched in the SIFT feature space. Secondly, logo segmentation is performed using spatial density-based clustering. The contribution of this paper is to add a third step where homography is used to filter the matched key points as a post-processing stage. Finally, in the decision stage, logo classification is performed by using an accumulating histogram. Our approach is tested on a well-known benchmark database of real-world documents containing logos, and achieves good performance compared to state-of-the-art approaches.
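The key-point matching stage can be illustrated with a nearest-neighbour ratio test of the kind commonly used with SIFT descriptors: a match is kept only when the closest gallery descriptor is clearly closer than the second-closest. The sketch below uses toy 2-D descriptors in place of real 128-D SIFT vectors (which a library such as OpenCV would produce) and omits the clustering and homography-filtering stages:

```python
import math

# Lowe-style ratio test: accept a query descriptor's best match in the logo
# gallery only if best_dist < ratio * second_best_dist.

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def ratio_test_matches(query_desc, gallery_desc, ratio=0.75):
    matches = []
    for qi, q in enumerate(query_desc):
        # Distances from this query descriptor to every gallery descriptor.
        dists = sorted((euclidean(q, g), gi) for gi, g in enumerate(gallery_desc))
        (d1, gi), (d2, _) = dists[0], dists[1]
        if d1 < ratio * d2:          # unambiguous nearest neighbour
            matches.append((qi, gi))
    return matches

# Toy descriptors (illustrative values, not real SIFT output).
query = [(0.9, 0.1), (0.5, 0.5)]
gallery = [(1.0, 0.0), (0.0, 1.0), (0.52, 0.48)]
print(ratio_test_matches(query, gallery))  # → [(0, 0), (1, 2)]
```

Matches surviving the ratio test would then feed the density-based clustering and homography filtering described above.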
#include "config.h"
#include "RenderMathMLPadded.h"
#if ENABLE(MATHML)
#include <cmath>
#include <wtf/IsoMallocInlines.h>
namespace WebCore {
WTF_MAKE_ISO_ALLOCATED_IMPL(RenderMathMLPadded);
RenderMathMLPadded::RenderMathMLPadded(MathMLPaddedElement& element, RenderStyle&& style)
: RenderMathMLRow(element, WTFMove(style))
{
}
LayoutUnit RenderMathMLPadded::voffset() const
{
return toUserUnits(element().voffset(), style(), 0);
}
LayoutUnit RenderMathMLPadded::lspace() const
{
LayoutUnit lspace = toUserUnits(element().lspace(), style(), 0);
// FIXME: Negative lspace values are not supported yet (https://bugs.webkit.org/show_bug.cgi?id=85730).
return std::max<LayoutUnit>(0, lspace);
}
LayoutUnit RenderMathMLPadded::mpaddedWidth(LayoutUnit contentWidth) const
{
return std::max<LayoutUnit>(0, toUserUnits(element().width(), style(), contentWidth));
}
LayoutUnit RenderMathMLPadded::mpaddedHeight(LayoutUnit contentHeight) const
{
return std::max<LayoutUnit>(0, toUserUnits(element().height(), style(), contentHeight));
}
LayoutUnit RenderMathMLPadded::mpaddedDepth(LayoutUnit contentDepth) const
{
return std::max<LayoutUnit>(0, toUserUnits(element().depth(), style(), contentDepth));
}
void RenderMathMLPadded::computePreferredLogicalWidths()
{
ASSERT(preferredLogicalWidthsDirty());
// Determine the intrinsic width of the content.
RenderMathMLRow::computePreferredLogicalWidths();
// Only the width attribute should modify the width.
// We parse it using the preferred width of the content as its default value.
m_maxPreferredLogicalWidth = mpaddedWidth(m_maxPreferredLogicalWidth);
m_minPreferredLogicalWidth = m_maxPreferredLogicalWidth;
setPreferredLogicalWidthsDirty(false);
}
void RenderMathMLPadded::layoutBlock(bool relayoutChildren, LayoutUnit)
{
ASSERT(needsLayout());
if (!relayoutChildren && simplifiedLayout())
return;
// We first layout our children as a normal <mrow> element.
LayoutUnit contentAscent, contentDescent, contentWidth;
contentAscent = contentDescent = 0;
RenderMathMLRow::computeLineVerticalStretch(contentAscent, contentDescent);
RenderMathMLRow::layoutRowItems(contentAscent, contentDescent);
contentWidth = logicalWidth();
// We parse the mpadded attributes using the content metrics as the default value.
LayoutUnit width = mpaddedWidth(contentWidth);
LayoutUnit ascent = mpaddedHeight(contentAscent);
LayoutUnit descent = mpaddedDepth(contentDescent);
// Align children on the new baseline and shift them by (lspace, -voffset)
LayoutPoint contentLocation(lspace(), ascent - contentAscent - voffset());
for (auto* child = firstChildBox(); child; child = child->nextSiblingBox())
child->setLocation(child->location() + contentLocation);
// Set the final metrics.
setLogicalWidth(width);
setLogicalHeight(ascent + descent);
layoutPositionedObjects(relayoutChildren);
clearNeedsLayout();
}
std::optional<int> RenderMathMLPadded::firstLineBaseline() const
{
// We try and calculate the baseline from the position of the first child.
LayoutUnit ascent;
if (auto* baselineChild = firstChildBox())
ascent = ascentForChild(*baselineChild) + baselineChild->logicalTop() + voffset();
else
ascent = mpaddedHeight(0);
return std::optional<int>(std::lround(static_cast<float>(ascent)));
}
}
#endif
Q: Hugo theme link refers to container port in Docker/Nginx I've got a simple static site, generated with Hugo, that I'm building to a Docker container running Nginx. Nginx is listening on port 90. I'm encountering strange behavior where certain links try to open the container port rather than the host port (in the case of localhost, it's 8000). So for example, this link:
<a href="/documents">Docs</a>
...when moused-over shows that it will attempt to open localhost:8000/documents, which is correct, but when clicked it attempts instead to open http://localhost:90/documents/ (If I manually change the URL in the browser to http://localhost:8000/documents/, it responds fine.)
What makes this even stranger:
*
*Only certain links, specifically in the header menu, do this.
*I've used dozens of Hugo themes, and I've only encountered this issue with one of them: ZDoc. Could it be specific to this theme? That seems strange to me.
What could be causing this? I'm struggling to even know what this phenomenon is called. "Host/container port confusion"?
I'm certain it's not a misconfiguration of Nginx or Docker. I'm exposing port 90 properly in my Dockerfile:
EXPOSE 90
nginx.conf is set to listen on that port:
http {
include /etc/nginx/mime.types;
sendfile on;
server {
root /usr/share/nginx/html/;
index index.html;
server_name localhost;
listen 90;
}
}
And I'm starting the Docker container with the host port 8000 forwarding to the port Nginx is listening on:
docker run --name my-simple-site -p 8000:90 -d simple-site
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
de9cd1526034 simple-site "nginx -g 'daemon of…" 41 minutes ago Up 41 minutes 0.0.0.0:8000->90/tcp my-simple-site
A: Strangely, the fix for this was to change the link to point directly to the file: <a href="/documents/index.html">Docs</a>
I'm unclear why and would love some insight into this. Does Nginx infer a port when pointing to a directory?
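A: To add some insight on the accepted fix, assuming nginx's default redirect behavior: when a request targets a directory without a trailing slash, nginx answers with a 301 redirect that adds the slash, and by default it builds an absolute Location header using the port it is listening on (90 here), not the host port the browser used (8000). That is why <a href="/documents">Docs</a> bounces to http://localhost:90/documents/, while /documents/index.html needs no redirect and works. If you prefer keeping the directory-style links, you can tell nginx to emit relative redirects instead (the absolute_redirect directive exists since nginx 1.11.8):
http {
    include /etc/nginx/mime.types;
    sendfile on;
    server {
        root /usr/share/nginx/html/;
        index index.html;
        server_name localhost;
        listen 90;
        # Send "Location: /documents/" instead of an absolute URL
        # that leaks the container port:
        absolute_redirect off;
    }
}
After rebuilding the image, curl -I http://localhost:8000/documents should show a relative Location header, and the browser will stay on port 8000.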
\section{Introduction.}
\label{sec:1}
{\it Fold} maps are higher dimensional versions of Morse functions. They are locally represented as the projections or the products of Morse functions and the identity maps on open sets in Euclidean spaces.
{\it Round} fold maps were introduced by the author in \cite{kitazawa0.1, kitazawa0.2, kitazawa0.3}, followed by \cite{kitazawa0.4,kitazawa0.5,kitazawa0.6} for example. Studies such as \cite{kitazawasaeki1,kitazawasaeki2} have appeared recently. These maps are fundamental and strong tools for understanding the topologies and the differentiable structures of manifolds in geometric, constructive or combinatorial ways. They are also strong tools for obtaining not only (co)homology groups of these manifolds, but also more precise information such as fundamental groups, cohomology rings and differentiable structures.
Our paper is on round fold maps on $3$-dimensional closed and orientable manifolds. It has been shown that such a manifold admits a round fold map into the plane if and only if it is a so-called graph manifold. A {\it graph manifold} is, in short, a manifold obtained by gluing so-called {\it circle bundles} over surfaces or bundles over surfaces whose fibers are circles along tori.
A round fold map of a certain simplest class is said to be {\it directed}. Graph manifolds admitting such maps are characterized in terms of \cite{neumann}, namely via graphs with several labels used there, and via simpler graphs.
In short, this class consists of the manifolds whose {\it normal forms}, uniquely defined graphs with the labels, are trees.
$3$-dimensional spheres, circle bundles over spheres, Lens spaces and Seifert manifolds over spheres are of such a simplest class of $3$-dimensional manifolds.
Our Main Theorems are as follows. In our paper, elementary algebraic topology is fundamental. For related fundamental terminologies, notions, and notation, see \cite{hatcher1} for example.
\begin{MainThm}
\label{mthm:1}
If a graph manifold $M$ admits a directed round fold map into the plane ${\mathbb{R}}^2$, then for any ordered pair of elements of the 1st integral cohomology group $H^1(M;\mathbb{Z})$, the cup product is the zero element.
\end{MainThm}
This has been shown in \cite{kitazawasaeki1} in the case where the coefficient ring is the field $\mathbb{Q} \supset \mathbb{Z}$ of all rational numbers. This previous result is shown by applying \cite{doighorn} for example after our characterization of $3$-dimensional closed and orientable manifolds admitting directed round fold maps into the plane ${\mathbb{R}}^2$. See Theorems \ref{thm:4} and \ref{thm:5}, presented later, for example. This can be also regarded as an extension of a result in \cite{doighorn}, where Theorem \ref{thm:4} is applied.
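As an elementary illustration of Main Theorem \ref{mthm:1}, we can observe the following via K\"unneth formula for example.
\begin{Ex}
The $3$-dimensional torus $T^3:=S^1 \times S^1 \times S^1$ is a graph manifold: it is the total space of the trivial circle bundle over the torus $S^1 \times S^1$. Its integral cohomology ring is isomorphic to the exterior algebra generated by three elements ${\alpha}_1, {\alpha}_2, {\alpha}_3 \in H^1(T^3;\mathbb{Z})$ and ${\alpha}_i \cup {\alpha}_j$ is not the zero element for $i \neq j$. By Main Theorem \ref{mthm:1}, $T^3$ admits no directed round fold maps into the plane ${\mathbb{R}}^2$.
\end{Ex}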
\begin{MainThm}
\label{mthm:2}
There exists a family $\{M_j\}$ of countably many $3$-dimensional closed, connected and orientable manifolds admitting no directed round fold maps into the plane ${\mathbb{R}}^2$ enjoying the following properties.
\begin{enumerate}
\item Distinct manifolds in the family are mutually non-homeomorphic.
\item For each manifold $M_j$ in the family, there exists a $3$-dimensional closed, connected and orientable manifold $M_{j,0}$ enjoying the following properties.
\begin{enumerate}
\item The integral homology groups of the original manifold $M_j$ and the manifold $M_{j,0}$ are mutually isomorphic.
\item The rational cohomology rings of the original manifold $M_j$ and the manifold $M_{j,0}$ are mutually isomorphic.
\item The integral cohomology rings of the original manifold $M_j$ and the manifold $M_{j,0}$ are mutually non-isomorphic.
\item $M_{j,0}$ admits directed round fold maps into the plane ${\mathbb{R}}^2$.
\end{enumerate}
\end{enumerate}
\end{MainThm}
The organization of our paper is as follows. The second section is for preliminaries: we review fold maps and round fold maps and define, in a more rigorous manner, several notions presented in the present section. The third section is devoted to our proofs of Main Theorems.
The fourth section is a kind of appendix, explaining our Main Theorems from the viewpoint of explicit fold maps and of algebraic topology and differential topology of the manifolds admitting these maps. We see that Main Theorems exhibit phenomena in $3$-dimensional manifolds which have already been discovered for higher dimensional closed (and simply-connected) manifolds and fold maps on them. In several explicit cases, fold maps of several classes have been shown to distinguish manifolds which are similar from the viewpoint of algebraic topology or differential topology.
\ \\
{\bf Conflict of Interest.} \\
The author is a member of the project JSPS KAKENHI Grant Number JP22K18267 "Visualizing twists in data through monodromy" (Principal Investigator: Osamu Saeki). The present study is due to this project. \\
\ \\
{\bf Data availability.} \\
Data supporting our present study essentially are all in our paper.
\section{Fundamental properties and existing studies on special generic maps and the manifolds.}
\subsection{Manifolds, differentiable maps and smooth bundles}
The $k$-dimensional Euclidean space is denoted by ${\mathbb{R}}^k$ for any positive integer $k$. $\mathbb{R}:={\mathbb{R}}^1$ is for the line. ${\mathbb{R}}^2$ is for the plane. Of course $\mathbb{Z} \subset \mathbb{Q} \subset \mathbb{R}$, which is very fundamental. The Euclidean space is also a Riemannian manifold whose underlying metric is the standard Euclidean metric. Let $||x|| \geq 0$ denote the distance between $x \in {\mathbb{R}}^k$ and the origin $0 \in {\mathbb{R}}^k$. Let the $k$-dimensional unit sphere be denoted by $S^k:=\{x \in {\mathbb{R}}^{k+1} \mid ||x||=1\}$ for any integer $k \geq 0$. Let the $k$-dimensional unit disk be denoted by $D^k:=\{x \in {\mathbb{R}}^{k} \mid ||x|| \leq 1\}$ for any integer $k \geq 1$. Their topologies can be understood easily and they are $k$-dimensional smooth compact submanifolds in the Euclidean spaces.
A topological manifold is regarded as a CW complex. A smooth manifold is regarded as a polyhedron, which is defined uniquely as an object of the PL category, or equivalently, one of the piecewise smooth category. This is a so-called PL manifold. For polyhedra, and, more precisely, topological spaces regarded as CW complexes, for example, we can define their dimensions uniquely. For such a space $X$, let $\dim X$ denote its dimension.
For a differentiable map $c:X \rightarrow Y$ between differentiable manifolds $X$ and $Y$, a {\it singular} point $x \in X$ is defined as a point where the rank of the differential ${dc}_{x}$ is smaller than the minimum of $\dim X$ and $\dim Y$. Let $S(c)$ denote the set of all singular points of $c$; we call this the {\it singular set} of $c$.
\begin{Def}
\label{def:1}
A smooth map $c:X \rightarrow Y$ from a smooth manifold $X$ with no boundary into another smooth manifold $Y$ with no boundary with $\dim X \geq \dim Y$ is said to be a {\it fold} map if at each singular point $p$, it is represented by the form $(x_1,\cdots,x_{\dim X}) \mapsto (x_1,\cdots,x_{\dim Y-1},{\Sigma}_{j=1}^{\dim X-\dim Y-i(p)+1} {x_{\dim Y+j-1}}^2-{\Sigma}_{j=1}^{i(p)} {x_{\dim X-i(p)+j}}^2)$ for suitable local coordinates and some integer $0 \leq i(p) \leq \frac{\dim X-\dim Y+1}{2}$.
\end{Def}
Morse functions are fold maps. We also need arguments on Morse functions in our paper; for this, see \cite{milnor2, milnor3} for example. \cite{golubitskyguillemin} explains elementary and some advanced studies in singularity theory of differentiable maps and gives fundamental expositions on singularities of fold maps. \cite{saeki1} is a pioneering paper on some explicit relations between fold maps and closed manifolds admitting them.
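For example, in the case $\dim Y=1$, the local form in Definition \ref{def:1} specializes to the well-known normal form of a Morse function around each singular point $p$: for suitable local coordinates, it is represented by the form
$$(x_1,\cdots,x_{\dim X}) \mapsto {\Sigma}_{j=1}^{\dim X-i(p)} {x_{j}}^2-{\Sigma}_{j=1}^{i(p)} {x_{\dim X-i(p)+j}}^2.$$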
\begin{Prop}
\label{prop:1}
For a fold map in Definition \ref{def:1}, $i(p)$ is uniquely defined as the {\rm index} of $p$ and we can define $F_i(c)$ as the set of all singular points whose indices are $i$.
The singular set $S(c)$ and each set $F_i(c)$ are {\rm (}$\dim Y-1${\rm )}-dimensional smooth regular submanifolds with no boundaries and the restrictions of $c$ to them give smooth immersions. If $X$ is closed, then these submanifolds are compact.
\end{Prop}
A {\it diffeomorphism} means a smooth map between smooth manifolds which is a homeomorphism and has no singular points. A {\it diffeomorphism} on a smooth manifold means a diffeomorphism from the manifold onto itself. A {\it diffeomorphism group} on a smooth manifold is a topological group consisting of all diffeomorphisms on the manifold endowed with the {\it Whitney $C^{\infty}$ topology}. Such topologies are natural topologies on spaces of smooth maps between smooth manifolds. See \cite{golubitskyguillemin} again for example.
A {\it smooth} bundle means a bundle whose fiber is a smooth manifold and whose structure group is (a subgroup of) the diffeomorphism group of the fiber. {\it Trivial} bundles are of course important where we do not restrict classes of bundles to the class of smooth bundles. An important subclass is the class of {\it linear} bundles. They are smooth bundles whose fibers are Euclidean spaces, unit spheres, or unit disks, and whose structure groups consist of linear transformations, defined naturally.
For bundles, see \cite{milnorstasheff,steenrod} for example.
\subsection{Round fold maps.}
\begin{Def}
Let $m \geq n \geq 2$ be integers.
A {\it round} fold map $f:M \rightarrow {\mathbb{R}}^n$ on a closed and connected manifold $M$ is a fold map enjoying the following properties.
\begin{enumerate}
\item The restriction $f {\mid}_{S(f)}$ is an embedding.
\item For some diffeomorphism ${\phi}_{{\mathbb{R}}^n}:{\mathbb{R}}^n \rightarrow {\mathbb{R}}^n$ and some integer $l>0$, $({\phi}_{{\mathbb{R}}^n} \circ f)(S(f))=\{x \in {\mathbb{R}}^n \mid 1 \leq ||x|| \leq l, ||x|| \in \mathbb{Z}\}$.
\end{enumerate}
\end{Def}
Hereafter, we consider a round fold map $f$ satisfying $f(S(f))=\{x \in {\mathbb{R}}^n \mid 1 \leq ||x|| \leq l, ||x|| \in \mathbb{Z}\}$. This entails no loss of generality.
Note that for $n=1$, we can also define a {\it round} fold map as in \cite{kitazawa0.5}. It is defined as a function obtained by gluing two copies of a Morse function satisfying some natural conditions on the boundaries. It is a so-called {\it twisted double}. However we do not need such functions here.
We can define two important classes of round fold maps.
Hereafter, ${D^n}_a:=\{x \in {\mathbb{R}}^n \mid ||x|| \leq a\}$ for $a>0$ and it is diffeomorphic to the $n$-dimensional unit disk $D^n$.
For $0<a_1<a_2$ with $n \geq 2$, ${A^n}_{a_1,a_2}:={D^n}_{a_2}-{\rm Int}\ {D^n}_{a_1}$, which is diffeomorphic to $S^{n-1} \times D^1$.
\begin{Def}
Let $f:M \rightarrow {\mathbb{R}}^n$ be a round fold map on an $m$-dimensional closed and connected manifold with $m \geq n \geq 2$. Suppose that the number of connected components of the singular set $S(f)$ is $l>0$.
\begin{enumerate}
\item We can consider the restriction of $f$ to the preimage $f^{-1}({A^n}_{\frac{1}{2},l+\frac{1}{2}})$. We can also compose this with the canonical projection to $S^{n-1}$ mapping each point $x \in {A^n}_{\frac{1}{2},l+\frac{1}{2}}$ to $\frac{1}{||x||}x \in S^{n-1}$.
This gives a smooth bundle. If this gives a trivial one, then $f$ is said to {\it have a globally trivial monodromy}.
\item For each connected component of the set $f(S(f))$, represented by $\partial {D^n}_{l^{\prime}}$ for some integer $1 \leq l^{\prime} \leq l$, we can have a closed tubular neighborhood ${A^n}_{l^{\prime}-\frac{1}{2},l^{\prime}+\frac{1}{2}}$ and consider the restriction of $f$ to the preimage of this closed tubular neighborhood. We can also compose this with the canonical projection to $S^{n-1}$ mapping each point $x \in {A^n}_{l^{\prime}-\frac{1}{2},l^{\prime}+\frac{1}{2}}$ to $\frac{1}{||x||}x \in S^{n-1}$. If this gives a trivial one for each connected component of the set $f(S(f))$, then $f$ is said to {\it have componentwisely trivial monodromies}.
\end{enumerate}
\end{Def}
Canonical projections of unit spheres into the Euclidean spaces (whose dimensions are at least $2$) are round fold maps having globally trivial monodromies. Checking this is a fundamental exercise on smooth manifolds and maps, the theory of Morse functions, and singularity theory of differentiable maps.
A {\it homotopy sphere} means a smooth manifold homeomorphic to a unit sphere whose dimension is at least $1$. It is said to be a {\it standard} sphere if it is diffeomorphic to the unit sphere and it is said to be an {\it exotic} sphere if it is not.
\cite{kervairemilnor,milnor1} are on such homotopy spheres. It is well-known that $4$-dimensional exotic spheres are undiscovered and that homotopy spheres whose dimensions are not $4$ are completely classified via algebraic topological and abstract theory.
\begin{Prop}[\cite{saeki2} etc.]
\label{prop:2}
A homotopy sphere whose dimension is at least $2$ and which is not a $4$-dimensional exotic sphere admits a round fold map into ${\mathbb{R}}^2$ whose singular set is connected. If a manifold whose dimension is $m \geq 2$ admits a round fold map into ${\mathbb{R}}^n$ whose singular set is connected satisfying $m \geq n \geq 2$, then it is a homotopy sphere which is not a $4$-dimensional exotic sphere.
\end{Prop}
Note again that round fold maps have been first introduced by the author in \cite{kitazawa0.1,kitazawa0.2,kitazawa0.3} after \cite{saeki2}.
\begin{Thm}[\cite{kitazawa0.1,kitazawa0.2}]
\label{thm:1}
Let $m \geq n \geq 2$ be integers. Let $\Sigma$ be a homotopy sphere which is not a $4$-dimensional exotic sphere.
An $m$-dimensional closed manifold $M$ admits a round fold map $f:M \rightarrow {\mathbb{R}}^n$ enjoying the following properties if and only if it is the total space of a smooth bundle over the $n$-dimensional unit sphere $S^n$ whose fiber is $\Sigma$.
\begin{enumerate}
\item $f$ has a globally trivial monodromy.
\item $S(f)$ consists of exactly two connected components. The index of each singular point is $0$ or $1$. $f(F_1(f))= \partial {D^n}_1$ and $f(F_0(f))= \partial {D^n}_2$.
\item For a point in ${D^{n}}_1-f(S(f))$, the preimage is the disjoint union of two copies of $\Sigma$.
\item For a point in ${A^{n}}_{1,2}-f(S(f))$, the preimage is an {\rm (}$m-n${\rm )}-dimensional standard sphere.
\end{enumerate}
\end{Thm}
Proofs of this are in refereed articles \cite{kitazawa0.1, kitazawa0.4}, the doctoral dissertation \cite{kitazawa0.2} and a preprint \cite{kitazawa0.5} of the author. We present our proof again.
\begin{proof}[A proof of Theorem \ref{thm:1}]
We show the "if" part.
The base space
$S^n$ is decomposed into the following two manifolds and we can reconstruct $S^n$ by gluing them by a diffeomorphism along the boundaries.
\begin{itemize}
\item $D^{n,1} \sqcup D^{n,2}$, which denotes the disjoint union of two smoothly embedded copies of the $n$-dimensional unit disk.
\item $\partial D^{n,0} \times [-1,1]$, where $\partial D^{n,0}$ denotes the boundary of $D^{n,1}$ or $D^{n,2}$.
\end{itemize}
We can regard $D^{n,1} \sqcup D^{n,2}$ as the total space of a double cover over ${D^n}_{\frac{1}{2}}$. We can construct the projection of a trivial smooth bundle over $D^{n,1} \sqcup D^{n,2}$ whose fiber is diffeomorphic to $\Sigma$. In other words, we also have a trivial smooth bundle over $D^{n,1} \sqcup D^{n,2}$ whose fiber is $\Sigma$ or one over ${D^n}_{\frac{1}{2}}$ whose fiber is the disjoint union $\Sigma \sqcup \Sigma$.
We can construct the projection of a trivial smooth bundle over
$\partial D^{n,0} \times [-1,1]$ whose fiber is diffeomorphic to $\Sigma$. Over the total space of this trivial bundle, we can take the product map of a natural Morse function on the cylinder $\Sigma \times [-1,1]$ with exactly two singular points and the identity map on $\partial D^{n,0}$. The Morse function enjoys the properties that the preimage of the minimum (maximum) coincides with the boundary and that the two singular points are in the interior, for example.
We can regard this product map as a map into ${A^n}_{\frac{1}{2},\frac{5}{2}}$ whose image is ${A^n}_{\frac{1}{2},2}$.
We can glue the two maps to obtain a desired round fold map. This completes the proof of the "if" part.
The "only if" part can be presented shortly.
We abuse the notation before in a natural way. Conversely, we can decompose the $m$-dimensional manifold into two manifolds which are the total spaces of smooth bundles over $D^{n,1} \sqcup D^{n,2}$ and $\partial D^{n,0} \times [-1,1]$. Moreover fibers are diffeomorphic to $\Sigma$ of course.
By carefully observing the identification used to obtain the base space, which is a homotopy sphere, we can see that the base space must be a standard sphere. This completes the proof of the "only if" part.
This completes the proof.
\end{proof}
\begin{Thm}[\cite{kitazawa0.3,kitazawa0.5}]
\label{thm:2}
Let $m \geq n \geq 2$ and $l>0$ be integers. Let an $m$-dimensional closed manifold $M$ be represented as a connected sum of $l>0$ manifolds each of which is the total space of some smooth bundle over $S^n$ whose fiber is an {\rm (}$m-n${\rm )}-dimensional standard sphere.
Here a connected sum is taken in the smooth category.
$M$ admits a round fold map $f:M \rightarrow {\mathbb{R}}^n$ enjoying the following properties.
\begin{enumerate}
\item $f$ has componentwisely trivial monodromies.
\item $S(f)$ consists of exactly $l+1$ connected components. The index of each singular point is $0$ or $1$. $f(F_1(f))={\sqcup}_{j=1}^{l} \partial {D^n}_j$ and $f(F_0(f))= \partial {D^n}_{l+1}$.
\item For a point in ${D^{n}}_1-f(S(f))$, the preimage is the disjoint union of $l+1$ {\rm (}$m-n${\rm )}-dimensional standard spheres.
\item For a point in ${A^{n}}_{l^{\prime},l^{\prime}+1}-f(S(f))$, the preimage is the disjoint union of $l+1-l^{\prime}$ {\rm (}$m-n${\rm )}-dimensional standard spheres for each $1 \leq l^{\prime} \leq l$.
\end{enumerate}
In addition, if $m \geq 2n$ is assumed, then the converse also holds. If it is not assumed, then the converse does not hold in general; the case $(m,n)=(3,2)$ gives one such case.
\end{Thm}
\begin{Def}
We say round fold maps as in Theorem \ref{thm:2} are {\it directed}. We say ones whose singular sets are connected and which have globally trivial monodromies are also {\it directed}.
\end{Def}
We explain about directed round fold maps whose singular sets are connected.
For such maps, the preimages of points in the interiors of the images are standard spheres. Furthermore, the index of each singular point of such a map is always $0$. Canonical projections of unit spheres into Euclidean spaces whose dimensions are at least $2$ satisfy the conditions.
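The canonical projections can be presented explicitly as follows; checking the listed properties is a fundamental exercise.
\begin{Ex}
For integers $m \geq n \geq 2$, define the canonical projection ${\pi}_{m+1,n}:S^m \rightarrow {\mathbb{R}}^n$ by ${\pi}_{m+1,n}(x_1,\cdots,x_{m+1}):=(x_1,\cdots,x_n)$. The singular set is $S({\pi}_{m+1,n})=\{x \in S^m \mid x_{n+1}=\cdots=x_{m+1}=0\}$, which is connected and diffeomorphic to $S^{n-1}$, and the index of each singular point is $0$. The image is the unit disk $D^n$, the image of the singular set is the unit sphere $S^{n-1}=\partial D^n$, and the preimage of each point in the interior ${\rm Int}\ D^n$ is diffeomorphic to $S^{m-n}$.
\end{Ex}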
\begin{Prop}
\label{prop:3}
In Theorem \ref{thm:2} consider the case $l=1$. Directed round fold maps have globally trivial monodromies in the case $(m,n)=(k+1,k),(4,2), (5,2), (5,3), (6,3), (6,4)$ for example where $k$ is an arbitrary integer greater than or equal to $2$.
\end{Prop}
\begin{proof}[A short exposition on Proposition \ref{prop:3}.]
We abuse the notation before: $l+1:=2>0$ is the number of connected components of the singular set $S(f)$ for a directed round fold map $f$ here.
We consider the restriction of $f$ to $f^{-1}({A^n}_{l+\frac{1}{2},l+\frac{3}{2}})$. We can also compose this with the canonical projection to $S^{n-1}$ mapping each point $x \in {A^n}_{l+\frac{1}{2},l+\frac{3}{2}}$ to $\frac{1}{||x||}x \in S^{n-1}$. This bundle is a linear bundle whose fiber is the unit disk $D^{m-n+1}$ according to \cite{saeki2}. We can also say that in our case, this is also trivial as a linear bundle. Smooth bundles whose fibers are diffeomorphic to $S^{m-n}$ are in our case linear where in general this is not true of course. In our case, we can see that an isomorphism on the smooth (linear) bundle $f^{-1}(\partial {D^n}_{l+\frac{1}{2}})$ is always extended to an isomorphism on the smooth (linear) bundle over $f^{-1}({A^n}_{l+\frac{1}{2},l+\frac{3}{2}})$. This is a most important ingredient and this completes our exposition on Proposition \ref{prop:3}.
For linear bundles related to our arguments here, consult \cite{milnorstasheff} again. \cite{hatcher1} is on the diffeomorphism groups of unit spheres and smooth bundles whose fibers are standard spheres.
\end{proof}
\begin{Prop}
\label{prop:4}
In the definition of directed round fold maps, we do not need the assumption that maps have componentwisely trivial monodromies in the case $(m,n)=(k+1,k)$ for any integer $k \geq 2$.
\end{Prop}
This is due to a well-known fact on the diffeomorphism groups, or the so-called {\it mapping class groups}, of compact surfaces, and some related arguments are important ingredients of \cite{kitazawasaeki2}. Proposition \ref{prop:6} is presented later as a well-known fact on the mapping class group and the diffeomorphism group of the torus $S^1 \times S^1$. This is also an important fact in our arguments.
\section{On Main Theorems.}
\subsection{Graph manifolds and round fold maps into ${\mathbb{R}}^2$ on them.}
We explain about graph manifolds. Note that topological manifolds whose dimensions are at most $3$ all admit smooth structures and that, for a fixed topological manifold here, the smooth structures are unique up to diffeomorphism. This is due to \cite{moise} for example.
\begin{Def}
\label{def:5}
A {\it graph manifold} is a $3$-dimensional closed, connected and orientable manifold obtained from finitely many manifolds regarded as the total spaces of smooth bundles over compact and connected surfaces whose fibers are diffeomorphic to $S^1$, or {\it circle bundles} over the surfaces, by gluing the tori in the boundaries via diffeomorphisms.
\end{Def}
\begin{Thm}
\label{thm:3}
A $3$-dimensional closed, connected and orientable manifold admits a round fold map into ${\mathbb{R}}^2$ if and only if it is a graph manifold.
\end{Thm}
Originally, in \cite{saeki3}, this is shown with "a round fold map" replaced by "a fold map into ${\mathbb{R}}^2$ such that the restriction to the singular set is an embedding".
An $m$-dimensional {\it pair of pants} is a smooth manifold diffeomorphic to one obtained by removing three disjointly and smoothly embedded copies of the $m$-dimensional unit sphere from an $m$-dimensional standard sphere.
\begin{Prop}
\label{prop:5}
In Definition \ref{def:5}, we can choose each bundle as a trivial bundle over the unit disk $D^2$ or a $2$-dimensional pair of pants.
We can also define a graph satisfying the following rules.
\begin{enumerate}
\item The vertex set is the set of all these circle bundles over the surfaces.
\item The edge set is the set of all tori of connected components of the boundaries of these circle bundles over the surfaces.
\item Each edge contains exactly two vertices and they are distinct. They are the circle bundles containing the edge or the torus as a connected component of the boundaries.
\end{enumerate}
\end{Prop}
Related to Proposition \ref{prop:5}, we explain shortly graphs associated with graph manifolds; we do not need deep understanding of these graphs in our paper.
We call a graph in Proposition \ref{prop:5} a {\it representation graph} for the graph manifold $M$. It is not unique. In our studies, representation graphs of a certain type, the so-called {\it plumbing type}, are important. This is for decompositions into circle bundles where connected components of boundaries are glued in some specific ways. See \cite{neumann, saeki1} and see also \cite{kitazawasaeki1}.
For each graph manifold, graphs with several labels are defined in \cite{neumann}. The definition of such a graph with labels is more complicated. As a strong result, we can define a {\it normal form} in such graphs. This also gives an invariant for graph manifolds.
The following theorem shows that both these graphs are important in characterizing the same subclass of graph manifolds.
\begin{Thm}[\cite{kitazawasaeki1}]
\label{thm:4}
\begin{enumerate}
\item A graph manifold admits a directed round fold map into ${\mathbb{R}}^2$ if and only if its normal form is a graph with no cycles.
\item Equivalently, a graph manifold admits a directed round fold map into ${\mathbb{R}}^2$ if and only if there exists a representation graph with no cycles for the manifold.
\end{enumerate}
\end{Thm}
\begin{Thm}[\cite{doighorn,kitazawasaeki1}]
\label{thm:5}
For manifolds in the previous theorem, the cup product of any ordered pair of elements of the $1$st rational cohomology group is always the zero element.
\end{Thm}
In \cite{kitazawasaeki1}, the fact that graph manifolds admitting directed round fold maps into ${\mathbb{R}}^2$ enjoy such a property on the rational cohomology rings is presented as a corollary to a main result of \cite{doighorn}. The result of \cite{doighorn} states that graph manifolds with normal forms having no cycles enjoy this property.
Main Theorem \ref{mthm:1} is regarded as a stronger version.
\subsection{Proofs of Main Theorems.}
We need some notions on polyhedra and so-called {\it Reeb spaces}.
A topological space which is homeomorphic to a polyhedron whose dimension is at most $2$ has the structure of a polyhedron uniquely. A topological space which is homeomorphic to a topological manifold whose dimension is at most $3$ has the structure of a polyhedron uniquely and this is the PL manifold. This is also due to \cite{moise} for example.
For a continuous map $c:X \rightarrow Y$ between topological spaces, we can define an equivalence relation ${\sim}_c$ on $X$ by the following rule: $x_1 {\sim}_c x_2$ if and only if $x_1$ and $x_2$ are in a same connected component of the preimage $c^{-1}(y)$ of some point $y$. The quotient space $W_c:=X/{\sim}_c$ is the so-called {\it Reeb space} of $c$.
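We present a simplest example of the equivalence relation ${\sim}_c$.
\begin{Ex}
Let $c:S^m \rightarrow \mathbb{R}$ be defined by $c(x_1,\cdots,x_{m+1}):=x_{m+1}$ with $m \geq 1$, the natural height function. The preimage $c^{-1}(y)$ is connected and diffeomorphic to $S^{m-1}$ for $-1<y<1$ and a one-point set for $y=\pm 1$. The quotient space $S^m/{\sim}_c$ is homeomorphic to the closed interval $[-1,1]$ and the quotient map identifies each connected component of each preimage with a point.
\end{Ex}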
We do not explain general theory of Reeb spaces precisely. One important fact is that for fold maps, and for more general smooth maps enjoying some properties on so-called "genericity", the Reeb spaces are polyhedra whose dimensions are the same as those of the manifolds of the targets and whose structures as the polyhedra are naturally induced from the manifolds of the targets. Such facts are shown in \cite{shiota} for example. \cite{kobayashisaeki} explicitly shows such a fact for so-called {\it stable} maps on smooth closed manifolds whose dimensions are at least $3$ into surfaces with no boundaries. Essentially the class of such maps there contains round fold maps and, for example, fold maps such that the restrictions to the singular sets are embeddings. See \cite{golubitskyguillemin} for related singularity theory of smooth maps.
For such polyhedra, see also \cite{turaev} for example. This is a paper published before studies of such polyhedra regarded as the Reeb spaces of such smooth maps into ${\mathbb{R}}^2$ started. For related studies, see \cite{costantinothurston,ishikawakoda}.
The following example is important.
It is well-known that homotopy spheres except $4$-dimensional exotic spheres are all PL homeomorphic to standard spheres where they are seen as the PL manifolds. We call such PL manifolds {\it PL spheres}.
\begin{Ex}
For example, for directed round fold maps on $m$-dimensional closed and connected manifolds into ${\mathbb{R}}^n$ with $m>n$, the Reeb spaces are simple homotopy equivalent to bouquets of $n$-dimensional (PL) spheres.
FIGURE \ref{fig:1} is for Theorem \ref{thm:1}. This is obtained by attaching a copy ${{D^n}_0}^{\prime}$ of the $n$-dimensional unit disk ${D^n}_{0}$ to another copy via a diffeomorphism from the boundary onto a smoothly embedded copy of the unit sphere $S^{n-1}$ in the interior of ${D^n}_{0}$. FIGURE \ref{fig:2} is for a general case. Note that topologically we can represent the Reeb space as one embedded naturally into ${\mathbb{R}}^{n+1}$; the figure shows the subspace of the Reeb space in some hyperplane of ${\mathbb{R}}^{n+1}$.
\end{Ex}
\begin{figure}
\includegraphics[height=25mm, width=40mm]{20230113Reebz.eps}
\begin{center}
\caption{The Reeb space for Theorem \ref{thm:1}.}
\label{fig:1}
\end{center}
\end{figure}
\begin{figure}
\includegraphics[height=25mm, width=40mm]{20230113Reeb.eps}
\begin{center}
\caption{The Reeb space of a general directed round fold map: the subspace of it in a suitable hyperplane of ${\mathbb{R}}^{n+1}$.}
\label{fig:2}
\end{center}
\end{figure}
Let $A$ be a commutative ring with a generator $\pm a \in A$.
For a compact, connected and oriented manifold $X$, we can define the fundamental class as a generator of $H_{\dim X}(X,\partial X;A)$, isomorphic to $A$. We can set this element as $\pm a$ according to the orientation.
For a smooth manifold $Y$ and an element $c \in H_j(Y,\partial Y;A)$, assume that $c$ is equal to the value ${i_X}_{\ast}(a)$ of the homomorphism ${i_{X}}_{\ast}:H_{\dim X}(X,\partial X;A) \rightarrow H_{j}(Y,\partial Y;A)=H_{\dim X}(Y,\partial Y;A)$ induced canonically by some smooth embedding $i_X:X \rightarrow Y$ of a smooth, compact, connected and oriented manifold $X$ enjoying the following properties.
\begin{itemize}
\item $i_X(\partial X) \subset \partial Y$.
\item $i_X({\rm Int}\ X) \subset {\rm Int}\ Y$.
\end{itemize}
In other words, they are embedded properly.
Respecting fundamental arguments on differential topology, we may add so-called "transversality" of the embedding on the boundaries according to the situations.
In other words, at each point $p$ in the boundary $\partial X$, the intersection of the image of the differential ${d_{i_X}}_p$ of the embedding and the tangent space $T_{i_X(p)}\partial Y$ of the boundary $\partial Y$ at $i_X(p)$ is assumed to be of dimension $\dim X+(\dim Y-1)-\dim Y=\dim X-1$.
However, we do not consider this assumption essentially in our paper.
In this situation, $c$ is said to be {\it represented by} the submanifold {\it $i_X(X)$}.
We can define similar notions in the PL category, or equivalently, the piecewise smooth category, and the topology category.
Consider a compact, connected and oriented manifold $X$ again. In our arguments, so-called Poincar\'e duals to elements of $H_j(X,\partial X;A)$, $H^j(X,\partial X;A)$, $H_j(X;A)$ and $H^j(X;A)$ are important. They are uniquely defined as elements of $H^{\dim X-j}(X;A)$, $H_{\dim X-j}(X;A)$, $H^{\dim X-j}(X,\partial X;A)$ and $H_{\dim X-j}(X,\partial X;A)$, respectively. Related to this,
Poincar\'e duality theorem for $X$ is also important.
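Explicitly, for a closed, connected and oriented manifold $X$, the duality can be presented via the cap product with the fundamental class $[X]$: the correspondence
$$c \in H^{j}(X;A) \mapsto [X] \frown c \in H_{\dim X-j}(X;A)$$
is an isomorphism, and for a compact, connected and oriented manifold $X$ with non-empty boundary, the so-called Poincar\'e--Lefschetz duality gives isomorphisms $H^{j}(X;A) \cong H_{\dim X-j}(X,\partial X;A)$ and $H^{j}(X,\partial X;A) \cong H_{\dim X-j}(X;A)$ similarly.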
Of course we calculate (co)homology groups and cohomology rings. For this, we need exact sequences such as Mayer-Vietoris sequences. We also need theorems such as the K\"unneth formula, useful for the cohomology groups (rings) of products. Poincar\'e duality theorem, presented before, and the universal coefficient theorem are also important. See \cite{hatcher1} again for example.
We show Main Theorems.
First, the following proposition is fundamental in our arguments.
\begin{Prop}
\label{prop:6}
For a trivial circle bundle $T^2:=S^1 \times S^1$ over $S^1$, there exists a family ${\{{\Phi}_j\}}_{j \in \mathbb{Z}}$ of isomorphisms on the bundle enjoying the following properties.
\begin{enumerate}
\item Let ${S^1}_{\rm b}$ denote the subspace $S^1 \times \{\ast\} \subset S^1 \times S^1$ with an orientation. Let ${S^1}_{\rm f}$ denote the subspace $\{\ast\} \times S^1 \subset S^1 \times S^1$ with an orientation, which is regarded as a fiber of the trivial circle bundle over ${S^1}_{\rm b}$. Let $[S_{\rm b}] \in H_1(T^2;\mathbb{Z}) \cong \mathbb{Z} \oplus \mathbb{Z}$ and $[S_{\rm f}] \in H_1(T^2;\mathbb{Z}) \cong \mathbb{Z} \oplus \mathbb{Z}$ denote the elements represented by these oriented circles.
The homomorphism ${{\Phi}_j}_{\ast}$ induced from ${\Phi}_j$, which is also an isomorphism, maps $[S_{\rm b}]$ to $[S_{\rm b}]+j[S_{\rm f}]$ and $[S_{\rm f}]$ to $[S_{\rm f}]$.
\item
Any isomorphism on the bundle is smoothly isotopic to exactly one ${\Phi}_j$, and the isotopy can be taken in the space of isomorphisms on the bundle.
\end{enumerate}
\end{Prop}
We omit rigorous exposition on this.
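For example, assuming the standard identification $T^2=S^1 \times S^1 \subset \mathbb{C} \times \mathbb{C}$, such a family can be realized explicitly by
$$ {\Phi}_j(z,w):=(z,z^jw) \quad ((z,w) \in S^1 \times S^1 \subset \mathbb{C} \times \mathbb{C}).$$
Each ${\Phi}_j$ covers the identity map on the base space and rotates the fiber over $z$ by $z^j$. The circle ${S^1}_{\rm b}=S^1 \times \{1\}$ is mapped onto the curve $\{(z,z^j)\}$, representing $[S_{\rm b}]+j[S_{\rm f}]$, and each fiber is mapped onto itself, so that $[S_{\rm f}]$ is mapped to $[S_{\rm f}]$.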
We also need closely related arguments in our paper. For these arguments, we use fundamental knowledge and methods on linear bundles, including circle bundles. See \cite{milnorstasheff} again for example when needed. Although its terminology and notation may be a bit different from ours, we can consult this, as it explains essentially the same content.
For Main Theorem \ref{mthm:1}, the following lemma is essential.
\begin{Lem}
\label{lem:1}
Assume that a $3$-dimensional closed, connected and orientable manifold $M$ admits a directed round fold map $f:M \rightarrow {\mathbb{R}}^2$. Then we have the following properties.
\begin{enumerate}
\item
\label{lem:1.1}
The 2nd integral homology group $H_2(M;\mathbb{Z})$ is free and generated by finitely many elements which are not divisible by integers greater than $1$.
\item \label{lem:1.2} Furthermore, the set of the finitely many elements in $H_2(M;\mathbb{Z})$ can be taken as its basis. In addition, these elements can be taken as ones represented by spheres which are mutually disjoint.
Let $\{{S^2}_j\}$ denote the set of all spheres here.
\item \label{lem:1.3} We can consider the quotient map $q_f:M \rightarrow W_f$ onto the Reeb space $W_f$ of $f$. Then $q_f$ maps ${S^2}_j$ onto a PL sphere by a {\rm (}PL{\rm )} homeomorphism. Furthermore, the
homomorphism ${q_f}_{\ast}:H_2(M;\mathbb{Z}) \rightarrow H_2(W_f;\mathbb{Z})$ induced canonically by the map $q_f$ is a monomorphism and maps each element represented by ${S^2}_j$ to an element which is not divisible by integers greater than $1$.
\item \label{lem:1.4} The subgroup of 1st integral homology group $H_1(M;\mathbb{Z})$ generated by all elements whose orders are infinite is generated by finitely many elements which are not divisible by integers greater than $1$.
\item \label{lem:1.5} Furthermore, the set of the finitely many elements in $H_1(M;\mathbb{Z})$ can be taken as a basis of the subgroup of the 1st integral homology group $H_1(M;\mathbb{Z})$ generated by all elements whose orders are infinite. In addition, these elements are represented by connected components of the preimage $f^{-1}(0)$.
\end{enumerate}
\end{Lem}
\begin{proof}
We prove this by induction on the number of connected components of the singular set.
If the number is $1$, then by Proposition \ref{prop:2}, the manifold is a $3$-dimensional (standard) sphere. Our lemma holds of course.
Suppose that our lemma holds if the number of connected components of the singular set is at most $k$, where $k$ is a positive integer.
We consider a directed round fold map $f_{{\rm r},k+1}:{M^3}_{k+1} \rightarrow {\mathbb{R}}^2$ such that the singular set $S(f_{{\rm r},k+1})$ consists of exactly $k+1$ connected components.
Remove a connected component $B_{k}$ of the interior of the preimage ${f_{{\rm r},k+1}}^{-1}({D^2}_{\frac{3}{4}})$, regarded as the total space of a trivial circle bundle over ${D^2}_{\frac{3}{4}}$. Let $A_{k}$ denote the resulting $3$-dimensional compact, connected and orientable manifold. The resulting map is regarded as the restriction of a round fold map on a $3$-dimensional closed, connected and orientable manifold $A_{k,0}$ into ${\mathbb{R}}^2$ obtained by removing the interior of a copy of $S^1 \times D^2$ smoothly embedded in $A_{k,0}$. The round fold map is, by Proposition \ref{prop:3} and arguments in Proposition \ref{prop:4}, deformed to a directed round fold map $f_{{\rm r},k,0}:A_{k,0} \rightarrow {\mathbb{R}}^2$ by a natural smooth homotopy eliminating two adjacent connected components of the singular set, consisting of singular points whose indices are $0$ and $1$, respectively.
We can consider a Mayer-Vietoris sequence
$$ \rightarrow H_3(A_k;\mathbb{Q}) \oplus H_3(S^1 \times D^2;\mathbb{Q}) \cong \{0\} \rightarrow H_3(A_{k,0};\mathbb{Q}) \cong \mathbb{Q} \rightarrow$$
$$\rightarrow H_2(S^1 \times S^1;\mathbb{Q}) \cong \mathbb{Q} \rightarrow H_2(A_k;\mathbb{Q}) \oplus H_2(S^1 \times D^2;\mathbb{Q}) \cong H_2(A_k;\mathbb{Q}) \rightarrow H_2(A_{k,0};\mathbb{Q}) \rightarrow$$
and we can consider a family $\{{S^2}_{k,j}\}$ of spheres in $A_{k,0}$ as in our lemma for the directed round fold map. We can see that the homomorphism from $H_2(A_k;\mathbb{Q})$ into $H_2(A_{k,0};\mathbb{Q})$ here is a monomorphism and induced by
the inclusion canonically. By the construction of the round fold maps and the manifolds, the spheres in $\{{S^2}_{k,j}\}$ can be also regarded as ones in ${\rm Int}\ A_k$. Furthermore, they can be regarded as ones mapped by the inclusion and by diffeomorphisms onto the corresponding spheres in $A_{k,0}$. This also means that the ranks of $H_2(A_k;\mathbb{Q})$, $H_2(A_{k,0};\mathbb{Q})$, $H_2(A_k;\mathbb{Z})$ and $H_2(A_{k,0};\mathbb{Z})$ agree by the universal coefficient theorem.
We can consider a Mayer-Vietoris sequence
$$ \{0\}\rightarrow H_3({M^3}_{k+1};\mathbb{Q}) \cong \mathbb{Q} \rightarrow$$
$$\rightarrow H_2(S^1 \times S^1;\mathbb{Q}) \cong \mathbb{Q} \rightarrow H_2(A_k;\mathbb{Q}) \oplus H_2(B_k;\mathbb{Q}) \cong H_2(A_k;\mathbb{Q}) \rightarrow H_2({M^3}_{k+1};\mathbb{Q}) \rightarrow$$
$$\rightarrow H_1(S^1 \times S^1;\mathbb{Q}) \cong \mathbb{Q} \oplus \mathbb{Q} \rightarrow H_1(A_k;\mathbb{Q}) \oplus H_1(B_k;\mathbb{Q}) \cong H_1(A_k;\mathbb{Q}) \oplus \mathbb{Q} \rightarrow H_1({M^3}_{k+1};\mathbb{Q}) \rightarrow$$
and we first explain about the homomorphism from
$H_1(S^1 \times S^1;\mathbb{Q}) \cong \mathbb{Q}$ into $H_1(A_k;\mathbb{Q}) \oplus H_1(B_k;\mathbb{Q}) \cong H_1(A_k;\mathbb{Q}) \oplus \mathbb{Q}$. This is the direct sum of the two homomorphisms induced canonically by the inclusions.
By considering fibers of the trivial circle bundles, the dimension of the kernel must be $0$ or $1$. For this, remember that $B_k$ is the total space of a trivial circle bundle over ${D^2}_{\frac{3}{4}}$ and an element represented by the fiber of the bundle $S^1 \times S^1$ over $S^1$ is mapped to an element of the form $(c_1,c_{\rm F}) \in H_1(A_k;\mathbb{Q}) \oplus H_1(B_k;\mathbb{Q})$ where $c_{\rm F}$ is represented by a fiber of the circle bundle $B_k$.
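In terms of the exact sequence, we can present this as the following rank count. Since $H_3(A_k;\mathbb{Q}) \oplus H_3(B_k;\mathbb{Q})$ is trivial, the homomorphism from $H_3({M^3}_{k+1};\mathbb{Q}) \cong \mathbb{Q}$ into $H_2(S^1 \times S^1;\mathbb{Q}) \cong \mathbb{Q}$ is a monomorphism and hence an isomorphism, and the homomorphism from $H_2(S^1 \times S^1;\mathbb{Q})$ into $H_2(A_k;\mathbb{Q}) \oplus H_2(B_k;\mathbb{Q})$ is the zero homomorphism. Exactness then yields
$$ {\rm rank}\ H_2({M^3}_{k+1};\mathbb{Q})={\rm rank}\ H_2(A_k;\mathbb{Q})+\dim \ker (H_1(S^1 \times S^1;\mathbb{Q}) \rightarrow H_1(A_k;\mathbb{Q}) \oplus H_1(B_k;\mathbb{Q}))$$
and the kernel is of dimension $0$ or $1$ as just remarked.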
The rank of $H_2({M^3}_{k+1};\mathbb{Q})$ is equal to that of $H_2(A_k;\mathbb{Q})$ or the sum of the rank of $H_2(A_k;\mathbb{Q})$ and $1$. We argue the two cases to complete the proof. \\
\ \\
Case 1 The rank of $H_2({M^3}_{k+1};\mathbb{Q})$ is equal to that of $H_2(A_k;\mathbb{Q})$. \\
The rank of $H_2({M^3}_{k+1};\mathbb{Z})$ is equal to those of $H_2(A_k;\mathbb{Z})$, $H_2({M^3}_{k+1};\mathbb{Q})$ and $H_2(A_k;\mathbb{Q})$
by the universal coefficient theorem. Furthermore, $H^1({M^3}_{k+1};\mathbb{Z})$ and $H_2({M^3}_{k+1};\mathbb{Z})$ are free and isomorphic by universal coefficient theorem and Poincar\'e duality theorem.
Reviewing the construction of the maps and the manifolds, we can see that the round fold map $f_{{\rm r},k+1}:{M^3}_{k+1} \rightarrow {\mathbb{R}}^2$ enjoys the desired properties. \\
\ \\
Case 2 The rank of $H_2({M^3}_{k+1};\mathbb{Q})$ is equal to the sum of the rank of $H_2(A_k;\mathbb{Q})$ and $1$. \\
We assume that this occurs.
By the construction of the maps and the manifolds, there exist a closed, connected and oriented surface $S$ and an element $c_S$ of $H_2({M^3}_{k+1};\mathbb{Q})$ represented by the surface $S$, such that $c_S$ and the elements represented by spheres in the family $\{{S^2}_{k,j}\}$ form a basis of $H_2({M^3}_{k+1};\mathbb{Q})$.
By an argument on algebraic topology and differential topology, we investigate properties of $S$. $S$ is divided by $\partial A_k$, identified with $\partial B_k$ in a canonical way. More precisely, it is decomposed into two compact surfaces along circles in the boundary. By Poincar\'e duality theorem for $B_k$ or the so-called intersection theory, there exists an element $c_{{\rm F},k+1} \in H_1({M^3}_{k+1};\mathbb{Q})$ enjoying the following properties.
\begin{itemize}
\item Let $\{c_{{\rm F},k,j}\} \subset H_1(A_{k,0};\mathbb{Q})$ denote the basis obtained from a basis as in (\ref{lem:1.5}) in the assumption by universal coefficient theorem. These elements are seen as mutually independent in $H_1(A_k;\mathbb{Q})$ by regarding them as elements of $H_1(A_k;\mathbb{Q})$ naturally by respecting the structures of the maps and the manifolds as before. They are also seen as mutually independent in $H_1({M^3}_{k+1};\mathbb{Q})$ by considering the inclusion. Here we also respect intersections for the circles in $f^{-1}(0)$ by which the elements are represented and $2$-dimensional spheres in $\{{S^2}_j\}$ for the 2nd integral (rational) homology group.
Furthermore, $c_{{\rm F},k+1} \in H_1({M^3}_{k+1};\mathbb{Q})$ and these elements form a basis.
\item $c_{{\rm F},k+1}$ is represented by a fiber of the trivial circle bundle $B_k \subset {M^3}_{k+1}$.
\end{itemize}
We investigate the unique connected component of the preimage ${f_{{\rm r},k+1}}^{-1}({D^2}_{\frac{3}{2}})$ containing $B_k$ as the subspace. By observing Theorem \ref{thm:1}, Proposition \ref{prop:3} and Proposition \ref{prop:4} and their proofs, we can see that the restriction of the round fold map here is seen as the restriction of a round fold map like one in Theorem \ref{thm:1} to the preimage of ${D^{2}}_{\frac{3}{2}}$.
We explain about circle bundles over $S^2$ shortly. Related to this, circle bundles over the torus are discussed more precisely in the proof of Main Theorem \ref{mthm:2} and this may help us to discuss more precisely.
Each circle bundle over $S^2$ corresponds to, modulo isomorphisms with the structure groups preserving the orientations of fibers, its {\it Euler number} $k \in \mathbb{Z}$. The total space of such a bundle is a so-called {\it rational homology sphere} or a $3$-dimensional closed manifold whose rational homology group is isomorphic to that of $S^3$ if and only if $k \neq 0$. Consider a round fold map $f_{{S^3}_k}$ as in Theorem \ref{thm:1} on such a rational homology sphere ${S^3}_k$. Let $A_{{S^3}_k}$ denote the preimage of ${D^{2}}_{\frac{3}{2}}$ for the map and $B_{{S^3}_k}:={S^3}_k-{\rm Int}\ A_{{S^3}_k}$.
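The claim on rational homology spheres can be checked, for example, via the so-called Gysin exact sequence of the circle bundle over $S^2$ whose Euler number is $k$. Identifying the Euler class with $k \in \mathbb{Z} \cong H^2(S^2;\mathbb{Z})$, we have the exact sequence
$$\{0\} \rightarrow H^1({S^3}_k;\mathbb{Z}) \rightarrow H^0(S^2;\mathbb{Z}) \cong \mathbb{Z} \rightarrow H^2(S^2;\mathbb{Z}) \cong \mathbb{Z} \rightarrow H^2({S^3}_k;\mathbb{Z}) \rightarrow H^1(S^2;\mathbb{Z}) \cong \{0\}$$
where the homomorphism between the two groups isomorphic to $\mathbb{Z}$ is the multiplication by $k$. For $k \neq 0$, $H^1({S^3}_k;\mathbb{Z})$ is trivial and $H^2({S^3}_k;\mathbb{Z})$ is isomorphic to $\mathbb{Z}/|k|\mathbb{Z}$, and with Poincar\'e duality theorem and the universal coefficient theorem this shows that ${S^3}_k$ is a rational homology sphere if and only if $k \neq 0$.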
We can consider a Mayer-Vietoris sequence
$$ \{0\}\rightarrow H_3({S^3}_k;\mathbb{Q}) \cong \mathbb{Q} \rightarrow$$
$$\rightarrow H_2(S^1 \times S^1;\mathbb{Q}) \cong \mathbb{Q} \rightarrow H_2(A_{{S^3}_k};\mathbb{Q}) \oplus H_2(B_{{S^3}_k};\mathbb{Q}) \cong H_2(A_{{S^3}_k};\mathbb{Q}) \rightarrow H_2({S^3}_k;\mathbb{Q}) \cong \{0\} \rightarrow$$
$$\rightarrow H_1(S^1 \times S^1;\mathbb{Q}) \cong \mathbb{Q} \oplus \mathbb{Q} \rightarrow H_1(A_{{S^3}_k};\mathbb{Q}) \oplus H_1(B_{{S^3}_k};\mathbb{Q}) \cong H_1(A_{{S^3}_k};\mathbb{Q}) \oplus \mathbb{Q} \rightarrow H_1({S^3}_k;\mathbb{Q}) \cong \{0\} \rightarrow$$
and we investigate the isomorphism from $H_1(S^1 \times S^1;\mathbb{Q}) \cong \mathbb{Q} \oplus \mathbb{Q}$ onto $H_1(A_{{S^3}_k};\mathbb{Q}) \oplus H_1(B_{{S^3}_k};\mathbb{Q}) \cong H_1(A_{{S^3}_k};\mathbb{Q}) \oplus \mathbb{Q}$. As presented, this is defined by using the homomorphisms induced by the inclusions. $H_1(A_{{S^3}_k};\mathbb{Q})$ is isomorphic to $\mathbb{Q}$ and the rank of $H_1(A_{{S^3}_k};\mathbb{Z})$ is $1$ by universal coefficient theorem. Furthermore, a generator of this is represented by the preimage of a point in $\partial {D^{2}}_{\frac{5}{4}}$.
This contradicts the property that such an element $c_{{\rm F},k+1} \in H_1({M^3}_{k+1};\mathbb{Q})$ can be taken. Thus the restriction of the round fold map $f_{{\rm r},k+1}$ to the unique connected component of the preimage ${f_{{\rm r},k+1}}^{-1}({D^2}_{\frac{3}{2}})$ containing $B_k$ cannot be a map like this.
This must be one obtained from a round fold map on $S^2 \times S^1$ as in Theorem \ref{thm:1}.
We can see that by the construction in the proof of Theorem \ref{thm:1}, $S$ can be taken as the image of a section of the trivial bundle over the base space $S^2$. By the construction, the desired properties are also enjoyed.
Lastly, note that the 2nd integral homology group of a $3$-dimensional closed, connected and orientable manifold is free. This is due to Poincar\'e duality theorem and the universal coefficient theorem for the 1st integral cohomology group, forcing the groups to be free. This completes the proof of Case 2.
This completes the proof.
\end{proof}
\begin{proof}[A proof of Main Theorem \ref{mthm:1}]
We apply Lemma \ref{lem:1} with Poincar\'e duality theorem or intersection theory. Each ${S^2}_j$ is, via a suitable smooth isotopy, moved in such a way that the original sphere and the resulting sphere are mutually disjoint. Distinct spheres in $\{{S^2}_j\}$ are mutually disjoint.
This completes the proof.
\end{proof}
\begin{proof}[A proof of Main Theorem \ref{mthm:2}]
We consider a circle bundle over the torus $T^2:=S^1 \times S^1$ whose {\it Euler number} is $k \in \mathbb{Z}$. We explain about this. Consider the manifold obtained by removing the interior of a smoothly embedded copy of the $2$-dimensional unit disk $D^2$ in $T^2$, denoted by ${T^2}_o$. Let the removed disk be denoted by ${D^2}_o$.
We consider trivial smooth circle bundles over ${T^2}_o$ and ${D^2}_o$ and the total spaces can be denoted by ${T^2}_o \times S^1$ and ${D^2}_o \times S^1$. Their trivializations are naturally given. On their boundaries the trivializations are also given and identified with a trivial bundle $S^1 \times S^1$ over $S^1$ naturally. We glue ${T^2}_o \times S^1$ and ${D^2}_o \times S^1$ by a bundle isomorphism ${\Phi}_k$ in Proposition \ref{prop:6} from $\partial {T^2}_o \times S^1$ onto $\partial {D^2}_o \times S^1$ where the bundle of the domain and that of the target are identified with the bundle $S^1 \times S^1$ canonically as before. We have a desired circle bundle and its total space $B_{S^1,k}(T^2)$.
We have a Mayer-Vietoris sequence
$$\rightarrow H_1(S^1 \times S^1;\mathbb{Z}) \rightarrow H_1({T^2}_o \times S^1;\mathbb{Z}) \oplus H_1({D^2}_o \times S^1;\mathbb{Z}) \rightarrow H_1(B_{S^1,k}(T^2);\mathbb{Z}) \rightarrow$$
and we have $H_1({T^2}_o \times S^1;\mathbb{Z}) \cong H_1({T^2} \times S^1;\mathbb{Z}) \cong \mathbb{Z} \oplus \mathbb{Z} \oplus \mathbb{Z}$ and $H_1({D^2}_o \times S^1;\mathbb{Z}) \cong \mathbb{Z}$ for example.
In our arguments here, we apply suitable identifications of the homology groups with finitely generated commutative groups.
The first homomorphism from $H_1(S^1 \times S^1;\mathbb{Z})$ into $H_1({T^2}_o \times S^1;\mathbb{Z}) \oplus H_1({D^2}_o \times S^1;\mathbb{Z})$ is regarded as a homomorphism mapping $(a,b) \in \mathbb{Z} \oplus \mathbb{Z} \cong H_1(S^1 \times S^1;\mathbb{Z})$ to $(0,0,b,ka+b)$.
Remember that this is the direct sum of the two homomorphisms induced canonically by the inclusions.
The second homomorphism from $H_1({T^2}_o \times S^1;\mathbb{Z}) \oplus H_1({D^2}_o \times S^1;\mathbb{Z})$ into $H_1(B_{S^1,k}(T^2);\mathbb{Z})$ is defined as the sum of the two homomorphisms induced canonically by the inclusions into $B_{S^1,k}(T^2)$. Furthermore, this is an epimorphism. This is due to the fact that $S^1 \times S^1$ is connected and after the third group here, the homomorphism from $H_0(S^1 \times S^1;\mathbb{Z})$ to $H_0({T^2}_o \times S^1;\mathbb{Z}) \oplus H_0({D^2}_o \times S^1;\mathbb{Z})$ follows. Furthermore, this homomorphism from $H_0(S^1 \times S^1;\mathbb{Z})$ to $H_0({T^2}_o \times S^1;\mathbb{Z}) \oplus H_0({D^2}_o \times S^1;\mathbb{Z})$ is a monomorphism and also the direct sum of the two homomorphisms induced canonically by the inclusions. We have the following properties.
\begin{itemize}
\item $H_1(B_{S^1,k}(T^2);\mathbb{Z})$ is isomorphic to $\mathbb{Z} \oplus \mathbb{Z} \oplus \mathbb{Z}/|k|\mathbb{Z}$.
\item The first and the second direct summands of the group $H_1(B_{S^1,k}(T^2);\mathbb{Z}) \cong \mathbb{Z} \oplus \mathbb{Z} \oplus \mathbb{Z}/|k|\mathbb{Z}$ are regarded as the subgroups generated by elements enjoying the following properties.
\begin{itemize}
\item These two elements are represented by the images of sections of the restrictions of the bundle ${T^2}_o \times S^1$ to circles ${S^1}_1$ and ${S^1}_2$ in the interior of ${T^2}_o$ by which elements of some natural basis of the subgroup of $H_1(B_{S^1,k}(T^2);\mathbb{Z}) \cong \mathbb{Z} \oplus \mathbb{Z} \oplus \mathbb{Z}/|k|\mathbb{Z}$ generated by all elements whose orders are infinite are represented.
\item The two elements of $H_1({T^2}_o;\mathbb{Z}) \cong \mathbb{Z} \oplus \mathbb{Z}$ form the basis of course.
\end{itemize}
\item The third summand of the group is generated by an element represented by a fiber of the circle bundle.
\end{itemize}
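We can check the first property by a direct computation. The second homomorphism is an epimorphism whose kernel is the image of the first one as presented, and hence
$$H_1(B_{S^1,k}(T^2);\mathbb{Z}) \cong (\mathbb{Z} \oplus \mathbb{Z} \oplus \mathbb{Z} \oplus \mathbb{Z})/\{(0,0,b,ka+b) \mid (a,b) \in \mathbb{Z} \oplus \mathbb{Z}\}.$$
The subgroup here is generated by $(0,0,0,k)$ and $(0,0,1,1)$. After the change of basis of the last two summands mapping $(x,y)$ to $(x,y-x)$, it is generated by $(0,0,0,k)$ and $(0,0,1,0)$, and the quotient is isomorphic to $\mathbb{Z} \oplus \mathbb{Z} \oplus \mathbb{Z}/|k|\mathbb{Z}$.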
We investigate the rational and integral cohomology rings of $B_{S^1,k}(T^2)$ by Poincar\'e duality theorem for it or intersection theory. By universal coefficient theorem, $H^1(B_{S^1,k}(T^2);\mathbb{Z})$ is isomorphic to $\mathbb{Z} \oplus \mathbb{Z}$. Poincar\'e duality theorem shows that this is isomorphic to $H_2(B_{S^1,k}(T^2);\mathbb{Z})$.
This is generated by elements represented by the total spaces of the restrictions of the circle bundle $B_{S^1,k}(T^2)$ to ${S^1}_1$ and ${S^1}_2$. Remember that ${S^1}_i$ is a circle in ${T^2}_o \subset T^2$ and defined before.
Choose one of these tori. By using some smooth isotopy, we can move it to another place in such a way that the original torus and the new torus are mutually disjoint. For two tori, we may regard that the intersection can be taken as the fiber, by which a generator of the summand $\mathbb{Z}/|k|\mathbb{Z}$ of $H_1(B_{S^1,k}(T^2);\mathbb{Z}) \cong \mathbb{Z} \oplus \mathbb{Z} \oplus \mathbb{Z}/|k|\mathbb{Z}$ is represented. Note that $H^2(B_{S^1,k}(T^2);\mathbb{Z}) \cong \mathbb{Z} \oplus \mathbb{Z} \oplus \mathbb{Z}/|k|\mathbb{Z}$ by Poincar\'e duality theorem. This means that $B_{S^1,k}(T^2)$ enjoys the property on the rational cohomology ring in Theorem \ref{thm:5} and that it does not enjoy the property on the integral cohomology ring in Main Theorem \ref{mthm:1}.
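We can sketch the argument on the cup products here, under the identifications presented before. Let $t_1, t_2 \in H^1(B_{S^1,k}(T^2);\mathbb{Z})$ denote the Poincar\'e duals to the elements represented by the two tori. Since the two tori can be taken so that their intersection is a single fiber of the circle bundle, intersection theory gives
$$\mathrm{PD}(t_1 \smile t_2)=c_{\rm f} \in H_1(B_{S^1,k}(T^2);\mathbb{Z})$$
where $c_{\rm f}$ denotes the element represented by the fiber, generating the summand $\mathbb{Z}/|k|\mathbb{Z}$. For $|k|>1$, this cup product is thus a non-trivial torsion element and it vanishes with rational coefficients.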
As a desired manifold admitting a directed round fold map into ${\mathbb{R}}^2$, we consider a connected sum of two copies of $S^2 \times S^1$ and the total space of a circle bundle over $S^2$ whose Euler number is $k$.
\end{proof}
\begin{Rem}
We do not know whether the converse of Main Theorem \ref{mthm:1} holds. Problems like this are important of course.
\end{Rem}
\begin{Rem}
Main Theorem \ref{mthm:1} does not hold when the coefficient ring is a finite commutative ring. This is also pointed out in \cite{kitazawasaeki1}.
\end{Rem}
\section{Explicit fold maps and restrictions on the manifolds, and Main Theorems.}
\begin{Ex}[\cite{kitazawa0.3}]
According to \cite{milnor1}, followed by \cite{eellskuiper}, $7$-dimensional homotopy spheres are completely and explicitly classified. If we consider the orientations, there exist exactly $28$ types of $7$-dimensional homotopy spheres. Exactly $16$ of the $28$ types are the total spaces of linear bundles over $S^4$ whose fibers are the unit sphere $S^3$. The $7$-dimensional standard sphere is also of one of these types. All $7$-dimensional homotopy spheres are represented as connected sums of two such homotopy spheres.
This means that $7$-dimensional homotopy spheres admit directed round fold maps into ${\mathbb{R}}^4$ by Theorem \ref{thm:2}. It is well-known that in the case where the singular set is connected, the manifold is a standard sphere. This is due to the theory of so-called {\it special generic} maps in \cite{saeki2}, some of which will be discussed later. The homotopy spheres of exactly $16$ of the $28$ types, which are the total spaces of the linear bundles over $S^4$ before, admit directed round fold maps into ${\mathbb{R}}^4$ whose singular sets consist of exactly two connected components. Furthermore, the converse holds by Theorem \ref{thm:1} and Proposition \ref{prop:3}. Every $7$-dimensional homotopy sphere admits a directed round fold map into ${\mathbb{R}}^4$ whose singular set consists of exactly three connected components.
This means that differences of the differentiable structures of homotopy spheres and those of topological types of round fold maps of an explicit class are closely related. Such facts have already been discovered for special generic maps, which are presented here shortly. See also \cite{saekisakuma1, saekisakuma2} for example.
\end{Ex}
A fold map is said to be a {\it special generic} map if the indices of singular points of it are always $0$. Morse functions with exactly two singular points on homotopy spheres and canonical projections of unit spheres are simplest examples.
In the case where the dimensions of the spaces of the targets are not sufficiently high, or at most $4$ (with conditions forcing the fundamental groups to be trivial or free, for example), manifolds admitting such maps are, in considerable cases, represented as connected sums of the total spaces of smooth bundles whose fibers are homotopy spheres. As a recent new study, the author has been studying cases where the dimensions of the spaces of the targets are greater than $4$, with conditions forcing the fundamental groups to be trivial, or in the simply-connected cases. We present an example as a theorem. We also review our proofs in related preprints here, as well as some fundamental arguments in \cite{saeki2}.
\begin{Thm}[\cite{kitazawa2} etc.]
\label{thm:6}
There exists a pair $(M_1,M_2)$ of $9$-dimensional closed and simply-connected manifolds enjoying the following properties.
\begin{enumerate}
\item \label{thm:6.1}
For $i=1,2$, $H_2(M_i;\mathbb{Z})$ is isomorphic to $\mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$, $H_3(M_i;\mathbb{Z})$ is the trivial group, and $H_4(M_i;\mathbb{Z})$ is isomorphic to $\mathbb{Z}$.
\item \label{thm:6.2}
For $M_1$ and $M_2$, consider the subgroups of their integral cohomology groups generated by all elements whose orders are infinite. They have the structures of subrings of the integral cohomology rings and they are isomorphic to the integral cohomology ring of $S^4 \times S^5$.
\item \label{thm:6.3}
The integral cohomology rings of $M_1$ and $M_2$ are not isomorphic.
\item \label{thm:6.4}
$M_1$ admits a special generic map into ${\mathbb{R}}^n$ if and only if $n=5,6,7,8,9$.
\item \label{thm:6.5}
$M_2$ admits a special generic map into ${\mathbb{R}}^n$ if and only if $n=6,7,8,9$.
\end{enumerate}
\end{Thm}
This is a kind of appendix to Main Theorems. We review our proof of this theorem according to \cite{kitazawa2}, in a way a bit different from the original one.
\begin{proof}
[A proof of Theorem \ref{thm:6}]
We can construct a closed and simply-connected manifold $M_1$ and a special generic map $f_1:M_1 \rightarrow {\mathbb{R}}^5$ and we explain about the construction. Note that in \cite{kitazawa2}, we have given another construction.
We have a $5$-dimensional compact and simply-connected manifold $P$ smoothly immersed into ${\mathbb{R}}^5$ such that $H_2(P;\mathbb{Z})$ is isomorphic to $\mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$, that $H_3(P;\mathbb{Z})$ is the trivial group and that $P$ has the (simple) homotopy type of a $3$-dimensional polyhedron. This is regarded as a result due to fundamental arguments on differential topology. More explicitly, we can also have this from complete classifications of $5$-dimensional closed and simply-connected manifolds in the topology category, the PL category or equivalently the piecewise smooth category, and the smooth category, presented in \cite{barden}.
They are equivalent in all these categories. We have a desired manifold by removing the interior of a smoothly embedded copy of the $5$-dimensional unit disk from a certain manifold in the paper. This $5$-dimensional closed and simply-connected manifold is used later, in the construction of $M_2$, as $M^{\prime}$.
We can construct the product map of a Morse function with exactly one singular point on a copy of the unit disk $D^{5}$ obtained by considering a natural height and the identity map on $\partial P$. We can construct this as a map onto a small collar neighborhood $N(\partial P)$ of $\partial P$. In the complementary set $P-{\rm Int}\ N(\partial P)$ of its interior in $P$, we can construct a trivial smooth bundle over the set whose fiber is a $4$-dimensional standard sphere. We can glue them naturally to obtain a smooth surjection onto $P$. By composing the immersion, we have a desired special generic map $f_1:M_1 \rightarrow {\mathbb{R}}^5$ on a suitable closed and connected manifold $M_1$.
By considering some propositions on fundamental groups and homology groups in section 3 of \cite{saeki2}, we can see that $M_1$ is a simply-connected manifold enjoying (\ref{thm:6.1}) and (\ref{thm:6.2}).
We explain about the non-existence of special generic maps into ${\mathbb{R}}^n$ for $n=1,2,3,4$. According to the presented theory, if such a map exists, then this is represented as the composition of a surjection onto an $n$-dimensional compact and simply-connected manifold $P_n$ with some smooth immersion into ${\mathbb{R}}^n$. Furthermore, \cite{nishioka} shows that the integral homology group of the $n$-dimensional compact and simply-connected manifold is free. Moreover, the theory of \cite{saeki2} before states that $H_j(M_1;\mathbb{Z})$ is isomorphic to $H_j(P_n;\mathbb{Z})$ for $1 \leq j \leq 9-n$. This is a contradiction, since $H_2(M_1;\mathbb{Z})$ is not free.
We explain about the existence of special generic maps into ${\mathbb{R}}^n$ for $n=6,7,8,9$ by applying theory first discovered in the preprint \cite{kitazawa4} of the author. In the construction of the special generic map $f_1:M_1 \rightarrow {\mathbb{R}}^5$, the presented product map of a Morse function and the identity map on $\partial P$, or a connected component ${\partial}_0 N(\partial P):=\partial N(\partial P) \bigcap {\rm Int}\ P$ of the boundary of the collar neighborhood $N(\partial P)$, and the projection onto the complementary set $P-{\rm Int}\ N(\partial P)$ are glued. We can glue them by the product map of two diffeomorphisms.
We present the product map of the diffeomorphisms more precisely. First we give suitable identifications of fibers of the two trivial bundles over the boundary ${\partial}_0 N(\partial P)$ and the complementary set $P-{\rm Int}\ N(\partial P)$ in $P$ where the fibers are the unit disk $D^{10-n}=D^5$ and the unit sphere $\partial D^{5}=S^{4}$, respectively. The diffeomorphisms are the identification between the base spaces and the diffeomorphism on the fiber $S^{4}$, regarded as the identity map on this fiber.
Our function, defined from a natural height on the unit disk $D^{5}$ and used here as the Morse function for the product map, is regarded as a restriction of a canonical projection of the unit sphere $S^4$ into $\mathbb{R}$. More precisely, this is restricted to a hemisphere. We do not present the rigorous definition of a canonical projection of a unit sphere and its hemisphere. However, we can define these naturally and they are defined rigorously in related preprints by the author for example. As a fundamental property, our height function here is represented as the composition of the following two maps.
\begin{itemize}
\item The restriction to the hemisphere of a canonical projection of the unit sphere $S^4$ to ${\mathbb{R}}^{n-4}$ in the case $n=6,7,8$ and that to the hemisphere of the canonically defined smooth embedding of the unit sphere $S^4$ to ${\mathbb{R}}^{n-4}={\mathbb{R}}^{5}$ for $n=9$.
\item A canonical projection of ${\mathbb{R}}^{n-4}$ to $\mathbb{R}$.
\end{itemize}
Although we do not define canonical projections of Euclidean spaces here rigorously, we may give definitions in the canonical way.
Here we construct a special generic map into ${\mathbb{R}}^n$ for $n=6,7,8,9$. Around $\partial P$ or ${\partial}_0 N(\partial P)$, we replace the existing product map by the new product map of the following two maps.
\begin{itemize}
\item The restriction to the hemisphere of a canonical projection of the unit sphere $S^4$ to ${\mathbb{R}}^{n-4}$ in the case $n=6,7,8$ and that to the hemisphere of the canonically defined smooth embedding of the unit sphere $S^4$ to ${\mathbb{R}}^{n-4}={\mathbb{R}}^{5}$ for $n=9$. This is presented just before. Furthermore, the space of the target is suitably restricted to a half-space of the Euclidean space.
\item The identity map on ${\partial}_0 N(\partial P)$.
\end{itemize}
We replace the projection of the trivial bundle over $P-{\rm Int}\ N(\partial P)$ by the product map of the following two maps.
\begin{itemize}
\item A canonical projection of the unit sphere $S^{9-5}=S^4$ to ${\mathbb{R}}^{n-5}$.
\item The identity map on $P-{\rm Int}\ N(\partial P)$.
\end{itemize}
By gluing the two maps by the product of the diffeomorphisms before, we have a special generic map into ${\mathbb{R}}^n$ instead. We can also construct this in such a way that the composition of this with a canonical projection to ${\mathbb{R}}^5$ is the originally constructed special generic map $f_1:M_1 \rightarrow {\mathbb{R}}^5$. This completes our exposition on the property (\ref{thm:6.4}).
According to \cite{barden} for example, we have a $5$-dimensional closed and simply-connected manifold $M^{\prime}$ whose 2nd integral homology group is isomorphic to $\mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$. This is also a rational homology sphere or a closed manifold whose rational homology group is isomorphic to that of a sphere. We have $M^{\prime}$ as one we can smoothly immerse and embed into ${\mathbb{R}}^6$. Remember that this manifold is also presented in obtaining the $5$-dimensional manifold $P$ before.
We put $M_2:=M^{\prime} \times S^{4}$.
We explain the non-existence of special generic maps on $M_2$ into ${\mathbb{R}}^n$ for $n=1,2,3,4,5$. In the case $n=1,2,3,4$, we can argue as in the case of $M_1$.
We explain the case $n=5$.
We apply some arguments from the third section of \cite{saeki2}, which are also essential ingredients of \cite{kitazawa1}.
Assume that there exists a special generic map $f_{2,5}:M_2 \rightarrow {\mathbb{R}}^5$. Then, by \cite{saeki2}, there exists a $5$-dimensional compact and simply-connected manifold $P_5$ smoothly immersed into ${\mathbb{R}}^5$ such that $f_{2,5}$ is represented as the composition of a surjection onto $P_5$ with the smooth immersion. Furthermore, $H_j(M_2;A)$ is isomorphic to $H_j(P_5;A)$ for any commutative ring $A$ and $j=1,2,3,4$, and these isomorphisms are induced by the surjection, denoted by $q_{f_{2,5}}:M_2 \rightarrow P_5$. We put $A:=\mathbb{Z}/2\mathbb{Z}$. We can take an element of $H^2(M_2;A)$ and an element of $H^3(M_2;A)$ whose cup product is not the zero element; such elements exist since $M_2$ is the product of $M^{\prime}$ and $S^4$, by the K\"unneth theorem. This cup product is also the pull-back, under the map $q_{f_{2,5}}$, of the cup product of an element of $H^2(P_5;A)$ and an element of $H^3(P_5;A)$. However, $P_5$ has the simple homotopy type of a $4$-dimensional polyhedron. This is a contradiction. We can also see that the properties (\ref{thm:6.1}), (\ref{thm:6.2}) and (\ref{thm:6.3}) are enjoyed by considering the integral cohomology rings and the K\"unneth theorem.
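The non-vanishing cup product used here can be sketched explicitly; the following display is our illustration (using Poincar\'e duality for the closed $5$-dimensional manifold $M^{\prime}$ over $\mathbb{Z}/2\mathbb{Z}$), not a formula taken from \cite{saeki2} or \cite{kitazawa1}.

```latex
% With A = Z/2Z, Poincare duality for the closed 5-manifold M' makes the pairing
%   H^2(M';A) x H^3(M';A) --> H^5(M';A) = A
% non-degenerate, so we can choose alpha, beta with alpha cup beta nonzero.
% Pulling back along the projection pr: M_2 = M' x S^4 --> M' (Kunneth theorem):
\[
\mathrm{pr}^{\ast}\alpha \smile \mathrm{pr}^{\ast}\beta
 \;=\; \mathrm{pr}^{\ast}(\alpha \smile \beta) \;\neq\; 0
 \quad\text{in } H^{5}(M_2;\mathbb{Z}/2\mathbb{Z}).
\]
```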
We consider the product map of the canonical projection of the unit sphere $S^{9-5}=S^4$ into ${\mathbb{R}}^{n^{\prime}}$ with $n^{\prime}=1,2,3,4$
and the identity map on $M^{\prime}$. We can regard this as a special generic map $f_{2,n^{\prime}+5}:M_2 \rightarrow {\mathbb{R}}^{n^{\prime}+5}$ for $n^{\prime}=1,2,3,4$. We can see that $M_2$ admits a special generic map into ${\mathbb{R}}^n$ for $n=6,7,8,9$. For this, remember that $M^{\prime}$ is smoothly immersed and embedded into ${\mathbb{R}}^6$.
This argument is also presented in a more general manner in \cite{kitazawa1}. This completes the proof of the property (\ref{thm:6.5}).
This completes the proof.
\end{proof}
In short, differences in the cohomology rings with coefficient ring $\mathbb{Z}$ affect the existence of special generic maps into Euclidean spaces and the dimensions of those spaces. In addition, these two manifolds cannot be distinguished by their fundamental groups, their integral homology groups, or the subgroups of their integral cohomology rings generated by all elements of infinite order, together with the structures of the subrings of the integral cohomology rings induced there.
We have presented a very explicit case here; this result, with its proof, is presented in a more general manner in \cite{kitazawa2}. As presented there, we can also have cases of $8$-dimensional closed and simply-connected manifolds, for example.
For related studies, see also \cite{kitazawa3} for example.
Our Main Theorems have pointed out that similar differences affect the types of round fold maps into ${\mathbb{R}}^2$. Note that we cannot use fundamental groups in our new cases. It is well-known that $3$-dimensional closed and orientable manifolds are determined by their fundamental groups in considerable cases.
\section{Acknowledgement}
The author would like to thank Osamu Saeki and Takahiro Yamamoto for related rigorous discussions on special generic maps, round fold maps, Main Theorems and Theorem \ref{thm:6}, a previous result of ours in \cite{kitazawa2}. The author would also like to thank Masaharu Ishikawa and Yuya Koda for positive and interesting comments on our present study and our related work \cite{kitazawasaeki1}.
Hammett Lee Bowen Jr. (November 30, 1947 – June 27, 1969) was a United States Army soldier and a recipient of the United States military's highest decoration—the Medal of Honor—for his actions in the Vietnam War.
Biography
Bowen joined the Army from Jacksonville, Florida in 1965, and received basic training at Fort Campbell, Kentucky. He then was sent to Fort Benning, Georgia, to attend the Non-Commissioned Officer Course (NCOC). He was in Class 4–68, where he graduated as an infantryman NCO. By June 27, 1969, he was serving as a staff sergeant in Company C, 2d Battalion, 14th Infantry Regiment, 25th Infantry Division. On that day, in Bình Dương Province, South Vietnam, during Operation Toan Thang III, Bowen smothered the blast of an enemy-thrown hand grenade with his body, sacrificing himself to protect those around him.
Bowen, aged 21 at his death, was buried at Restlawn Memory Gardens in his birth city of LaGrange, Georgia.
Medal of Honor citation
Staff Sergeant Bowen's official Medal of Honor citation reads:
S/Sgt. Bowen distinguished himself while serving as a platoon sergeant during combat operations in Binh Duong Province, Republic of Vietnam. S/Sgt. Bowen's platoon was advancing on a reconnaissance mission into enemy controlled terrain when it came under the withering crossfire of small arms and grenades from an enemy ambush force. S/Sgt. Bowen placed heavy suppressive fire on the enemy positions and ordered his men to fall back. As the platoon was moving back, an enemy grenade was thrown amid S/Sgt. Bowen and 3 of his men. Sensing the danger to his comrades, S/Sgt. Bowen shouted a warning to his men and hurled himself on the grenade, absorbing the explosion with his body while saving the lives of his fellow soldiers. S/Sgt. Bowen's extraordinary courage and concern for his men at the cost of his life served as an inspiration to his comrades and are in the highest traditions of the military service and the U.S. Army.
Hammett Bowen's Medal of Honor and other memorabilia about his life are on display in the lobby of the Hammett Bowen Operations Center of the Marion County Sheriff's Office in Ocala, Florida.
Hammett Bowen Jr. Elementary School in Ocala, Florida, is named in his honor.
See also
List of Medal of Honor recipients for the Vietnam War
References
1947 births
1969 deaths
American military personnel killed in the Vietnam War
United States Army Medal of Honor recipients
United States Army non-commissioned officers
People from LaGrange, Georgia
Vietnam War recipients of the Medal of Honor
Deaths by hand grenade
United States Army personnel of the Vietnam War
---
layout: post
date: '2016-10-16'
title: "PROM DRESSES La Femme 19899 Open Back Prom Dress"
category: PROM DRESSES
tags: [PROM DRESSES]
---
### PROM DRESSES La Femme 19899 Open Back Prom Dress
Just **$459.99**
<a href="https://www.eudances.com/en/prom-dresses/1074-la-femme-19899-open-back-prom-dress.html"><img src="//www.eudances.com/3117-thickbox_default/la-femme-19899-open-back-prom-dress.jpg" alt="La Femme 19899 Open Back Prom Dress" style="width:100%;" /></a>
<!-- break --><a href="https://www.eudances.com/en/prom-dresses/1074-la-femme-19899-open-back-prom-dress.html"><img src="//www.eudances.com/3116-thickbox_default/la-femme-19899-open-back-prom-dress.jpg" alt="La Femme 19899 Open Back Prom Dress" style="width:100%;" /></a>
Buy it: [https://www.eudances.com/en/prom-dresses/1074-la-femme-19899-open-back-prom-dress.html](https://www.eudances.com/en/prom-dresses/1074-la-femme-19899-open-back-prom-dress.html)
Q: Angular Material Design - page footer issue - double scrollbar I have a page footer implemented as follows.
HTML
<footer class="app-footer-main">
<section class="app-footer-items">
...
</section>
</footer>
Styles
.app-footer-main {
background-color: black;
}
.app-footer-items {
position: relative;
display: flex;
font-size: small;
justify-content: center;
}
I want this footer to be placed at the bottom of each page in the application. If the page content is large I would like to scroll down the page and see the footer at the end of the page.
HTML that renders the main content of the application, where the footer is embedded:
<mat-sidenav-container fullscreen class="sidenav-container">
<!-- Collapsible side content -->
<mat-sidenav #sidenav [mode]="'side'" class="navbar" role="navigation">
<mat-nav-list>
...
</mat-nav-list>
</mat-sidenav>
<!-- End Collapsible side content -->
<!-- Main Content Area -->
<div class="main-content">
<div class="mat-app-background">
<!-- Routed view -->
<router-outlet></router-outlet>
</div>
</div>
<!-- End Main Content Area -->
<app-footer #footer></app-footer>
</mat-sidenav-container>
Style for the main content:
.main-content {
padding: {
top: 0;
left: 15px;
right: 15px;
bottom: 0;
}
@include breakpoint($narrow-devices) {
padding: {
left: 15px;
right: 15px;
}
}
height: 100%;
overflow: auto;
}
:host ::ng-deep .mat-sidenav-container[fullscreen] {
top: 55px;
@include breakpoint($narrow-devices) {
top: 64px;
}
}
// :host /deep/ is used to allow styling child components when using emulated view encapsulation.
:host ::ng-deep .mat-sidenav-content {
transform: none !important;
}
.main-content {
& ::ng-deep .outlet,
& ::ng-deep .maxed-width {
@include breakpoint($narrow-devices) {
max-width: $page-max-width;
margin: {
left: auto;
right: auto;
}
}
}
}
.sidenav-container {
flex: 1;
}
The issue with this approach is that there are two scrollbars on the page, as you can see in the attached screenshot: one for the main content of the page and another for the footer. How can I resolve this issue, please?
A: I recently had this issue. The following code worked for me: set margin-top equal to the height of the footer. I used 'vh' instead of '%' for the heights of the main div and the footer, and that did the trick for me.
.footer {
position: relative;
width: 100%;
height: 20vh;
margin-top: 20vh;
margin-left: auto;
margin-right: auto;
bottom: 0;
overflow: hidden;
}
A: I have managed to resolve this issue by re-structuring the HTML as follows:
<mat-sidenav-container fullscreen class="sidenav-container">
<!-- Collapsible side content -->
<mat-sidenav #sidenav [mode]="'side'" class="navbar" role="navigation">
<mat-nav-list>
...
</mat-nav-list>
</mat-sidenav>
<!-- End Collapsible side content -->
<mat-sidenav-content>
<!-- Main Content Area -->
<div class="main-content">
<div class="mat-app-background">
<!-- Routed view -->
<router-outlet></router-outlet>
</div>
</div>
<!-- End Main Content Area -->
<app-footer #footer></app-footer>
</mat-sidenav-content>
</mat-sidenav-container>
Change the height property in main-content CSS class to min-height as follows:
.main-content {
padding: {
top: 0;
left: 15px;
right: 15px;
bottom: 0;
}
@include breakpoint($narrow-devices) {
padding: {
left: 15px;
right: 15px;
}
}
min-height: 100%;
overflow: auto;
}
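A flex-column variant of the same idea (a sketch with illustrative class names, not code taken from the question) keeps a single scrollbar on the container, because the content and the footer live inside the one scrolling element:

```css
/* Make the scroll container a flex column so the routed content and
   the footer share one scrollbar. Class names here are illustrative. */
.scroll-container {          /* e.g. styles applied to mat-sidenav-content */
  display: flex;
  flex-direction: column;
  height: 100%;
  overflow: auto;            /* the single scrollbar */
}
.main-content {
  flex: 1 0 auto;            /* grows and pushes the footer down */
}
app-footer {
  flex-shrink: 0;            /* footer keeps its natural height */
}
```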
Olmillos de Castro is a municipality and town in Spain, in the province of Zamora, within the autonomous community of Castile and León.
The municipality includes the annexed localities of Marquiz de Alba, Navianos de Alba and San Martín de Tábara. It covers an area of 71.39 km² and, according to the INE municipal register, has a population density of 3.7 inhabitants/km².
Location
Olmillos de Castro is located 40 km from Zamora, the provincial capital, near the Sierra Roldana and a few kilometres from the Ricobayo reservoir.
History
During the Middle Ages, Olmillos de Castro formed part of the Kingdom of León, whose monarchs would have undertaken the repopulation of the locality within the repopulation process carried out in the area. Its historical origin is linked to the fortress of Castrotorafe, and it depended on the encomienda of the Order of Santiago based in that fortress.
In the Early Modern period, Olmillos de Castro belonged to the district of Carbajales de Alba in the province of Zamora, as did the hamlets of Marquiz de Alba and Navianos de Alba, while San Martín de Tábara belonged to the district of Tábara, as Tomás López recorded in 1773 in his Mapa de la Provincia de Zamora. Thus, when the provinces were reorganized and the present ones created in 1833, the municipality remained in the province of Zamora, within the Leonese Region, and in 1834 it was incorporated into the judicial district of Alcañices, a dependency that lasted until 1983, when that district was abolished and merged into the judicial district of Zamora.
Around 1850, the localities of Marquiz de Alba, Navianos de Alba and San Martín de Tábara were integrated into the municipality of Olmillos de Castro, giving the municipal territory its present extent.
Heritage
The most notable feature of the town is the parish church of Santa Marina, especially its tall, slender espadaña, a Neoclassical bell gable with three openings and pinnacles, flanked by a small lateral tower with a small dome and a blocked oculus marked with the Cross of Santiago. Inside, the church houses a simple altarpiece with the image of the patron saint beneath a calvary. The beauty of the church contrasts completely with the austerity and simplicity of the building that houses the municipal offices.
Demographics
The figures for 1996 refer to 1 May; later figures refer to 1 January.
Notable residents
José Folgado. A prestigious local artisan and blacksmith, a specialist in scales, steelyard weights and the art of forging, who is commemorated with a plaque on what was his family home.
Festivities
Santa Marina: 18 July.
San Roque, on 19 January, and La Hiniesta, on 31 May, are the main festivities of Olmillos de Castro.
The hamlet of Marquiz de Alba celebrates San Sebastián on 21 January and Santa Cruz on 3 May.
Navianos de Alba commemorates Santa Catalina on 30 April and Nuestra Señora del Rosario on 7 October.
The residents of San Martín de Tábara celebrate on 28 May, San Roque, and on 11 November, San Martín.
Notes
External links
Diputación de Zamora
Municipal fact sheet from Caja España
Localities of Olmillos de Castro
Localities of Tierra de Alba
Localities of the province of Zamora
\section{Introduction}
Hadron spectroscopy in present times appears to have become subdivided
into many different fields of interest, like conventional quark
spectroscopy, hadronic molecular states, dynamically generated resonances,
tetraquarks and pentaquarks, glueballs, gluonic hybrids, and so forth.
The advancement of detector and analysis techniques at the many new
experimental facilities in the intermediate-energy range has been resulting
in the observation of more and more hadronic states,
many of which do not seem to fit into the traditional quark spectrum of
$q\bar{q}$ mesons and $qqq$ baryons. This has led to the investigation of
other possible configurations that might be viable within QCD.
On the one hand, this makes it important and interesting
to explore the structure and properties of hadrons, which may shed light on
the dynamics of strong interactions at low energies.
On the other hand, though, a very careful analysis is required, in order to
avoid confusions and controversies.
We are convinced that the most important need of present-day research in
hadron spectroscopy is some consensus on the ideal quark model to confront
with the data. And right now is probably the best moment
to tackle this problem, since quite a lot of data,
at least concerning meson spectroscopy, is being produced
(see for example
Refs.~\cite{NPA827p291C,STosi2,ARXIV08103829,
AIPCP1182p455,ARXIV09103404,LATHUILE02p569,
NPPS186p371,ARXIV10012252,ARXIV09065333}),
and even more data, of better quality, is expected in the near future.
All this could help gather a vastly improved understanding of quarkonia
and other mesonic resonances.
However, in such a situation it is extremely important
to first agree on the hadron spectrum, as obtained from a chosen quark
model, since only then valid conclusions can be drawn about
possible incompatibilites with standard quark configurations.
Indeed a lot of work is being done on trying to understand
the conventional and unconventional structure of different
``exotic'' hadrons. For example, a very interesting analysis of the
quark content and possible molecular nature of many newly found mesonic
states has been carried out on the basis of QCD sum rules
\cite{PRD80p056002,ARXIV09111958}, while
an attempt to distinguish between quarkonium and hybrid states
was made in Ref.~\cite{PLB657p49}.
Another very interesting investigation, namely of the quark
and molecular content of baryon resonances,
was reported in Ref.~\cite{PRC78p025203}.
Furthermore, Ref.~\cite{PRD80p074028} nicely explained
the phenomenon of dynamically generated resonances
and the concept of dynamical reconstruction,
in order to be able to pinpoint (dominantly) $q\bar{q}$ states.
All these meticulous efforts and the corresponding results
would get more merit, if a universally agreed
conventional hadron spectrum was known.
The main issue is that experimental evidence for possible resonances
is obtained from total or partial-wave cross sections,
as well as angular distributions and decay modes.
In order to interpret the data, one needs a model, since perturbative
QCD cannot be used at low energies.
In the present paper, we will focus on
non-exotic meson-meson scattering. Nevertheless, the results can easily
be generalized for application to hadron-pair production \cite{AP323p1215}.
We intend to describe the cross sections and resonance pole positions
for meson-meson scattering in an as large as possible energy range,
rather than focusing on just one peak.
At this point enters the main philosophy behind the model,
namely that the enhancement structure of
cross sections in non-exotic meson-meson scattering
stems from the quark-antiquark spectrum.
Consequently, for such reactions we must study a coupled system
of a $q\bar{q}$ state and non-exotic two-meson channels.
This is an absolutely minimal requirement for modeling
the cross sections in this case.
Further extensions to multiquark or hybrid resonances might be contemplated
in case the minimal model turns out not to reproduce the experimental data
sufficiently well.
Furthermore, the proposed strategy \cite{PRD21p772,PRD27p1527}
simultaneously covers two-meson molecules and $q\bar{q}$ systems.
Hence, the physical solutions are not
just pure $q\bar{q}$ or molecular states,
but rather mixtures of these two configurations.
One could then try to find out which component is dominant
\cite{ZPC19p275,PRD44p2803,ARXIV10073461}.
However, that lies beyond the scope of this paper.
In more popular terms, one might refer to
the states obtained from such a model
as $q\bar{q}$ systems surrounded by a meson cloud.
In the past, our approach was also called the unitarization scheme
for the quark-antiquark system
\cite{Cargese75p305,AP123p1,PRL49p624,PRD29p110}.
The model we are going to use treats confined quarks and hadronic
decay channels on an equal footing, via coupled channels, regardless
of whether the energy is above or below the thresholds of the decay channels.
It was developed in Refs.~\cite{PRD21p772,
PRD27p1527,ZPC21p291,EPJC22p493,AP324p1620},
and has been extensively used to study the properties of mesonic resonances
(for some of the recent works, see
Refs.~\cite{EPL85p61002,ARXIV10052486,
ARXIV10052490,PRD80p094011,ARXIV08121527,PRD80p074001}
and references therein).
The effective meson-meson potential in the model
consists of $s$-channel exchange
of a confined $q\bar{q}$ pair, with radial quantum
number running from 0 to infinity and orbital angular momentum
compatible with total $J^{PC}$.
The importance of hadron dynamics in understanding the meson
spectrum for low and intermediate energies already becomes obvious from the fact
that it easily gets energetically favorable for the ``string'' between
the quark and the antiquark to break, alongside the creation of a new and light
$q\bar{q}$ pair, which then may lead to hadronic decay. An even stronger
indication comes from the light scalar mesons, whose unconventional nature
is related to their very strong coupling to $S$-wave two-meson channels
\cite{ZPC30p615}. The latter work also shows that the inclusion of
bare-meson exchange in the $s$-channel, in addition to the meson-meson contact
interaction
\cite{PRL49p624,PRD60p074023,PRD59p074001,ARXIV08050552,ARXIV08054803},
in certain cases leads to finding new states which might be absent when
considering contact interactions only.
This phenomenon was explained very neatly
in Ref.~\cite{AP324p1620},
where it was shown that, for the study of a restricted energy range
corresponding to a particular resonance,
the contribution from different diagrams
involving meson exchange with different quantum numbers
gives rise to a constant interaction, which is equivalent
to considering a contact interaction in unitarized models.
It was further shown in Ref.~\cite{AP324p1620} that, in order to
understand a larger energy range, covering several resonances,
meson-exchange diagrams are required as well.
This explains why the common use of contact interactions in unitarized models,
to study dynamically generated hadron resonances, works quite well
\cite{PRD80p114013,PRD76p074016,ARXIV10050283,PRC77p042203,
EPJA37p233,ARXIV10030364,PRD80p094012,PRC79p065207,PRC80p055206}.
However, from Ref.~\cite{AP324p1620} it becomes clear
that the development of a broader and more general perspective
for hadron spectroscopy requires the
treatment of quarks and hadrons as coupled systems.
In this paper, we will show that not only the handling
of coupled mesons and quarks is necessary,
but also the full solution of the scattering equations is essential.
In particular, we will demonstrate that approximating resonance pole
positions perturbatively leads to unreliable results.
In the next section, we will first describe the exact formalism,
followed by the construction of a perturbative expansion thereof.
In the subsequent sections, we will choose some specific examples
to show that no meaningful results can be obtained
from the perturbative expansion.
Finally, we will summarize the detailed discussions
in the paper.
\section{Formalism}
\label{Formalism}
We study meson-meson scattering using a model
in which quarks and hadrons are considered coupled systems.
The formalism amounts to solving a scattering equation for mesons,
with the lowest-order term of the Born series given by an
effective interaction due to the exchange of a confined $q\bar{q}$ pair.
The potential between the latter pair is written
in terms of a harmonic oscillator. The eigenenergies of the harmonic
oscillator thus correspond to the bare $q\bar{q}$ spectrum.
The effective meson-meson potential, which is the lowest-order term of the
full scattering amplitude, involves an infinite sum over all the diagrams
with $s$-channel $q\bar{q}$ exchange, having different radial quantum number $n$
($0\leq n < \infty$), as shown in Fig.~\ref{fig1} below.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.45]{diagram4.eps}
\caption[]{The meson-meson potential,
i.e., the Born term of the full scattering amplitude,
which involves the exchange of a confined $q\bar{q}$ pair,
with radial quantum number $n$
running from 0 to $\infty$.}
\label{fig1}
\end{figure}
Although not strictly necessary, it is illustrative
to consider a formulation of the model
in terms of a coupled system of nonrelativistic Hamiltonians.
However, a rigorous derivation in terms of a sum of meson loops,
which leads to the same final result, is also possible.
Since the model is based on coupling a confined quark-antiquark pair
and a meson pair, we describe the system by the equations
\begin{eqnarray}
H_{c}\psi_{c}(\vec{r})+V_{T}(\vec{r})\psi_{f}(\vec{r}) &=&
E\psi_{c}(\vec{r}),
\label{hc}\\
H_{f}\psi_{f}(\vec{r})+V_{T}(\vec{r})\psi_{c}(\vec{r}) &=&
E\psi_{f}(\vec{r}),
\label{hf}
\end{eqnarray}
where the subscripts ``$c$'' and ``$f$'' (here and throughout this article)
refer to the confined quarks and free mesons
(i.e., considering them plane waves), respectively,
and $V_{T}$ is the transition potential between the two sectors.
$H_{c}$ and $H_{f}$ describe the Hamiltonians of these sectors, reading
\begin{eqnarray}
H_{c} &=&
-\frac{\bigtriangledown_{r}^{2}}{2\mu_{c}}+m_{q}+m_{\bar{q}}+V_{c}(r),\\
H_{f} &=&
-\frac{\bigtriangledown_{r}^{2}}{2\mu_{f}}+M_{1}+M_{2},
\end{eqnarray}
where the confining potential is assumed to be a harmonic oscillator, viz.\
\begin{equation}
V_{c}=\frac{1}{2}\mu_{c}\omega^{2}r^{2},
\end{equation}
with $\mu_{c}$ and $\omega$ the reduced mass and frequency
of the $q\bar{q}$ system, respectively.
Furthermore, $M_{1}$, $M_{2}$, and $m_{q}$, $m_{\bar{q}}$
are the meson and quark masses, respectively.
Our choice of a harmonic-oscillator potential in the confined sector
is based on earlier observations of regular spacings
in the quarkonium spectra \cite{PRD21p772,PRD27p1527}, which seem to be
confirmed by states found in even the most recent experiments
\cite{EPL85p61002,ARXIV09044351,ARXIV09062278,ARXIV10053490}.
In writing down the equations above, we have assumed only one $q\bar{q}$
and one meson-meson channel, for the sake of simplicity.
These equations can be straightforwardly generalized to the
multichannel case, in which they take a matrix form \cite{PRD80p094011}.
Now, Eqs.~(\ref{hf}) and (\ref{hc}) can be rewritten as
\begin{eqnarray}
(E-H_{c})\psi_{c}(\vec{r}) &=& V_{T}(\vec{r})\psi_{f}(\vec{r}),
\\\nonumber
(E-H_{f})\psi_{f}(\vec{r}) &=& V_{T}(\vec{r})\psi_{c}(\vec{r}).
\end{eqnarray}
Then, the confinement wave function $\psi_{c}(\vec{r})$ must be eliminated
from the equations, as it never develops into an asymptotic state.
Thus we get
\begin{equation}
\psi_{f}(\vec{r})=(E-H_{f})^{-1}V_{T}(E-H_{c})^{-1}V_{T}\psi_{f}(\vec{r}).
\end{equation}
From first principles of standard scattering theory, we can conclude that the
factor
\begin{equation}
V_{T}(E-H_{c})^{-1}V_{T}
\end{equation}
acts like an ``effective'' meson-meson potential, which,
if denoted by $V_{MM}$, implies
\begin{equation}
\langle\vec{P}_{f}\mid V_{MM}\mid\vec{P}^{\prime}_{f}\rangle
=\langle\vec{P}_{f}\mid V_{T}(E(\vec{P}_{f})-H_{c})^{-1}V_{T}
\mid\vec{P}^{\prime}_{f}\rangle,
\label{vmm}
\end{equation}
where the total center-of-mass energy (CM) $E$ is given by
\begin{equation}
E(\vec{P}_{f})=\frac{\vec{P}_{f}^{\,2}}{2\mu_{f}}+M_{1}+M_{2},
\end{equation}
with $\mu_{f}$ the reduced mass of the two mesons,
and $P_{f}$ ($P^{\prime}_{f}$) denoting the CM momentum
of the two-meson initial (final) state.
Furthermore, we denote the energy eigenvalue of $H_{c}$ by $E_{nl}$,
i.e.,
\begin{equation}
E_{nl}=\omega (n_{c}+l_{c}+3/2)+m_{q}+m_{\bar{q}},
\label{enl}
\end{equation}
and the corresponding eigensolutions by
$\langle \vec{r}_{c}\mid n_{c},l_{c},m_{c}\rangle$.
By introducing in Eq.~(\ref{vmm}) a complete set corresponding to this state,
we get
\begin{eqnarray}\nonumber
&& \langle\vec{P}_{f}\mid V_{MM}\mid\vec{P}^{\prime}_{f}\rangle
\\\nonumber
&& =\sum\limits_{n_{c},l_{c},m_{c}}\langle\vec{P}_{f}\mid V_{T}
\mid n_{c},l_{c},m_{c}\rangle\langle n_{c},l_{c},m_{c}
\mid (E(\vec{P}_{f})-H_{c})^{-1}V_{T}\mid\vec{P}^{\prime}_{f}\rangle\\
&& =\sum\limits_{n_{c},l_{c},m_{c}}\langle\vec{P}_{f}
\mid V_{T}\frac{\mid n_{c},l_{c},m_{c}\rangle
\langle n_{c},l_{c},m_{c}\mid}{(E(\vec{P}_{f})-E_{nl})}V_{T}
\mid\vec{P}^{\prime}_{f}\rangle,
\end{eqnarray}
which, upon further introduction of several complete sets
corresponding to the meson-meson configuration space, gives
\begin{eqnarray}\nonumber
&& \langle\vec{P}_{f}\mid V_{MM}
\mid\vec{P}^{\prime}_{f}\rangle\;
=\sum\limits_{n_{c},l_{c},m_{c}}
\int d^{3}r_{f}\int d^{3}r_{f}^{\prime}
\int d^{3}r_{f}^{\prime\prime}\int d^{3}r_{f}^{\prime\prime\prime}
\frac{\langle\vec{P}_{f}\mid\vec{r}_{f}\rangle}
{(E(\vec{P}_{f})-E_{nl})} \times
\\
&& \times \langle\vec{r}_{f}\mid V_{T}\mid\vec{r_{f}^{\prime\prime}}\rangle
\langle\vec{r_{f}^{\prime\prime}}\mid n_{c},l_{c},m_{c}\rangle
\langle n_{c},l_{c},m_{c}\mid\vec{r_{f}^{\prime\prime\prime}}
\rangle\langle\vec{r_{f}^{\prime\prime\prime}}
\mid V_{T}\mid\vec{r}_{f}^{\prime}\rangle\langle
\vec{r}_{f}^{\prime}\mid\vec{P}^{\prime}_{f}\rangle.
\label{expansion}
\end{eqnarray}
For the transition potential, we take a local delta-shell function of the form
\begin{equation}
\langle\vec{r}_{f}\mid V_{T}\mid\vec{r}_{f}^{\prime}\rangle
=
\frac{\lambda}{\mu_{c}a}\delta (r_{f}- a)
\delta^{3}(\vec{r}_{f}-\vec{r}_{f}^{\prime}).
\label{vt}
\end{equation}
This form of potential has been proven useful
in describing the breaking of the color string
\cite{IJTPGTNO11p179}.
The $\lambda$ and $a$ in Eq.~(\ref{vt}) are the two parameters of the model,
with the former being the coupling of the meson channel
to the quark channel, and the latter an average distance
between the quarks.
The coupling $\lambda$ is varied between 0 and 1 in the present study,
with $\lambda=0$ corresponding to decoupled meson
and quark systems. Since the meson-meson state is considered a plane wave,
decoupling would result in a pure (``bare'') $q\bar{q}$ spectrum.
On the other hand, $\lambda\,\geq$ 1 represents
strong coupling to the meson-meson channel.
The parameter $a$ is taken in the range 3--5~fm.
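In the decoupled limit $\lambda=0$, the spectrum reduces to the bare harmonic-oscillator levels of Eq.~(\ref{enl}). A minimal numerical sketch follows; the parameter values below are illustrative assumptions, not the fitted values of the model.

```python
# Bare (lambda = 0) spectrum of the confined q-qbar pair,
# E_nl = omega * (n_c + l_c + 3/2) + m_q + m_qbar, as written in Eq. (enl).
# The numerical values are illustrative assumptions only.
OMEGA = 0.19          # oscillator frequency (GeV), an assumed value
M_Q = M_QBAR = 0.406  # constituent quark masses (GeV), assumed values

def e_nl(n_c, l_c, omega=OMEGA, m_q=M_Q, m_qbar=M_QBAR):
    """Eigenenergy (GeV) of the decoupled harmonic oscillator."""
    return omega * (n_c + l_c + 1.5) + m_q + m_qbar

# Successive levels are equally spaced, as Eq. (enl) is written:
ground = e_nl(0, 0)                 # 0.19 * 1.5 + 2 * 0.406 = 1.097 GeV
spacing = e_nl(1, 0) - e_nl(0, 0)   # = OMEGA
```

With $\lambda$ switched on, these bare levels are shifted and acquire widths, which is the point of the coupled-channel treatment described above.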
Using the above form for $V_{T}$, and the normalization
$\langle\vec{r}\mid\vec{p}\rangle\; = e^{i\vec{p}\cdotp\vec{r}}/(2\pi)^{3/2}$,
Eq.~(\ref{expansion}) becomes
\begin{eqnarray}\nonumber
&& \langle\vec{P}_{f}\mid V_{MM}\mid\vec{P}^{\prime}_{f}\rangle
=\sum\limits_{n_{c},l_{c},m_{c}}
\int\frac{d^{3}r_{f}}{\sqrt{(2\pi)^{3}}}
\int\frac{d^{3}r_{f}^{\prime}}{\sqrt{(2\pi)^{3}}}
e^{-i\vec{P}_{f}\cdotp\vec{r}_{f}}
\frac{\lambda}{\mu_{c}a}\delta(r_{f}-a) \times \\
&& \times \frac{\langle\vec{r}_{f}\mid n_{c},l_{c},m_{c}\rangle
\langle n_{c},l_{c},m_{c}\mid\vec{r}_{f}^{\prime}\rangle}{E(\vec{P_{f}})-E_{nl}}
\frac{\lambda}{\mu_{c}a}\delta(r_{f}^{\prime}-a)
e^{i\vec{P}_{f}^{\prime}\cdotp\vec{r}\,^{\prime}_f}.
\label{vmm2}
\end{eqnarray}
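The energy dependence of Eq.~(\ref{vmm2}) is that of a sum of $s$-channel poles at the bare energies, schematically $V_{MM}(E)\propto\sum_n g_n^2/(E-E_{nl})$. The sketch below only illustrates this pole structure; the couplings and bare energies are made up, and it is not the model's actual partial-wave expression.

```python
# Schematic energy dependence of the effective meson-meson potential:
# a sum over s-channel qqbar exchanges, V(E) ~ sum_n g_n^2 / (E - E_n).
# All numbers below are made-up illustrations, not model parameters.
BARE_ENERGIES = [1.0, 1.4, 1.8]   # stand-ins for the E_nl (GeV)
COUPLINGS     = [0.3, 0.2, 0.1]   # stand-ins for the couplings g_n

def v_eff(e):
    """Schematic V_MM(E); it has a pole at each bare energy E_n."""
    return sum(g * g / (e - e_n)
               for g, e_n in zip(COUPLINGS, BARE_ENERGIES))

# The potential flips sign across each bare pole: large and negative
# just below E_n, large and positive just above.
below, above = v_eff(0.999), v_eff(1.001)
```

It is this pole structure, rather than a contact term, that lets the full (non-perturbative) solution interlace resonances with the bare spectrum over a wide energy range.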
The functions $\langle\vec{r_{f}}\mid n_{c},l_{c},m_{c}\rangle
=Y^{l_f}_{m_f}(\hat {r})\langle|\vec{r_{f}}|\mid n_{c},l_{c},m_{c}\rangle$
represent an overlap wave function of the meson-meson
$\rightarrow\,q\bar{q}$ vertex.
In order to determine this overlap function,
we assume a mechanism for the transition from the meson channel
to the quark channel, or vice versa.
To describe this mechanism, let us consider a confined quark-antiquark
pair with total spin, angular momentum, intrinsic spin, and radial
quantum number $j$, $l_{1}$, $s_{1}$, and $n_{1}$, respectively.
As one usually expects from QCD at low $Q^{2}$,
it is assumed that the string between the quark and the antiquark
breaks with the creation of a new $q\bar{q}$ pair.
This new pair is assumed to have vaccuum quantum number,
corresponding to a $^{2s_{2}+1}l_{2\,j_{2}}=\,^{3\!}P_0$ state.
This yields two $q\bar{q}$ pairs, which then rearrange
to produce two mesons with quantum numbers
$j^{\prime}_{1},l^{\prime}_{1},S^{\prime}_{1},n^{\prime}_{1}$
and $j^{\prime}_{2},l^{\prime}_{2},S^{\prime}_{2},n^{\prime}_{2}$,
respectively (see Fig.~\ref{fig2}).
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.6]{fig1.eps}
\caption{The transition vertex from the quark channel
to the two-meson channel.}
\label{fig2}
\end{figure}
Mathematically, the rearrangement of the two $q\bar{q}$ pairs,
one of which is a $^{3\!}P_0$ state and the other having the quantum numbers
of the decaying meson, can be expressed by treating them
as four independent nonrelativistic harmonic oscillators.
Let us label them with numbers,
and assume that system $(1+2)$ represents
the quark-antiquark pair under consideration, while
system $(3+4)$ stands for the
newly created $^{3\!}P_0$ $q\bar{q}$ pair.
This can be treated as a four-body problem,
which can be reduced to a three-body problem
in the global CM system,
by considering the coordinates (momenta) of the CM
of the $(1+2)$, $\vec{r}_{12}(\vec{p}_{12})$, and
$(3+4)$, $\vec{r}_{34}(\vec{p}_{34})$
systems, along with their relative motion
$\vec{r}_{1234}(\vec{p}_{1234})$.
The situation after the transition is described
by assuming that the $(1+4)$ and $(3+2)$ systems
represent the two-meson state.
This can be represented diagrammatically,
as shown in Fig.~\ref{fig3}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.6]{fig2.eps}
\vspace{1cm}
\caption{The four quarks, labeled by numbers, before and after the transition.
It is assumed that before the transition the string between
a $q\bar{q}$ pair ($1$ and $2$) breaks to form a new pair ($3$ and $4$),
with $^{3\!}P_0$ quantum numbers, and the whole system rearranges
so as to produce two mesons made of the quark pairs $1$-$4$ and $3$-$2$.}
\label{fig3}
\end{figure}
The Hamiltonians for the systems, before and after the transition,
can be written in the global CM frame as
\begin{eqnarray}
H_\xscrpt{before}&=&\frac{1}{2}\omega\Bigl
\{ r_{12}^{2}+r_{34}^{2}+r_{1234}^{2}
+p_{12}^{2}+p_{34}^{2}+p_{1234}^{2} \Bigr\},\\
H_\xscrpt{after } &=&\frac{1}{2}\omega\Bigl
\{ r_{14}^{2}+r_{32}^{2}+r_{1432}^{2}
+p_{14}^{2}+p_{32}^{2}+p_{1432}^{2}\Bigr\},
\end{eqnarray}
where $r_{ij}$ ($p_{ij}$) is the coordinate (momentum) of the $ij$ system,
and $r_{ijkl}$ ($p_{ijkl}$) is the relative coordinate (momentum)
of the CM of the $ij$ and $kl$ systems.
The transformation of e.g.\ the coordinates of the four quarks
can be expressed in terms of an orthogonal transformation
matrix $\alpha$
(as explained in Ref.~\cite{ZPC21p291}), i.e.,
\begin{equation}
\left(
\begin{array}{c}
r_{14}\\
r_{32}\\
r_{1432}
\end{array}
\right)
=
\left(
\begin{array}{ccc}
\alpha_{11} & \alpha_{12} & \alpha_{13}\\
\alpha_{21} & \alpha_{22} & \alpha_{23}\\
\alpha_{31} & \alpha_{32} & \alpha_{33}\\
\end{array}
\right)
\left(
\begin{array}{c}
r_{12}\\
r_{34}\\
r_{1234}
\end{array}
\right).
\end{equation}
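Since $\alpha$ is orthogonal, it leaves the sums of squares
$r_{12}^{2}+r_{34}^{2}+r_{1234}^{2}$ and $p_{12}^{2}+p_{34}^{2}+p_{1234}^{2}$
invariant, which is why $H_\xscrpt{before}$ and $H_\xscrpt{after}$ have
identical oscillator form. A minimal numerical illustration, with a randomly
generated orthogonal matrix standing in for $\alpha$ (the actual entries of
$\alpha$ are fixed by the quark masses, see Ref.~\cite{ZPC21p291}):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 3x3 orthogonal matrix via QR decomposition, standing in for alpha;
# the true alpha of the model is determined by the quark masses.
alpha, _ = np.linalg.qr(rng.normal(size=(3, 3)))

# Jacobi coordinates before the transition, (r_12, r_34, r_1234),
# each a 3-vector, stacked as rows of a 3x3 array.
r_before = rng.normal(size=(3, 3))
r_after = alpha @ r_before  # mixes them into (r_14, r_32, r_1432)

# The quadratic form entering the Hamiltonian is invariant under alpha.
print(np.sum(r_before**2), np.sum(r_after**2))
```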
Thus, if the wave functions of the systems before and after the transition
are written as $\psi^E_{\{n,l,m\}}(r_{12},r_{34},r_{1234})$
and
$\chi^E_{\{n^{\prime},l^{\prime},m^{\prime}\}}(r_{14},r_{32},r_{1432})$,
respectively, then
\begin{eqnarray}\nonumber
&& \langle\psi^E_{\{n,l,m\}}(r_{12},r_{34},r_{1234})\!\mid \; =
\sum\limits_{n^{\prime}, l^{\prime}, m^{\prime}}
\int dr_{14}\int dr_{32}\int dr_{1432}
\\\nonumber
&& \langle\psi^E_{\{n,l,m\}}(r_{12},r_{34},r_{1234})\!
\mid\chi^E_{\{n^{\prime},l^{\prime},m^{\prime}\}}(r_{14},r_{32},r_{1432})
\rangle\langle\chi^E_{\{n^{\prime},l^{\prime},m^{\prime}\}}
(r_{14},r_{32},r_{1432})\mid.
\end{eqnarray}
Now we define a transformation matrix
$D^E_{\{\{n,l,m\},\{n^{\prime},l^{\prime},m^{\prime}\}\}}$ as
\begin{eqnarray}\nonumber
&&D^E_{\{\{n,l,m\},\{n^{\prime},l^{\prime},m^{\prime}\}\}}=
\\\nonumber
&&\int dr_{14}\int dr_{32}\int dr_{1432}
\langle\psi^E_{\{n,l,m\}}(r_{12}, r_{34}, r_{1234})
\mid\chi^E_{\{n^{\prime},l^{\prime},m^{\prime}\}}(r_{14},r_{32},r_{1432})\rangle,
\end{eqnarray}
for which the following analytical expression was obtained
in Ref.~\cite{ZPC21p291}:
\begin{eqnarray}
&&{\cal D}^{E}_{\left\{
\left\{ n,\ell ,m\right\}\, ,\,
\left\{ n^{\prime},{\ell}^{\prime},m^{\prime}\right\}\right\}}
\left(\left\{\bf{r}\right\}\, ;\,\left\{\bf{r}^{\prime}\right\}\right)
\; =\;\\\nonumber
&&
\left (\frac{\pi}{4}\right)^{\frac{1}{2}N(N-1)}
\left\{\prod_{i=1}^{N}(-1)^{n_{i}}
\left[\frac{\Gamma\left(n_{i}+1\right)
\Gamma\left(n_{i}+\ell_{i}+\frac{3}{2}\right)}{2\ell_{i}+1}\right]^{1/2}
\right\}\\\nonumber
&&
\left\{
\prod_{j=1}^{N}(-1)^{n_{j}^{\prime}}
\left[\frac{\Gamma\left(n_{j}^{\prime}+1\right)
\Gamma\left(n_{j}^{\prime}+\ell_{j}^{\prime}+\frac{3}{2}\right)}{2
\ell_{j}^{\prime}+1}\right]^{1/2}
\right\}\sum_{n_{ij},\ell_{ij},m_{ij}}
\nonumber\\ & &
\left[\;
\left\{
\prod_{i=1}^{N}\,
\delta\left(\sum_{j=1}^{N}\left( 2n_{ij}+\ell_{ij}\right),\,
2n_{i}^{\prime}+\ell_{i}^{\prime}\right)\,
\left(\begin{array}{ccc}
\ell_{i1}&\cdots &\ell_{iN}\\
m_{i1}&\cdots & m_{iN}\end{array}\right|\left.
\begin{array}{c}\ell_{i}^{\prime}\\ [10pt] m_{i}^{\prime}\end{array}\right)
\right\}
\right.
\nonumber\\ & &\,\,\,\,\,\left\{
\prod_{j=1}^{N}\,
\delta\left(\sum_{i=1}^{N}\left( 2n_{ij}+\ell_{ij}\right),\,
2n_{j}+\ell_{j}\right)\,
\left(\begin{array}{ccc}
\ell_{1j}&\cdots &\ell_{Nj}\\
m_{1j}&\cdots & m_{Nj}\end{array}\right|\left.
\begin{array}{c}\ell_{j}\\ [10pt] m_{j}\end{array}\right)
\right\}
\nonumber\\ & &\left.\,\,\,\,\,
\left\{
\prod_{i=1}^{N}\,\prod_{j=1}^{N}\,
\left(\alpha_{ij}\right)^{2n_{ij}+\ell_{ij}}\,
\frac{2\ell_{ij}+1}{\Gamma\left(n_{ij}+1\right)
\Gamma\left(n_{ij}+\ell_{ij}+\frac{3}{2}\right)}
\right\}
\;\right] .
\end{eqnarray}
We denote the elements of this transition $D$-matrix by $g_{n}$,
which give the strength of the transition from
a particular meson channel to a particular quark channel, and vice versa.
These $g_{n}$ precisely correspond to the overlap wave functions
$(\sqrt{a}/\mu_c)\langle|\vec{r}_{f}|\mid n_{c},l_{c},m_{c}\rangle$
in Eq.~(\ref{vmm2}).
Replacing these overlap functions by $\mu_{c}g_{n}/\sqrt{a}$ turns
Eq.~(\ref{vmm2}) into
\begin{eqnarray}\nonumber
\langle\vec{P}_{f}\mid V_{MM}\mid\vec{P}^{\prime}_{f}\rangle
&=& \frac{a^4}{(2\pi)^3}\sum\limits_{n_{c},l_{c},m_{c}}
\int d\Omega_{r_f}
\int d\Omega_{r{f}^{\prime}}
e^{-i\vec{P}_f \cdotp a \hat{r}_{f}}
\left( \frac{\lambda}{\mu_{c}a}\right)^2 \frac{\mu_{c}^2g_{n}^2}{ a} \times \nonumber\\
&&\times \frac{Y^{l_f}_{m_f} (\hat{r_f})
Y^{l_f}_{m_f} (\hat{r_f}^\prime)}{E(\vec{P_{f}})-E_{nl}}
e^{i\vec{P}_f^{\prime}\cdotp a \hat{r}_{f}^{\prime}}\nonumber\\
&=& \frac{\lambda^2 a}{(2\pi)^3}\sum\limits_{n_{c},l_{c},m_{c}}
\int d\Omega_{r_f}
\int d\Omega_{r_{f}^{\prime}}
e^{-i\vec{P}_{f}\cdotp a \hat{r}_{f}}
Y^{l_f}_{m_f} (\hat{r_f})
Y^{l_f}_{m_f} (\hat{r_f}^\prime) \times \nonumber\\
&&\times \frac{g_{n}^2}{E(\vec{P_{f}})-E_{nl}}
e^{i\vec{P}_{f}^{\prime}\cdotp a \hat{r}_{f}^{\prime}}.\nonumber \\ \label{simplify}
\end{eqnarray}
Using the standard relation for spherical harmonics
\begin{equation}
\int d\Omega_{r_f}\, e^{-i\vec{P}_f\cdot\hat{r}_f a}\, Y_{m_f}^{l_f}(\hat{r}_f)
= (-i)^{l_f}\, 4\pi\, j_{l_f}(P_f a)\, Y_{m_f}^{l_f}(\hat{P}_f)
\end{equation}
in Eq.~({\ref{simplify}}), we get \\[11pt]
$\langle\vec{P}_{f}\mid V_{MM}\mid\vec{P}^{\prime}_{f}\rangle \; =$ \\[-13pt]
\begin{eqnarray}\nonumber
&=& \frac{\lambda^2 a}{(2\pi)^3}\sum\limits_{n_{c},l_{c},m_{c}}
(-i)^{l_f}\, 4\pi\, j_{l_f}(P_f a)\, Y_{m_f}^{l_f}(\hat{P}_f)\,
(i)^{l_f}\, 4\pi\, j_{l_f}(P_f^\prime a)\, Y_{m_f}^{l_f\,*}(\hat{P}^\prime_f)\,
\frac{g_{n}^2}{E(\vec{P_{f}})-E_{nl}}\nonumber\\
&=& \frac{\lambda^2 a}{(2\pi)^3}\sum\limits_{n_{c},l_{c},m_{c}}
(4\pi)^2\, j_{l_f}(P_f a)\, j_{l_f}(P_f^\prime a)\,
Y_{m_f}^{l_f}(\hat{P}_f)\, Y_{m_f}^{l_f\,*}(\hat{P}^\prime_f)\,
\frac{g_{n}^2}{E(\vec{P_{f}})-E_{nl}}\nonumber\\
&=& \frac{\lambda^2 a}{(2\pi)^3}\sum\limits_{n_{c},l_{c}}
(4\pi)^2\, j_{l_f}(P_f a)\, j_{l_f}(P_f^\prime a)\,
\frac{2l_f+1}{4\pi}\,\mathbb{P}_{l_f}(\hat{P_{f}}\cdot\hat{P}_{f}^{\prime})\,
\frac{g_{n}^2}{E(\vec{P_{f}})-E_{nl}}
\nonumber\\
&=& \frac{\lambda^2 a}{2\pi^2}
\sum\limits_{l_c=0}^{\infty}(2l_{f}+1)\,
\mathbb{P}_{l_{f}}(\hat{P_{f}}\cdot\hat{P}_{f}^{\prime})\,
j_{l_f}(P_{f}a)\, j_{l_f}(P_{f}^{\prime} a)\sum\limits_{n_c=0}^{\infty}
\frac{g_{n}^{2}}{E(\vec{P_{f}})-E_{nl}},
\end{eqnarray}
where $\mathbb{P}_l (x )$ is the Legendre polynomial.
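The plane-wave relation used above is easy to verify numerically, e.g.\ for
$m_f=0$ with $\vec{P}_f$ chosen along the $z$-axis, so that only the polar
integral survives. A short check (the value of $P_{f}a$ is illustrative):

```python
import numpy as np

Pa = 1.7  # illustrative value of P_f * a
x, w = np.polynomial.legendre.leggauss(60)  # nodes/weights in cos(theta)

def j0(z): return np.sin(z) / z
def j1(z): return np.sin(z) / z ** 2 - np.cos(z) / z

# l_f = 0:  Y_00 = 1/sqrt(4 pi); the azimuthal integral gives 2 pi.
lhs0 = 2 * np.pi / np.sqrt(4 * np.pi) * np.sum(w * np.exp(-1j * Pa * x))
rhs0 = 4 * np.pi * j0(Pa) / np.sqrt(4 * np.pi)

# l_f = 1:  Y_10 = sqrt(3/4 pi) cos(theta); an extra factor (-i) appears.
lhs1 = 2 * np.pi * np.sqrt(3 / (4 * np.pi)) * np.sum(w * x * np.exp(-1j * Pa * x))
rhs1 = -1j * 4 * np.pi * j1(Pa) * np.sqrt(3 / (4 * np.pi))

print(abs(lhs0 - rhs0), abs(lhs1 - rhs1))  # both at machine precision
```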
Using this potential, a simple closed-form expression for the $S$-matrix
can be obtained, if only one confined and one free channel are considered
(see Appendices A.1--A.5
of Ref.~\cite{IJTPGTNO11p179} for a detailed derivation), viz.\
\begin{equation}
S_{l_{f}}(E) = 1 - 2i\frac{2a\lambda^{2}\sum\limits_{n=0}^\infty
\dfrac{g_{n}^{2}}{E(\vec{P_{f}})-E_{nl}}\mu_{f}P_{f}j_{l_{f}}
(P_{f}a) h_{l_{f}}^{1}(P_{f}a)}
{1+2ia\lambda^{2}\sum\limits_{n=0}^\infty
\dfrac{g_{n}^{2}}{E(\vec{P_{f}})-E_{nl}}
\mu_{f}P_{f}j_{l_{f}}(P_{f}a) h_{l_{f}}^{1}(P_{f}a)}.
\label{smat}
\end{equation}
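To illustrate how resonance poles can be extracted from Eq.~(\ref{smat}) in
practice, the following sketch locates a zero of the denominator by Newton
iteration, for a single $S$-wave channel. All numerical values below
(coupling, radius, reduced mass, threshold, bare levels) are illustrative
assumptions, not the fits discussed later, and a nonrelativistic relation
$E=P_{f}^{2}/2\mu_{f}+E_{\mathrm{thr}}$ is assumed for the relative momentum:

```python
import cmath

# Illustrative (not fitted) parameters: coupling, radius (GeV^-1),
# reduced mass, threshold energy, bare levels, and S-wave-type couplings.
lam, a, mu, E_thr, omega = 0.2, 2.5, 0.9, 0.8, 0.19
E_bare = [1.0 + 2 * omega * n for n in range(10)]
g = [2.0 ** (-n) * (n + 1) ** 0.5 for n in range(10)]

def j0(z):   # spherical Bessel function j_0
    return cmath.sin(z) / z

def h0(z):   # spherical Hankel function h_0^(1)
    return -1j * cmath.exp(1j * z) / z

def denominator(E):
    """Denominator of the S-matrix, Eq. (smat), for l_f = 0."""
    P = cmath.sqrt(2 * mu * (E - E_thr))  # Im E < 0 continues to 2nd sheet
    loop = sum(gn ** 2 / (E - En) for gn, En in zip(g, E_bare))
    return 1 + 2j * a * lam ** 2 * loop * mu * P * j0(P * a) * h0(P * a)

# Newton iteration, starting just below the lowest bare level.
E = E_bare[0] - 0.05j
for _ in range(50):
    d = denominator(E)
    if abs(d) < 1e-12:
        break
    dp = (denominator(E + 1e-7) - denominator(E - 1e-7)) / 2e-7
    E -= d / dp

print("pole near", E)
```

The numerical derivative is adequate here because the denominator is analytic
in the region of interest.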
An exact solution for the $S$-matrix can be derived in the most general
multichannel case as well, resulting in a matrix expression with a
similar structure \cite{PRD80p094011}.
The full scattering amplitude can be depicted diagrammatically
as shown in Fig.~\ref{fig4}, where the shaded boxes represent
the effective meson-meson potential depicted in Fig.~\ref{fig1}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.45,angle=-90]{diagram2.eps}
\caption{The full scattering amplitude for two mesons,
with the shaded boxes given by Fig.~\ref{fig1}.}
\label{fig4}
\end{figure}
In order to find resonances in non-exotic meson-meson systems,
we search for zeros in the denominator of the $S$-matrix (Eq.~(\ref{smat})),
which correspond to poles in the complex-energy plane. Then we vary the
parameters $\lambda$ and $a$ to fit the experimental data for a particular
case. We also study the pole movements in the complex plane, as we change
the parameter $\lambda$, in order to get more
physical insight into the model, and to evaluate the role of meson loops
coupled to quark channels in the generation of resonances.
\section{Perturbative formalism}
\label{Perturbative}
The main purpose of this paper is to show that predictions
of resonance poles based on perturbative calculations in standard quark
models can be very misleading. In order to demonstrate this in a quantitative
way, we now construct a perturbative scheme for the formalism described in the
previous section.
As explained above, the $\lambda$ in our formalism
is the coupling of the meson-meson $\leftrightarrow\, q\bar{q}$ vertex.
The term corresponding to $\lambda^{2}$ thus represents
the lowest-order meson-meson interaction
(meson-meson $\rightarrow\, q\bar{q}\,\rightarrow\,$ meson-meson).
The position of a pole in a particular case of meson-meson scattering,
above threshold, can be expanded perturbatively in terms of $\lambda^{2}$ as
\begin{equation}
E_{m}^\xscrpt{pole}= E_{m} +\lambda^{2} E_{m}^\xscrpt{LO}
+\lambda^{4} E_{m}^\xscrpt{NLO}
+\lambda^{6} E_{m}^\xscrpt{NNLO}+\ldots .
\label{empole}
\end{equation}
The first term of this series corresponds to the confinement pole,
which is what we should get from the model if the coupling of the quark pair
to the meson channels vanished.
The second term is the leading-order term in $\lambda^{2}$.
The denominator of the $S$-matrix (Eq.~(\ref{smat})), written
up to leading order in $\lambda^{2}$
around the $m$th confinement pole, becomes
\begin{equation}
1+2ia\lambda^{2}\frac{g_{m}^{2}}
{E_{m}+\lambda^{2} E_{m}^\xscrpt{LO}-E_{m}}
\mu k j_l(ka) h_l^{1}(ka) = 0,
\end{equation}
which then yields
\begin{equation}
E_{m}^\xscrpt{LO}= - 2ia g_{m}^{2}\mu k j_l(ka) h_l^{1}(ka).
\end{equation}
So the pole position (Eq.~(\ref{empole})),
in lowest-order approximation, is
\begin{equation}
E_{m}^\xscrpt{pole}\approx E_{m}-2ia g_{m}^{2}\mu k j_l(ka) h_l^{1}(ka).
\end{equation}
We now expand the denominator of Eq.~(\ref{smat})
to higher order in $\lambda^{2}$.
In order to do so, we define
\begin{equation}
f(\lambda^{2}) = 2ia \mu k j_l(ka) h_l^{1}(ka),\label{ffn}
\end{equation}
such that Eq.~(\ref{smat}) becomes
\begin{equation}
S_l (E) = 1 -\frac{2\lambda^{2}f(\lambda^{2})
\sum\limits_{n=0}^\infty\dfrac{g_{n}^{2}}{E-E_{n}}}
{1+\lambda^{2}f(\lambda^{2})
\sum\limits_{n=0}^\infty\dfrac{g_{n}^{2}}{E-E_{n}}}.
\label{smat2}
\end{equation}
Now we expand the function $f$ around the energy $E=E_{m}$, i.e.,
around $\lambda = 0$, as
\begin{eqnarray}\nonumber
f(E_{m}^\xscrpt{pole}) &=& f(E_{m}) +\lambda^{2}
\frac{\partial f}{\partial\lambda^{2}}
\Big\vert_{\lambda=0}+
\frac{1}{2}\lambda^{4}\frac{\partial^{2} f}
{\partial(\lambda^{2})^{2}}\Big\vert_{\lambda=0}
+\frac{1}{6}\lambda^{6}
\frac{\partial^{3} f}{\partial(\lambda^{2})^{3}}\Big\vert_{\lambda=0}
\nonumber\\ &+&\ldots\nonumber\\
&=& f(E_{m})
+\lambda^{2}\frac{\partial f}{\partial E}\frac{\partial E}
{\partial\lambda^{2}}\Big\vert_{\lambda=0}
+\frac{1}{2}\lambda^{4}
\left[
\frac{\partial^{2} f}{\partial E^{2}}\left(\frac{\partial E}
{\partial\lambda^{2}}\right)^{2}\right.
\nonumber\\
&+&\left.
\frac{\partial f}{\partial E}\frac{\partial^{2} E}{\partial(\lambda^{2})^{2}}
\right]_{\lambda=0}+\frac{1}{6}\lambda^{6}\left[
\frac{\partial^{3} f}{\partial E^{3}}\left(\frac{\partial E}
{\partial\lambda^{2}}\right)^{3}
+3\frac{\partial^{2} f}{\partial E^{2}}\frac{\partial E}
{\partial\lambda^{2}}
\frac{\partial^{2} E}{\partial(\lambda^{2})^{2}}\right.
\nonumber\\
&+&\left.
\frac{\partial f}{\partial E}
\frac{\partial^{3} E}{\partial(\lambda^{2})^{3}}
\right]_{\lambda=0}+
\ldots .
\label{fexp}
\end{eqnarray}
From Eq.~(\ref{empole}), we have
$E_{m}^\xscrpt{pole}|_{\lambda=0}=E_m$, and
\begin{equation}\nonumber
\frac{\partial E_{m}^\xscrpt{pole}}{\partial\lambda^{2}}
\Big\vert_{\lambda=0}= E_{m}^\xscrpt{LO},\,\,
\frac{\partial^{2} E_{m}^\xscrpt{pole}}{\partial(\lambda^{2})^{2}}
\Big\vert_{\lambda=0}= 2E_{m}^\xscrpt{NLO},\,\,
\frac{\partial^{3} E_{m}^\xscrpt{pole}}{\partial(\lambda^{2})^{3}}
\Big\vert_{\lambda=0}= 6E_{m}^\xscrpt{NNLO}, \,\,\ldots .
\end{equation}
Using these relations in Eq.~(\ref{fexp}), we get
\begin{eqnarray}\label{fexp2}
f(E_{m}^\xscrpt{pole}) &=&
f(E_{m})+\lambda^{2} E_{m}^\xscrpt{LO}
\frac{\partial f}{\partial E}\Big\vert_{E=E_{m}}
\nonumber\\
&+&\frac{1}{2}\lambda^{4}
\left[\left( E_{m}^\xscrpt{LO}\right)^{2}
\frac{\partial^{2} f}{\partial E^{2}}\Big\vert_{E=E_{m}}
+ 2 E_{m}^\xscrpt{NLO}\frac{\partial f}{\partial E}\Big\vert_{E=E_{m}}
\right]\nonumber\\
&+&\frac{1}{6}\lambda^{6}\left[\left( E_{m}^\xscrpt{LO}\right)^{3}
\frac{\partial^{3} f}{\partial E^{3}}\Big\vert_{E=E_{m}}
+6 E_{m}^\xscrpt{NLO}E_{m}^\xscrpt{LO}
\frac{\partial^{2} f}{\partial E^{2}}\Big\vert_{E=E_{m}}
\right.
\nonumber\\
&+&\left. 6 E_{m}^\xscrpt{NNLO}\frac{\partial f}{\partial E}
\Big\vert_{E=E_{m}}\right] +\ldots .
\end{eqnarray}
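The combinatorial factors in this expansion can be checked symbolically:
for a concrete analytic test function, e.g.\ $f=\exp$, all derivatives at
$E_{m}$ coincide, and the Taylor coefficients of $f(E_{m}^\xscrpt{pole})$ in
$u=\lambda^{2}$ are fixed by the chain rule applied to Eq.~(\ref{empole}).
A sketch of such a check:

```python
import sympy as sp

u, Em, ELO, ENLO, ENNLO = sp.symbols('u E_m E_LO E_NLO E_NNLO')

# Pole-position series E(u) = Em + u*ELO + u^2*ENLO + u^3*ENNLO, u = lambda^2.
Epole = Em + u * ELO + u ** 2 * ENLO + u ** 3 * ENNLO

# Use f = exp as a concrete analytic test function, so that
# d^k f / dE^k at E = Em equals exp(Em) for every k.
series = sp.series(sp.exp(Epole), u, 0, 4).removeO().expand()
fm = sp.exp(Em)

# Chain-rule coefficients through O(u^3):
claim = {
    0: fm,
    1: ELO * fm,
    2: sp.Rational(1, 2) * (ELO ** 2 + 2 * ENLO) * fm,
    3: sp.Rational(1, 6) * (ELO ** 3 + 6 * ENLO * ELO + 6 * ENNLO) * fm,
}
ok = all(sp.simplify(series.coeff(u, k) - c) == 0 for k, c in claim.items())
print(ok)
```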
It remains to consider the other part of the denominator, namely
\begin{equation}
\lambda^{2}\sum\limits_{n=0}^\infty\frac{g_{n}^{2}}{E - E_{n}}=
\lambda^{2}\sum\limits_{n\neq m}^\infty\frac{g_{n}^{2}}{E - E_{n}}+
\lambda^{2}\frac{g_{m}^{2}}{E- E_{m}},
\end{equation}
which we expand in a series in $\lambda^{2}$,
around $\lambda=0$ (so at $E=E_{m}$),
in a similar way as the function $f$, to obtain
\begin{eqnarray}\nonumber
&&\lambda^{2}\sum\limits_{n=0}^\infty\frac{g_{n}^{2}}{E - E_{n}}=
\lambda^{2}\sum\limits_{n\ne m}\frac{g_{n}^{2}}{E_{m}- E_{n}}-
\lambda^{4} E_{m}^\xscrpt{LO}\sum\limits_{n\ne m}
\frac{g_{n}^{2}}{(E_{m}- E_{n})^{2}}
\\\nonumber
&&+\lambda^{6}\left\{ (E_{m}^\xscrpt{LO})^{2}\sum\limits_{n\ne m}
\frac{g_{n}^{2}}{(E_{m}- E_{n})^{3}}
-E_{m}^\xscrpt{NLO}\sum\limits_{n\ne m}
\frac{g_{n}^{2}}{(E_{m}- E_{n})^{2}}\right\}
\\\nonumber
&&+g_{m}^{2}\left[\frac{1}{E_{m}^\xscrpt{LO}}-\lambda^{2}
\frac{E_{m}^\xscrpt{NLO}}{(E_{m}^\xscrpt{LO})^{2}}
+\lambda^{4}\left\{\frac{(E_{m}^\xscrpt{NLO})^{2}}
{(E_{m}^\xscrpt{LO})^{3}}-
\frac{E_{m}^\xscrpt{NNLO}}{(E_{m}^\xscrpt{LO})^{2}}\right\}
\right]\\ \label{expn3}&&
+g_{m}^{2}\left[
-\lambda^{6}\left\{\frac{(E_{m}^\xscrpt{NLO})^{3}}
{(E_{m}^\xscrpt{LO})^{4}}-
2\frac{E_{m}^\xscrpt{NLO}\,E_{m}^\xscrpt{NNLO}}
{(E_{m}^\xscrpt{LO})^{3}}
+\frac{E_{m}^\xscrpt{N$^{3}$LO}}{(E_{m}^\xscrpt{LO})^{2}}
\right\}
\right] +\ldots.
\end{eqnarray}
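Both series in this expansion are geometric-type expansions in
$\lambda^{2}$, and can likewise be checked symbolically. A sketch for the
resonant ($n=m$) part, which is the less obvious of the two:

```python
import sympy as sp

u, gm, ELO, ENLO, ENNLO, EN3LO = sp.symbols('u g_m E_LO E_NLO E_NNLO E_N3LO')

# E - E_m = u*ELO + u^2*ENLO + u^3*ENNLO + u^4*EN3LO, with u = lambda^2,
# so the resonant term is u*gm^2 / (E - E_m).
resonant = u * gm ** 2 / (u * ELO + u ** 2 * ENLO
                          + u ** 3 * ENNLO + u ** 4 * EN3LO)
series = sp.series(resonant, u, 0, 4).removeO().expand()

claim = (gm ** 2 / ELO
         - u * gm ** 2 * ENLO / ELO ** 2
         + u ** 2 * gm ** 2 * (ENLO ** 2 / ELO ** 3 - ENNLO / ELO ** 2)
         - u ** 3 * gm ** 2 * (ENLO ** 3 / ELO ** 4
                               - 2 * ENLO * ENNLO / ELO ** 3
                               + EN3LO / ELO ** 2))
ok = sp.simplify(series - claim) == 0
print(ok)
```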
Upon multiplying Eqs.~(\ref{fexp2}) and (\ref{expn3}),
we get an expansion in $\lambda^{2}$ for the pole position, viz.\
\begin{eqnarray}\nonumber
&0& = 1+\frac{g_{m}^{2}}{E_{m}^\xscrpt{LO}}f(E_{m})\\\nonumber
&+&\lambda^{2}\left\{ \left(\sum\limits_{n\ne m}
\frac{g_{n}^{2}\left( E_{m}^\xscrpt{LO}\right)^{2} }{E_{m}- E_{n}}
- g_{m}^{2} E_{m}^\xscrpt{NLO}\right)
\frac{ f(E_{m})}{\left( E_{m}^\xscrpt{LO}\right)^{2} }
+ g_{m}^{2}\frac{\partial f}{\partial E}\Big\vert_{E=E_{m}}\right\}
\\\nonumber
&+&\lambda^{4}\left\{ \left(-\sum\limits_{n\ne m}\frac{g_{n}^{2}
\left( E_{m}^\xscrpt{LO}\right)^{4} }{(E_{m}- E_{n})^{2}}+ g_{m}^{2}
\left\{ \left( E_{m}^\xscrpt{NLO}\right)^{2}
-E_{m}^\xscrpt{LO}E_{m}^\xscrpt{NNLO}\right\}
\right)\frac{ f(E_{m})}{\left( E_{m}^\xscrpt{LO}\right)^{3} }\right.
\\\nonumber
&+&\left.
\left(\sum\limits_{n\ne m}\frac{g_{n}^{2}
E_{m}^\xscrpt{LO}}{E_{m}- E_{n}}\right)
\frac{\partial f}{\partial E}\Big\vert_{E=E_{m}}
+\frac{1}{2} g_{m}^{2}
E_{m}^\xscrpt{LO}\frac{\partial^{2} f}{\partial E^{2}}
\Big\vert_{E=E_{m}}\right\}
\\\nonumber
&+&\lambda^{6}\left\{ \left(\sum\limits_{n\ne m}\frac{g_{n}^{2}
\left( E_{m}^\xscrpt{LO}\right)^{6} }{(E_{m}- E_{n})^{3}}
-\sum\limits_{n\ne m}
\frac{g_{n}^{2}\left( E_{m}^\xscrpt{LO}\right)^{4}E_{m}^\xscrpt{NLO}}
{(E_{m}- E_{n})^{2}}\right.\right.
\\\nonumber
&-&\left.\left.g_{m}^{2}\left\{
\left( E_{m}^\xscrpt{NLO}\right)^{3}
-2E_{m}^\xscrpt{NLO}E_{m}^\xscrpt{NNLO}E_{m}^\xscrpt{LO}
+E_{m}^\xscrpt{N$^{3}$LO}
\left( E_{m}^\xscrpt{LO}\right)^{2}\right\}\right)
\frac{ f(E_{m})}{\left( E_{m}^\xscrpt{LO}\right)^{4} }\right.
\\\nonumber
&+&\left( -\sum\limits_{n\ne m}\frac{g_{n}^{2}\left( E_{m}^\xscrpt{LO}
\right)^{2} }{(E_{m}- E_{n})^{2}}+ \sum\limits_{n\ne m}
\frac{g_{n}^{2} E_{m}^\xscrpt{NLO}}{E_{m}- E_{n}}\right)
\frac{\partial f}{\partial E}\Big\vert_{E=E_{m}}\\\nonumber
&+&\left.
\frac{1}{2}\left(\sum\limits_{n\ne m}\frac{g_{n}^{2}
\left( E_{m}^\xscrpt{LO}\right)^{2} }
{E_{m}- E_{n}}+ g_{m}^{2} E_{m}^\xscrpt{NLO}\right)
\frac{\partial^{2} f}{\partial E^{2}}\Big\vert_{E=E_{m}}\right.
\\\nonumber
&+&\left.
\frac{1}{6}g_{m}^{2}\left( E_{m}^\xscrpt{LO}\right)^{2}
\frac{\partial^{3} f}{\partial E^{3}}\Big\vert_{E=E_{m}}\right\}.
\end{eqnarray}
Solving this equation order by order in $\lambda^{2}$,
we obtain the expressions for e.g.\
the pole position to lowest order, next-to-lowest order,
and next-to-next-to-lowest order as
\begin{eqnarray}
&&E_{m}^\xscrpt{LO}= - g_{m}^{2} f(E_{m}),\label{one}\\
&&E_{m}^\xscrpt{NLO}= g_{m}^{4} f(E_{m})\frac{\partial f}{\partial E}
\Big\vert_{E=E_{m}}+ g_{m}^{2} f^{2}(E_{m})\sum\limits_{n\ne m}
\frac{g_{n}^{2} }{E_{m}- E_{n}},\label{two}
\end{eqnarray}
\begin{eqnarray}\nonumber
&&E_{m}^\xscrpt{NNLO}= g_{m}^{4} f^{3}(E_{m})\sum\limits_{n\ne m}
\frac{g_{n}^{2} }{(E_{m}- E_{n})^{2}}- g_{m}^{2} f^{3}(E_{m})\times
\\\nonumber &&\left(\sum\limits_{n\ne m}
\frac{g_{n}^{2} }{E_{m}- E_{n}}\right)^{2} -3 g_{m}^{4} f^{2}(E_{m})
\frac{\partial f}{\partial E}\Big\vert_{E=E_{m}}
\left(\sum\limits_{n\ne m}\frac{g_{n}^{2} }{E_{m}- E_{n}}\right)\\
&&-\frac{1}{2} g_{m}^{6} f^{2}(E_{m})\frac{\partial^{2} f}{\partial E^{2}}
\Big\vert_{E=E_{m}}- g_{m}^{6} f(E_{m})
\left(\frac{\partial f}{\partial E}\Big\vert_{E=E_{m}}\right)^{2}.
\label{three}
\end{eqnarray}
Similarly, one can obtain the expressions for even higher-order contributions.
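The coefficients (\ref{one})--(\ref{three}) can be tested numerically against
the exact zero of the denominator in Eq.~(\ref{smat2}), for a toy analytic
$f$ and toy couplings (all values below are illustrative assumptions). For
small $\lambda^{2}$ the perturbative sum should approach the exact pole with
an error of order $\lambda^{8}$:

```python
import cmath

# Toy input (illustrative): bare levels, couplings, an entire function f.
E_bare = [1.0 + n for n in range(5)]
g = [2.0 ** (-n) for n in range(5)]
f  = lambda E: 0.3 * cmath.exp(0.2j * E)
f1 = lambda E: 0.2j * 0.3 * cmath.exp(0.2j * E)          # f'
f2 = lambda E: (0.2j) ** 2 * 0.3 * cmath.exp(0.2j * E)   # f''

m, u = 0, 0.01  # expand around the lowest level; u = lambda^2
Em, gm = E_bare[m], g[m]
S1 = sum(g[n] ** 2 / (Em - E_bare[n]) for n in range(5) if n != m)
S2 = sum(g[n] ** 2 / (Em - E_bare[n]) ** 2 for n in range(5) if n != m)

# Eqs. (one)-(three):
ELO = -gm ** 2 * f(Em)
ENLO = gm ** 4 * f(Em) * f1(Em) + gm ** 2 * f(Em) ** 2 * S1
ENNLO = (gm ** 4 * f(Em) ** 3 * S2
         - gm ** 2 * f(Em) ** 3 * S1 ** 2
         - 3 * gm ** 4 * f(Em) ** 2 * f1(Em) * S1
         - 0.5 * gm ** 6 * f(Em) ** 2 * f2(Em)
         - gm ** 6 * f(Em) * f1(Em) ** 2)
E_pert = Em + u * ELO + u ** 2 * ENLO + u ** 3 * ENNLO

# Exact pole: zero of 1 + u f(E) sum_n g_n^2/(E - E_n), by Newton iteration.
def D(E):
    return 1 + u * f(E) * sum(g[n] ** 2 / (E - E_bare[n]) for n in range(5))

E = Em + u * ELO  # start from the LO prediction
for _ in range(60):
    E -= D(E) / ((D(E + 1e-8) - D(E - 1e-8)) / 2e-8)

print(abs(E - E_pert))  # should be tiny, of order u^4
```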
\section{Results and discussion}
To test the validity of the perturbative expansion, we now choose a few
concrete examples of meson-meson systems, namely $K\pi$ in $P$ wave,
$D\bar{D}$ in $P$ wave, and $DK$ in $S$ wave.
First we compute the scattering poles
in these systems by using the exact formalism,
explained in Sect.~\ref{Formalism}.
In particular, we study the $S$-matrix (Eq.~(\ref{smat})) pole positions
in the complex-energy plane, as a function of the coupling $\lambda$.
The resulting pole trajectories behave
as expected from general considerations:
\begin{itemize}
\item{
The unperturbed, or bare,
quark-antiquark spectrum is given by Eq.~(\ref{enl}).
For small coupling to the scattering sector, we must find
resonance poles close to the levels of this spectrum.}
\item{
For larger coupling, the resonances are expected
to acquire larger widths,
so the pole positions must get larger
(negative) imaginary parts.
However, associated with larger widths are, in general, larger mass shifts.
As a consequence, also the real parts of the pole positions
will deviate substantially from the levels of the
bare $q\bar{q}$ spectrum.}
\item{
Sometimes, a further increased coupling may lead to a sufficiently large
mass shift so as to push the pole below the scattering threshold. For such a
situation, we expect a bound state.
Since a bound state has zero width, the pole should eventually end up on
the real-energy axis.}
\end{itemize}
Next, we study the same cases, but now
using the perturbative formalism derived in
Sect.~\ref{Perturbative}, up to
fourth order in $\lambda^2$.
This way we can compare exact and perturbative results
for the real and imaginary mass shifts due to meson loops.
The latter quantities are related to predictions for
the central resonance masses and resonance widths.
We will find that, for moderate to large couplings, the perturbative results
strongly deviate from and do not converge towards the exact ones.
In particular, the expected behavior of poles
below the scattering threshold, namely to move towards or along the
real-energy axis, cannot be reproduced at all by the studied perturbative
expansions.
In all cases we will employ the quark masses
$m_{n}\equiv m_{u}=m_{d}=0.406$~GeV, $m_{s}=0.508$~GeV,
$m_{c}=1.562$~GeV, as well as the universal oscillator frequency
$\omega =0.19$~GeV, all determined in the model of Ref.~\cite{PRD27p1527}.
The values of the parameters $a$ and $\lambda$ are given separately for each
case in the following discussion.
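With these quark masses and $\omega$, the bare harmonic-oscillator levels of
Eq.~(\ref{enl}), i.e., $E_{nl}=\omega(2n+l+3/2)+m_{q_{1}}+m_{q_{2}}$,
reproduce the confinement energies quoted below. A one-line check:

```python
omega = 0.19                          # universal oscillator frequency (GeV)
m_n, m_s, m_c = 0.406, 0.508, 1.562   # quark masses (GeV)

def E_bare(n, l, m1, m2):
    """Harmonic-oscillator confinement level of Eq. (enl)."""
    return omega * (2 * n + l + 1.5) + m1 + m2

print(E_bare(0, 0, m_n, m_s))  # n s-bar ground state (K*(892) case): 1.199
print(E_bare(1, 0, m_c, m_c))  # first radial c c-bar excitation:     3.789
print(E_bare(0, 1, m_c, m_s))  # P-wave c s-bar (D_s0(2317) case):    2.545
```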
The parameter $a$ in Eq.~(\ref{vt})
describes the average distance at which quark-pair
creation or annihilation takes place,
leading to the two-meson decay of a meson.
For the case of $K\pi$, which is a nonstrange-strange
flavor combination, we choose here $a=2.534$ GeV$^{-1}$.
This is smaller than the value $a=3.2$ GeV$^{-1}$,
which would be used in a multichannel fit to several mesons.
The reason for this discrepancy is that one meson-meson channel,
namely $K\pi$, has to mimic the effect of a multichannel treatment.
In order to nonetheless obtain the $K^{\ast}(892)$ pole
at a reasonable position, we have to adjust the parameter $a$.
In the $D\bar{D}$ and $DK$ cases, we use here values
close to the ones used in a multichannel calculation \cite{PRD27p1527},
namely $a=1.72$~GeV$^{-1}$ and $a=2.5$~GeV$^{-1}$, respectively.
The price we pay is that, in the $D\bar{D}$ case, the $\psi(2S)$
pole does not come out at 3.686~GeV, but about 40--50~MeV higher.
For the parameter $\lambda$, which is the universal overall
three-meson coupling constant, we could have used, after scaling,
the same value in all three cases.
However, scaling was not carried out \cite{PRL91p012003} in the $DK$
case, as a result of which the pole now greatly overshoots
the $D_{s0}(2317)$ position for $\lambda =1$.
With the correct scaling, it would end up roughly 20~MeV too high.
One might argue that only one value of the coupling $\lambda$
describes the physical situation, so that other values are not relevant.
However, since analyticity has proven in the past
to be a powerful tool for constructing scattering amplitudes,
the trajectories of their poles also provide
a strong test of the correctness of their dependence
on the other parameters.
In the following, we will show that perturbative expansions,
even to higher orders, only have a very limited range of validity,
and do not cover the realistic case of large couplings
\cite{ZPC30p615,PRD59p074001,ARXIV10080466}
in strong interactions.
\subsection{$P$-wave $K\pi$ scattering}
\label{Kpiscattering}
The $K^{\ast}(892)$ resonance is well described by a Breit-Wigner
resonance in $P$-wave $K\pi$ scattering,
with central mass and resonance width of about 892 MeV
and 50 MeV, respectively.
Hence, we expect a pole in the $S$ matrix of Eq.~(\ref{smat})
for an $S$-wave nonstrange-strange quark-antiquark system,
coupled to a $P$-wave kaon-pion meson-meson system.
For the couplings $g_{n}$ we find
\cite{ZPC21p291} in this case
\begin{equation}
g_{n} = 2^{-n}\left(\frac{2n+3}{3}\right)^{1/2}.
\label{VPP}
\end{equation}
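The couplings (\ref{VPP}) fall off rapidly with the radial quantum number,
so the sums over $n$ in Eq.~(\ref{smat}) converge quickly and can be
truncated in a numerical treatment. A quick look at the first few values:

```python
# Couplings of Eq. (VPP): g_n = 2^{-n} * sqrt((2n+3)/3).
g = [2.0 ** (-n) * ((2 * n + 3) / 3.0) ** 0.5 for n in range(8)]
for n, gn in enumerate(g):
    print(n, gn, gn ** 2)
# g_n^2 falls roughly geometrically (about a factor of four per level),
# so the spectral sum can be truncated after a modest number of terms.
```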
Scattering poles are obtained by studying the zeros
of the denominator in the expression of Eq.~(\ref{smat}).
In Fig.~\ref{figKpiP}(a), we depict
the $S$-matrix pole positions
for a range of $\lambda$ values
varying from 0 to just over 2.
In the limit of vanishing coupling,
one expects to find the poles at the bare masses
of the quark-antiquark system, as given by Eq.~(\ref{enl}).
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.5]{kpi.eps}
\caption{\small
The $P$-wave $K\pi$ resonance pole positions
in the complex-energy plane,
corresponding to the second Riemann sheet,
under variation of the coupling $\lambda$.
The dashed curves in all the plots for N$^{n}$LO ($n=0$, $\dots$, 4)
are the same as the ones shown for N$^{\infty}$LO (labeled (a)).
The solid curves are the results obtained
from the perturbative approximations, viz.\
(b) leading (N$^{0}$LO, Born) term,
(c) next-to-leading (N$^{1}$LO),
(d) next-to-next-to-leading (N$^{2}$LO),
(e) (next-to)$^{3}$-leading (N$^{3}$LO)
and (f) (next-to)$^{4}$-leading (N$^{4}$LO) orders, respectively.}
\label{figKpiP}
\end{figure}
We obtain from Eq.~(\ref{enl}) the value $E_{00}=1.199$ GeV for the
ground-state bare mass, which indeed corresponds to the limit of
vanishing $\lambda$ along the dashed curve in Fig.~\ref{figKpiP}(a).
For larger couplings, we observe that the imaginary part
of the pole position vanishes at the $K\pi$ threshold.
This was to be expected, since a large coupling results in a bound
state below the $K\pi$ threshold, which of course has a zero width.
The shape of the pole trajectory near the $K\pi$ threshold
is in accordance with theory for poles in $P$-wave scattering and also
in higher partial waves \cite{Taylor,LNP211p331}.
For $S$-wave scattering the pole behavior is different,
as we will see in Sect.~\ref{DKscattering},
but again in agreement with theory
\cite{Taylor,LNP211p331}.
The value $\lambda=1$ corresponds to the physical pole,
as it roughly reproduces the characteristics of the $K^{\ast}(892)$
resonance. In the present simplified model, the pole comes out
at $(0.972-i\,0.026)$~GeV, as shown in Fig.~\ref{figKpiP}(a).
The coefficients of the perturbative expansion
(Eq.~(\ref{empole})) are collected
in Table~\ref{Kpicoefficients},
for the case of $P$-wave $K\pi$ scattering.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{||l|r||}
\hline\hline & \\ [-5pt]
coefficient & value (GeV)\\ [5pt]
\hline & \\ [-5pt]
$E_{0}$ & (1.199, 0.)\\
$E_{0}^{\xscrpt{N}^{0}\xscrpt{LO}}$ & (-0.249080686,-0.0878366188)\\
$E_{0}^{\xscrpt{N}^{1}\xscrpt{LO}}$ & (-0.0435913117,0.0471697828)\\
$E_{0}^{\xscrpt{N}^{2}\xscrpt{LO}}$ & (0.0631440181,0.0973648258)\\
$E_{0}^{\xscrpt{N}^{3}\xscrpt{LO}}$ & (0.0869057944,-0.0785515047)\\
$E_{0}^{\xscrpt{N}^{4}\xscrpt{LO}}$ & (-0.067527691,-0.0632886834)\\
\hline\hline
\end{tabular}
\end{center}
\caption[]{Coefficients of the perturbative expansion
given in Eq.~(\ref{empole}), concerning the pole positions of the
ground-state pole in $P$-wave $K\pi$ scattering.}
\label{Kpicoefficients}
\end{table}
In Figs.~\ref{figKpiP}(b--f) we depict
the perturbative pole trajectories for the bare nonstrange-strange
$q\bar{q}$ state at 1.199~GeV. Shown are the curves
for the lowest-order (Born) term
($E_{0}^{\xscrpt{N}^{0}\xscrpt{LO}}$)
and for the next few higher-order terms
($E_{0}^{\xscrpt{N}^{1}\xscrpt{LO}}$, \ldots), respectively, up to
fourth order in $\lambda^2$.
We find that the Born term gives satisfactory
pole positions for overall couplings up to $\lambda\approx0.3$.
At each higher order, the perturbative pole positions,
i.e., the central masses and the widths
of the $K^{\ast}(892)$ resonance, are better and better determined,
up to $\lambda\approx0.75$ for the fourth-order approximation.
However, above that value things go badly wrong, and all approximations
completely fail to reproduce the physical pole at $\lambda=1$.
So we are forced to conclude that perturbation theory is unreliable
for describing the $K^{\ast}(892)$ resonance.
Moreover, we should add that these higher-order perturbative calculations
are much more tedious than just finding the exact solution for the coupled
quark-antiquark and meson-meson system.
\subsection{$P$-wave $D\bar{D}$ scattering}
\label{DDscattering}
Let us next consider the $D\bar{D}$ system,
which has been studied already a long time ago \cite{PRD21p772},
using the model described in Sect.~\ref{Formalism}.
In Ref.~\cite{PRD21p772} it was shown that the $P$-wave $D\bar{D}$ channel,
together with higher open-charm channels, can transform the bare vector
charmonium spectrum into the physical one. In particular, the pole stemming
from the first radial excitation comes out very close to the
$\psi(2S)$ state at 3.686~GeV, which turns out to contain a significant
$D\bar{D}$ component, besides $c\bar{c}$ of course.
The couplings $g_{n}$ in this case are
again given by the vector $\leftrightarrow$ pseudoscalar-pseudo\-scalar
vertex, for which we use the same expression as in Eq.~(\ref{VPP}).
The parameter $a$ is now taken to be 0.34~fm, i.e., 1.72~GeV$^{-1}$.
Using these inputs, the $S$-matrix (Eq.~(\ref{smat}))
is calculated, and we search for poles on the second Riemann sheet.
We present the results of our calculation in Fig.~\ref{figDD}(a),
which depicts the complex-energy plane around the mass
of the $\psi(2S)$.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.5]{DDbar.eps}
\caption{\small
$P$-wave $D\bar{D}$ pole positions
in the complex-energy plane
(corresponding to the first and second Riemann sheets),
under variation of the coupling $\lambda$.
The dashed curves in all the plots for N$^{n}$LO ($n=0$, $\dots$, 4)
are the same as the one for N$^{\infty}$LO (labeled (a)).
The solid curves represent the perturbative results:
(b) leading (N$^{0}$LO, Born) term,
(c) next-to-leading (N$^{1}$LO),
(d) next-to-next-to-leading (N$^{2}$LO),
(e) (next-to)$^{3}$-leading (N$^{3}$LO),
and (f) (next-to)$^{4}$-leading (N$^{4}$LO) orders, respectively.}
\label{figDD}
\end{figure}
The dashed line in Fig.~\ref{figDD}(a)
corresponds to the movement of the pole in the complex plane
between the $D\bar{D}$ threshold
and the first radial excitation of the $J\!/\!\psi$,
as $\lambda$ is varied between the limiting values 0 and 1.
As expected, when the coupling is very small,
we find a pole close to the first radial excitation
of the confined $c\bar{c}$ spectrum of the model,
i.e., near $E_{1} =\omega (2+3/2 )+2m_{c}=$ 3.789 GeV (Eq.~(\ref{enl})).
As $\lambda$ is increased to 0.1,
the real part of the pole becomes $\Re\mbox{e}(E)\simeq3.77$~GeV.
Finally, for $\lambda\simeq1$,
the pole is found below the $D\bar{D}$ threshold, very close to 3.7 GeV,
which should correspond to the physical $\psi(2S)$.
Next we show the pole positions obtained in
the perturbative expansion, viz.\ from Eqs.~(\ref{one})--(\ref{three})
and similar expressions up to fourth order in $\lambda^2$.
The coefficients of Eq.~(\ref{empole})
are collected in Table~\ref{DDcoefficients}.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{||l|r||}
\hline\hline & \\ [-5pt]
coefficient & value (GeV)\\ [5pt]
\hline & \\ [-5pt]
$E_{0}$ & (3.789, 0.)\\
$E_{0}^{\xscrpt{N}^{0}\xscrpt{LO}}$ & (-0.291265968,-0.0139626856)\\
$E_{0}^{\xscrpt{N}^{1}\xscrpt{LO}}$ & (0.587237403,0.157827196)\\
$E_{0}^{\xscrpt{N}^{2}\xscrpt{LO}}$ & (-0.763203829,-0.81585325)\\
$E_{0}^{\xscrpt{N}^{3}\xscrpt{LO}}$ & (-0.589315239,2.48140259)\\
$E_{0}^{\xscrpt{N}^{4}\xscrpt{LO}}$ & (7.66667227,0.981322456)\\
\hline\hline
\end{tabular}
\end{center}
\caption[]{Coefficients of the perturbative expansion
(Eq.~(\ref{empole})) for the first radially excited pole in
$P$-wave $D\bar{D}$ scattering.}
\label{DDcoefficients}
\end{table}
Figure~\ref{figDD}(b) shows that the pole position found
in the leading-order approximation agrees
with the full calculation only for very small values of $\lambda$,
but as the coupling increases,
the approximate pole starts to deviate strongly from the exact one
shown in Fig.~\ref{figDD}(a).
For example, at $\lambda=0.5$,
the first-order pole comes out below threshold but with a large imaginary
part, which is obviously unphysical. For the higher-order approximations,
the results are even worse, with the pole moving into the upper half plane,
or extremely deep down in the lower half for the N$^2$LO case. It becomes
evident that no perturbative approximation will produce anything even
resembling a bound-state pole for $\lambda\sim1$.
\subsection{$S$-wave $DK$ scattering}
\label{DKscattering}
Finally, we study the case of $S$-wave $DK$ scattering,
taking
\begin{equation}
g_{n} = 2^{-n}\left( n+1 \right)^{1/2}.
\end{equation}
Now, as one can see in Fig.~\ref{figDK}(a),
the shape of the pole trajectory near the $DK$ threshold
is very different from the two $P$-wave cases.
For increasing $\lambda$, the pole approaches
the real-energy axis below threshold,
moves along the axis towards threshold as a virtual bound state,
and then becomes a bound state, moving finally to lower and lower
energies.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.5]{Dk.eps}
\caption{\small
$S$-wave $DK$ pole positions in the complex-energy plane
(corresponding to the first and second Riemann sheets),
under variation of the coupling $\lambda$.
The dashed curves in all the plots for N$^{n}$LO ($n=0$, $\dots$, 4)
are the same as the one for N$^{\infty}$LO (labeled (a)).
The solid curves represent the perturbative results:
(b) leading (N$^{0}$LO, Born) term,
(c) next-to-leading (N$^{1}$LO),
(d) next-to-next-to-leading (N$^{2}$LO),
(e) (next-to)$^{3}$ (N$^{3}$LO),
and (f) (next-to)$^{4}$-leading (N$^{4}$LO) orders, respectively.}
\label{figDK}
\end{figure}
There is a one-to-one relation of this complex-energy pole trajectory
to the equivalent one in the complex-momentum plane. Thus, a
virtual bound state moving towards threshold corresponds
to a momentum pole moving upwards along the negative imaginary axis,
passing through the origin when the virtual bound state becomes a
true bound state, at threshold. In the present case, the pole is still
on the negative imaginary axis for $\lambda=0.4$,
but already on the positive one for $\lambda=0.5$.
This phenomenon, which happens exclusively for $S$-wave scattering,
as can be seen from the effective-range expansion,
is well described in Refs.~\cite{Taylor,LNP211p331}.
For $\lambda\approx0.6$, the bound-state pole reproduces the
$D_{s0}(2317)$ mass.
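To make the threshold correspondence explicit, note that in a schematic nonrelativistic reduction (the model's exact kinematics may differ), the near-threshold energy and relative momentum are related by
\begin{equation*}
E - E_{\mathrm{thr}} \;=\; \frac{k^{2}}{2\mu}\,, \qquad
k = i\kappa \;\Longrightarrow\; E - E_{\mathrm{thr}} = -\frac{\kappa^{2}}{2\mu}\,,
\end{equation*}
with $\mu$ the reduced $DK$ mass, so that a pole on the positive imaginary momentum axis ($\kappa>0$, first sheet) is a bound state, while one on the negative imaginary axis ($\kappa<0$, second sheet) is a virtual bound state, both lying below threshold.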
The coefficients of Eq.~(\ref{empole})
for the case of $S$-wave $DK$ scattering
are collected in Table~\ref{DKcoefficients}.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{||l|r||}
\hline\hline & \\ [-5pt]
coefficient & value (GeV)\\ [5pt]
\hline & \\ [-5pt]
$E_{0}$ & (2.545,0.)\\
$E_{0}^{\xscrpt{N}^{0}\xscrpt{LO}}$ & (-0.445872986,-0.67333385)\\
$E_{0}^{\xscrpt{N}^{1}\xscrpt{LO}}$ & (-1.36316635,-1.84200144)\\
$E_{0}^{\xscrpt{N}^{2}\xscrpt{LO}}$ & (-9.95402765,-8.57593239)\\
$E_{0}^{\xscrpt{N}^{3}\xscrpt{LO}}$ & (-68.4326606,-43.6873369)\\
$E_{0}^{\xscrpt{N}^{4}\xscrpt{LO}}$ & (-299.284654,-138.032657)\\
\hline\hline
\end{tabular}
\end{center}
\caption[]{Coefficients of the perturbative expansion
(Eq.~(\ref{empole})) for the ground-state pole
in $S$-wave $DK$ scattering.}
\label{DKcoefficients}
\end{table}
One sees at a glance that these coefficients
do not promise any kind of convergence.
Indeed, upon inspecting Figs.~\ref{figDK}(b--f),
one notices that the perturbative pole positions agree with the
exact ones only for small values of $\lambda$.
However, for $\lambda\ge 0.3$, the discrepancies grow rapidly, and no
significant improvement is observed for higher orders of perturbation
theory.
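The divergence can be checked directly from the tabulated coefficients. The sketch below assumes that the $n$-th correction enters Eq.~(\ref{empole}) with weight $\lambda^{2(n+1)}$ (an assumption; the exact form of the equation is not reproduced in this excerpt). Beyond the first correction, the successive terms grow instead of shrinking:

```python
import numpy as np

# Coefficients for S-wave DK scattering, taken from the table above (GeV).
coeffs = np.array([
    -0.445872986 - 0.67333385j,    # N^0 LO
    -1.36316635  - 1.84200144j,    # N^1 LO
    -9.95402765  - 8.57593239j,    # N^2 LO
    -68.4326606  - 43.6873369j,    # N^3 LO
    -299.284654  - 138.032657j,    # N^4 LO
])

def correction_terms(lam):
    """Magnitudes of the successive corrections, ASSUMING the n-th term
    enters the expansion with weight lam**(2*(n+1))."""
    n = np.arange(len(coeffs))
    return np.abs(coeffs) * lam ** (2 * (n + 1))

t = correction_terms(0.5)   # beyond the first correction, the terms grow
```

At $\lambda=0.5$ the term magnitudes increase monotonically from the N$^1$LO contribution onwards, which is the numerical counterpart of the statement that these coefficients "do not promise any kind of convergence".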
\section{Summary and conclusions}
We have studied the discrepancies between perturbative estimates
for resonance pole positions and the exact ones, in the context of
a simple soluble model for hadronic decay of a meson.
In none of the considered cases were satisfactory results obtained
with the perturbative method, nor was any significant
improvement found for increasing orders of perturbation theory.
In particular, for bound states below the lowest strong-decay
threshold, no perturbative approximation produced anything like
a pole close to the real-energy axis. But also in the case of a
normal and not even very broad resonance, namely the $K^\ast(892)$,
the perturbative approach failed completely.
These results should be a warning for quark-model builders, for
two reasons. First, the large real mass shifts found here are, as a
consequence of analyticity, inseparably connected to the generation of
the physical hadronic widths, as demonstrated here for the $K^\ast(892)$,
but shown already many years ago for a variety of mesons \cite{PRD27p1527},
and confirmed in several more recent papers referred to above. Therefore,
any spectroscopic conclusions based on single-channel, ``quenched''
quark models should be taken with a great deal of caution. The second
reason is that even those quark models which pay some attention
to strong decay, usually do this by employing perturbative methods.
The results presented here make it clear that a completely
non-perturbative treatment of hadronic resonances and bound states
is required for a realistic description.
\section*{Acknowledgments}
This work was supported in part by the {\it Funda\c{c}\~{a}o para a
Ci\^{e}ncia e a Tecnologia} \/of the {\it Minist\'{e}rio da Ci\^{e}ncia,
Tecnologia e Ensino Superior} \/of Portugal, under contract
CERN/\-FP/\-109307/\-2009.
\newcommand{\pubprt}[4]{{#1 {\bf #2}, #3 (#4)}}
\newcommand{\ertbid}[4]{[Erratum-ibid.~{#1 {\bf #2}, #3 (#4)}]}
2013 Minnesota Statutes
Fraudulent State Claims
Chapter 15C
Section 15C.15
This is an historical version of this statute chapter.
15C.15 DEPOSIT OF STATE FUNDS; FALSE CLAIMS ACCOUNT.
Subdivision 1.Deposit of funds.
The net proceeds received by the state in an action under this chapter, after distributions made to private plaintiffs and as otherwise required by federal law, must be deposited in the state treasury and credited as follows:
(1) the portion of net proceeds equal to the amount of the actual damages that the state sustains because of an act specified in section 15C.02 must be credited to the fund that sustained the damages;
(2) the portion of net proceeds equal to the additional recovery of federal money authorized by United States Code, title 42, section 1396h, for a recovery under this chapter, as determined by the commissioner of management and budget, must be credited to the false claims account under subdivision 2, provided that the amount credited may not exceed $1,000,000 in a fiscal year; and
(3) the remainder of the net proceeds must be credited to the general fund.
Subd. 2.False claims account.
A false claims account is established in the special revenue fund in the state treasury. The commissioner of management and budget may enter into interagency agreements to deposit up to $2,055,000 for litigation and related expenses under this chapter. Money in the account deposited through interagency agreement or under subdivision 1 is annually appropriated to the attorney general for purposes of this chapter.
2009 c 101 art 2 s 38,109
Copyright © 2013 by the Revisor of Statutes, State of Minnesota. All rights reserved.
Q: Can't deploy project on Wildfly with standalone Keycloak I am starting standalone Keycloak on Wildfly.
Next I install the keycloak-wildfly-adapter in my standalone Keycloak folder (Wildfly is included inside) with this command:
cd bin
./jboss-cli.sh --file=adapter-install-offline.cli
Then I want to deploy my app to the server with this command:
mvn wildfly:deploy
and I get this ERROR:
ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 50) MSC000001: Failed to start service jboss.undertow.deployment.default-server.default-host./ldap-portal: org.jboss.msc.service.StartException in service jboss.undertow.deployment.default-server.default-host./ldap-portal: java.lang.NoSuchMethodError: org.keycloak.representations.adapters.config.AdapterConfig.getMinTimeBetweenJwksRequests()I
at org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:85)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at org.jboss.threads.JBossThread.run(JBossThread.java:320)
Caused by: java.lang.NoSuchMethodError: org.keycloak.representations.adapters.config.AdapterConfig.getMinTimeBetweenJwksRequests()I
at org.keycloak.adapters.KeycloakDeploymentBuilder.internalBuild(KeycloakDeploymentBuilder.java:107)
at org.keycloak.adapters.KeycloakDeploymentBuilder.build(KeycloakDeploymentBuilder.java:135)
at org.keycloak.adapters.undertow.KeycloakServletExtension.handleDeployment(KeycloakServletExtension.java:135)
at io.undertow.servlet.core.DeploymentManagerImpl.handleExtensions(DeploymentManagerImpl.java:252)
at io.undertow.servlet.core.DeploymentManagerImpl.deploy(DeploymentManagerImpl.java:152)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentService.startContext(UndertowDeploymentService.java:100)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:82)
... 6 more
16:11:31,347 ERROR [org.jboss.as.controller.management-operation] (management-handler-thread - 1) WFLYCTL0013: Operation ("deploy") failed - address: ([("deployment" => "ldap-portal.war")]) - failure description: {"WFLYCTL0080: Failed services" => {"jboss.undertow.deployment.default-server.default-host./ldap-portal" => "org.jboss.msc.service.StartException in service jboss.undertow.deployment.default-server.default-host./ldap-portal: java.lang.NoSuchMethodError: org.keycloak.representations.adapters.config.AdapterConfig.getMinTimeBetweenJwksRequests()I
Caused by: java.lang.NoSuchMethodError: org.keycloak.representations.adapters.config.AdapterConfig.getMinTimeBetweenJwksRequests()I"}}
16:11:31,349 ERROR [org.jboss.as.server] (management-handler-thread - 1) WFLYSRV0021: Deploy of deployment "ldap-portal.war" was rolled back with the following failure message:
{"WFLYCTL0080: Failed services" => {"jboss.undertow.deployment.default-server.default-host./ldap-portal" => "org.jboss.msc.service.StartException in service jboss.undertow.deployment.default-server.default-host./ldap-portal: java.lang.NoSuchMethodError: org.keycloak.representations.adapters.config.AdapterConfig.getMinTimeBetweenJwksRequests()I
Caused by: java.lang.NoSuchMethodError: org.keycloak.representations.adapters.config.AdapterConfig.getMinTimeBetweenJwksRequests()I"}}
Can someone explain why this happens and how to fix it?
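A: A java.lang.NoSuchMethodError on org.keycloak.representations.adapters.config.AdapterConfig.getMinTimeBetweenJwksRequests() almost always means two different versions of the Keycloak adapter classes meet at runtime: the code calling the method was compiled against a newer AdapterConfig than the one actually loaded. The usual causes are (a) the adapter modules you installed with adapter-install-offline.cli do not match your Keycloak server version, or (b) your WAR bundles its own (older) keycloak-core / keycloak-adapter-core jars that shadow the server's modules. Make the keycloak-wildfly-adapter download match your Keycloak server version exactly, and mark any Keycloak dependencies in your pom.xml as provided so they are not packaged into the WAR (the version property below is illustrative):

```xml
<dependency>
    <groupId>org.keycloak</groupId>
    <artifactId>keycloak-adapter-core</artifactId>
    <!-- must match the Keycloak server/adapter version you installed -->
    <version>${keycloak.version}</version>
    <scope>provided</scope>
</dependency>
```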
\subsection{Sparse-Reward Navigation with BabyAI}
Having established the usefulness of our approach in a toy setup, we now test its generality on a more challenging task that integrates visual information as a cue for the goal.
The overarching navigational setup we use for our experiments is the BabyAI research platform, which integrates the use of natural language and agent actions. In BabyAI, the agent is limited by the surrounding walls and its field of view, and thus must rely on exploration and external stimuli to accomplish its mission, such as ``Pick up the Purple Box'' or ``Go to Red Ball''. Furthermore, the reward is only granted upon mission completion, with reward assigned as $1 - \frac{n_{steps}}{n_{max}}$. Since this is a sparse-reward setting, learning the optimal policy is even more difficult.
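The reward rule just described amounts to the following (a paraphrase of the text, not BabyAI's actual implementation; the argument names are ours):

```python
def babyai_reward(mission_done: bool, n_steps: int, n_max: int) -> float:
    """Sparse reward: nothing until the mission is completed, then a bonus
    that decays linearly with the number of steps used."""
    if not mission_done:
        return 0.0
    return 1.0 - n_steps / n_max
```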
To guide meta-learning in this setting, we study how language can be used as side information to reduce the ambiguity surrounding sparse tasks. We first reformulate BabyAI as a meta-learning setup, where different tasks are performed in the same environment but with different seeds and goals (e.g., location and color of objects, direction, initial position of the agent). The resulting variability across tasks induces ambiguity about the correct task-specific policy and mission.
We then use two of BabyAI's environments, Unlock and PutNext, and take the instructions provided to the agent as side information to help it learn successfully across varying tasks.
In our experiments, we release and withhold contextual information from the pre-adapted network and hypernetwork as dynamic and static contexts. For example, if the agent's mission is to ``Open the Red Door'', the agent only knows to ``Open Door'' across tasks (static context) but does not know the specific color of the door (dynamic context). Hence, to generate dynamic contexts, we map the task's specific mission (a sentence) to a dictionary of learnable embeddings. For static contexts, however, only the general words such as ``Open'' and ``Door'' receive learnable embeddings, while specific words are replaced with ones.
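The static/dynamic split can be sketched as follows; the vocabulary split and the averaging are illustrative choices of ours, and the random vectors merely stand in for learnable embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8
GENERAL_WORDS = {"open", "the", "door", "go", "to", "pick", "up"}  # assumed split
embed = {}  # stands in for a learnable embedding table

def word_vec(w):
    return embed.setdefault(w, rng.normal(size=DIM))

def context(mission, static):
    """Map a mission string to a context vector.  With static=True,
    task-specific words (e.g. the color) are replaced with vectors of
    ones, as described in the text."""
    vecs = []
    for w in mission.lower().split():
        if static and w not in GENERAL_WORDS:
            vecs.append(np.ones(DIM))
        else:
            vecs.append(word_vec(w))
    return np.mean(vecs, axis=0)

c_dyn = context("Open the Red Door", static=False)  # knows "Red"
c_sta = context("Open the Red Door", static=True)   # "Red" withheld
```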
\subsection{Supervised learning warm-up: Ground-truth task information in sinusoid regression}
\label{sec:sinusoid}
We begin our evaluation by conducting experiments on a simple supervised learning task in the form of sinusoid regression, inspired by the task originally used to demonstrate the utility of \gls{maml}~\citep{finn2017model}.
Further, this experiment examines whether the particular way of implementing context-conditioning introduced in Section \ref{sec:method} has the potential to outperform other strong baselines.
\paragraph{Experimental setup.}
We define each task of regressing to a sine curve by uniformly sampling the amplitude from $[0.1, 5.0]$ and the phase from $[0, \pi]$.
We use the amplitude and phase as the task information for this setting.
While we expect access to this task information to improve performance, our goal is to evaluate whether context-conditioning performs better than simply feeding the context as an extra input to the base network (\textit{i.e.,}~ \gls{maml-concat}).
\paragraph{Hyperparameters.}
For both the base and context networks, we use a neural network with two hidden layers each of dimension 40, similarly to \cite{finn2017model}.
Note that, for this experiment the dimensionality of $\boldtheta$ is sufficiently low that we do not need to use the \gls{film} parameterization for the output of the context network.
Instead, we have the context network output $\boldtheta$ directly, learning a function from task information to the initial weights of the base network; more details on the implementation of the context network can be found in Appendix \ref{sec:exp_details}.
For gradient-based adaptation during meta-training, for each task we use one gradient update using a support set of size $10$; during meta-evaluation we present the model with $10$ support examples from a newly sampled task and measure mean-squared error over $256$ query set examples.
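The task distribution and evaluation protocol just described can be sketched as follows (the input range $[-5, 5]$ is our assumption, following common practice; it is not stated in the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A task = a sine curve; (amplitude, phase) doubles as task information."""
    amp = rng.uniform(0.1, 5.0)
    phase = rng.uniform(0.0, np.pi)
    return amp, phase

def sample_batch(amp, phase, n):
    x = rng.uniform(-5.0, 5.0, size=n)   # input range is our assumption
    return x, amp * np.sin(x + phase)

amp, phase = sample_task()
x_support, y_support = sample_batch(amp, phase, 10)   # one-gradient-step support set
x_query, y_query = sample_batch(amp, phase, 256)      # query set for evaluation
```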
\paragraph{Results.}
As shown in Figure~\ref{fig:regression}, we see that, as expected, task information reduces ambiguity in sinusoid regression as both \gls{maml-context} and \gls{maml-concat} significantly outperform \gls{maml}.
We also note that while \gls{maml-static} performs better than \gls{maml} due to the increase in parameters and therefore expressivity, its performance is worse than \gls{maml-context} and \gls{maml-concat}.
Lastly, we see that the \gls{maml-context} method outperforms the \gls{maml-concat} method, suggesting that even in a toy setup, context-conditioning is more useful than simply providing the context as an extra input to \gls{maml}.
\paragraph{Ablation.}
We next investigate the effect of ablating the amount of context that is provided to \Gls{maml-concat} and \gls{maml-context} where we provide only one of the phase or amplitude as task information (refer to Figure \ref{fig:regression_ablation}).
As expected, ablating the task information worsens the performance of both \Gls{maml-concat} and \gls{maml-context}.
Further, ablating phase information (Figure \ref{fig:regression_ablation}, left panel) affects the performance more compared to ablating amplitude information (Figure \ref{fig:regression_ablation}, right panel).
Next, \gls{maml-concat}'s performance worsens more compared to \gls{maml-context} in both experiments suggesting that context-conditioning is more robust to noisy or partial context information.
Importantly, in both experiments, the performance of \gls{maml-context} is still significantly better than \gls{maml} whereas \gls{maml-concat} is not better when phase information is ablated (refer to the plot of \gls{maml} in Figure~\ref{fig:regression} for comparison), suggesting that context conditioning on even partial task information is useful in reducing task ambiguity.
\paragraph{Extrapolation.} We next compare the extrapolation performance of the different methods by sampling an amplitude from $[5.0, 10.0]$, \textit{i.e.,}~ outside the range given during meta-training, $[0.1, 5.0]$.
The performance of \gls{maml-context} is greater than the performance of \gls{maml} again, showing the benefit of context-conditioning (refer to Figure \ref{fig:regression_extrapolation}).
Interestingly, we see that \gls{maml-concat} performs worse than all other methods and the extra parameter-count baseline \gls{maml-static} is quite competitive and performs similarly to \gls{maml-context}, suggesting that the context network is still sensitive to the distribution of task information observed in training.
In hindsight, this is to be expected, as the meta-learning formulation makes the assumption that meta-training and meta-evaluation tasks are drawn from the same distribution.
\section{Introduction}
\label{sec:intro}
Flexibility is one of the defining features of cognition. Any intelligent organism must be able to adapt its behavior to continually changing and evolving environmental and task demands~\citep{braver2009flexible}. The processes behind such adaptability are collectively referred to as \emph{cognitive control}~\citep{cohen2000anterior, botvinick2001conflict, botvinick2014computational}, and a primary goal of modern cognitive psychology and neuroscience involves understanding the mechanisms that underlie cognitive control in humans~\citep{barcelo2006task}.
A notable feature of cognitive control is the ability to derive complex rules from contextual cues~\citep{monsell1996control, salinas2004fast, dosenbach2006core, sakai2008task, collins2013cognitive}. As an example, consider a child raised in a bilingual environment with each parent speaking a different language. Upon learning that each parent speaks a different language, the child may come to expect that depending on the speaker (the context), the same object (the stimulus) will be labeled using different words (the response)~\citep{werchan20158}. In this manner, contextual information such as visual or linguistic cues enables adults and children alike to recognize the underlying structure of a new problem they face, which, in turn, enables them to decide on a strategy for interaction within the novel context ~\citep{collins2012reasoning, collins2013cognitive, werchan20158}.
Although it is well established that context-dependent adaptation is vital for flexible behavior, the computational mechanisms underlying how humans use contextual information to guide learning in a new situation are still poorly understood. While recent computational works have shed essential insights into understanding these mechanisms in simplified settings~\citep{collins2013cognitive, eckstein2020computational}, we lack computational models that can be scaled up to more realistic tasks.
In the present work, we offer a new perspective by proposing that context-dependent adaptation can be explained within a context-conditioned meta-learning framework. In standard meta-learning, a meta-learned global model determines the initialization of task-specific models, which are subsequently adapted to online feedback from each task. Here, we propose \gls{maml-context}, in which contextual cues about task structure--termed \emph{task information}-- guide the initialization of task-specific models, enabling the meta-learned prior over task structures to be informed by task information, similar to how human learning is guided by context.
We implement \gls{maml-context} by augmenting a gradient-based meta-learning algorithm~\citep{finn2017model} with a \emph{context network} that learns the relationship between task information and the initialization of task-specific models. We first use this implementation to demonstrate that the \gls{maml-context} framework can capture the context-sensitivity of human behavior in a simple but well-studied cognitive control task. We then shift our focus to larger-scale simulations, where we demonstrate competitive performance against several baselines on supervised and reinforcement learning tasks. Our work thus contributes a framework for understanding key aspects of human adaptability and a cognitively-inspired algorithm that is competitive in realistic settings.
\section{Background}
\label{ref:background}
\subsubsection{Computational accounts of context-specific adaptation in humans.}
Although the importance of contextual cues in guiding human flexibility is well-established, very little work has looked into how contextual information guides such adaptability. Recent computational works have made progress towards understanding these mechanisms by suggesting that context-specific adaptation can be modeled using nonparametric Bayesian methods~\citep{collins2013cognitive} as well as hierarchical reinforcement learning~\citep{eckstein2020computational}. However, one limitation of these works is that the tasks modeled using these frameworks are relatively simple compared to the problems faced by humans. This limitation restricts our understanding of context-sensitive adaptation as we do not have models that can capture our everyday flexibility and adaptability. Despite this limitation, a critical insight from these models is that they suggest that the learning processes involved in cognitive control occur at multiple levels of abstraction in that prior knowledge and cognitive control constrain the lower-level, stimulus-response learning~\citep{collins2018learning}. We take this insight as the motivation to pursue modeling context-specific adaptation under a \emph{meta-learning} framework, which realizes an analogous hierarchical decomposition of learning.
\subsubsection{Meta-learning.}
Meta-learning aims to learn a model suitable for a distribution of tasks, which subsequently enables few-shot adaptation to new tasks sampled from the same distribution~\citep{schmidhuber1987evolutionary,bengio1992optimization,thrun1998lifelong}, formulated in recent
works as the learning of global parameters that are shared
between independent, task-specific models \citep{finn2017model, vinyals2016matching}. While meta-learning algorithms can capture some elements of human adaptability (such as the ability to learn from very few examples), standard formulations of meta-learning are not sufficient to capture context-sensitive adaptation. This is because popular meta-learning approaches~\cite[\textit{e.g.,}~][]{vinyals2016matching, finn2017model,snell2017prototypical} and their derivatives learn in the absence of abstract task information by treating each task as a uniformly random draw from an underlying task distribution and do not use context to prime their learning.
\subsubsection{Context-conditioning in meta-learning.}
Recent works have explored augmenting meta-learning with conditioning information by modifying the meta-learner architectures to encode task-specific data into a latent task representation \citep{oreshkin2018tadam, pahde2018cross, vuorio2018toward, chen2019adaptive, lee2018gradient, lee2019learning, baik2019learning, lan2019meta, yoon2019tapnet}. Analogous to the way learning loops between abstract contexts and high-level rules constrain the lower-level learning loop in the brain, in these frameworks the outer learning loop between the latent task representation and high-level rules constrains the inner learning loop.
However, one important distinction between context-conditioning meta-learning and context-specific human adaptation is that the former produces the task encoding using the support set \textit{i.e.,}~ using the \emph{same} data over which the meta-learning objective is defined. For instance, \citep{oreshkin2018tadam, vuorio2018toward, baik2019learning, lan2019meta, lee2018gradient} use a conditioning network to infer information about the task, but they do so without employing external contextual information. Similarly, \cite{lee2018gradient} propose a meta-learning model that uses a transformation network to augment the base network with an implicit conditional network as a linear transformation on the weights but uses the same data as the base network. \cite{pahde2018cross, chen2019adaptive} also use contextual information at the instance or class level without any conditioning network. \cite{yoon2019tapnet} linearly transform feature embeddings with a task-specific projection but does not employ contextual information or a conditioning network. This means that while context-conditioning meta-learning enables efficient few-shot learning, it cannot fully capture and explain context-sensitive adaptation in humans.
\section{The present research}
In this work, we consider meta-learning as a useful starting point towards modeling context-sensitive adaptation in humans. However, as noted previously, unlike humans, standard formulations of meta-learning do not employ contextual cues, and only in some cases, infer a task representation from task-specific data.
To account for human behavior, we instead propose to use contextual cues to \emph{guide} meta-learning. Unlike prior works on meta-learning, we produce a task representation from the \emph{extra} available contextual information and focus on the utility of this information in structuring learning at a higher level of abstraction rather than the increased expressiveness that architectural modifications bring. This structure is motivated by human learning, in which contextual cues serve to inform a prior about the task structure at hand, which then enables rapid adaptation to novel contexts. Our experiments show that this task-specific contextual-adaptation can not only capture human behavior but also improve the speed of learning of meta-learning in supervised and reinforcement learning tasks.
Our key contributions are as follows. First, to explain context-sensitive adaptation in humans, we introduce a framework that uses task information to guide meta-learning. Second, we demonstrate that our framework can successfully capture human behavior in a well-known cognitive control task. Modeling human behavior in this task allows us to understand important aspects of human flexibility and cognitive control. Third, and unusually for a cognitive modeling framework, we show that models implemented in our framework can outperform competitive baselines in more complex problem domains such as \gls{celeba} and \gls{metaworld}. Thus, our work also contributes towards developing a cognitively inspired meta-learning framework that can be applied to more realistic problem domains.
\section{A meta-learning account of context-specific adaptation in humans}
\label{sec:method}
We now present our framework for capturing context-specific adaptation. In a standard meta-learning setup, a parametric meta-learner encodes information about the shared structure of the distribution of tasks, $p(\mathcal{T})$, into a set of global parameters $\boldtheta$ from which all task-specific predictors are derived. In particular, for each task $\mathcal{T}\taskidx{j} \sim p(\mathcal{T})$, the meta-learner receives a task-specific dataset $\mathcal{D}_{j}=\left\{\mathbf{x}_{j_{i}}, \mathbf{y}_{j_{i}}\right\}$ and produces a predictive distribution $p_{\boldtheta}(\hat{\mathbf{y}}_{j} \;\vert\; \hat{\mathbf{x}}_{j}, \mathcal{D}\taskidx{j})$ for new examples $\hat{\mathbf{x}}_{j}$ from the same task.
Here, to capture context-sensitive adaptation, we propose to augment the standard meta-learning problem statement in a way that is analogous to the way contextual cues prime human learning in a new environment. In particular, we posit that the additional environmental contextual information, $\mathbf{c}_j$, can be leveraged as conditioning information in order to prime the initial state of the model $\boldtheta$ for a specific task $\mathcal{T}_j$ (also refer to Figure~\ref{fig:architecture}). Formally, we implement conditioning on the task information $\mathbf{c}$ by parameterizing the initialization $\boldtheta$ as the output of a context model $g$ with parameters $\psi$. Using experience from the task, $\boldtheta$ is subsequently adapted with gradient descent to task specific parameters $\boldphi$, as in \gls{maml}. In practice, we take $g$ to be a neural network with weights $\psi$, which we refer to as a \emph{context network}, and update $\psi$ via back-propagation. Note that $\psi$ is updated only during the meta-update step and during the inner loop for task-specific adaptation, $\boldtheta$ is used to initialize $\boldphi$ which is subsequently updated based on task-specific data.
\subsubsection{Supervised meta-learning with task information.} We consider a family of tasks $\mathcal{T}$ with shared structure that enables a meta-learner to learn to solve a task from $\mathcal{T}_i\sim p(\mathcal{T})$. In the supervised learning setting, each task $\mathcal{T}_i$ consists of a set of examples $\mathbf{x}$ and annotations $\mathbf{y}$ (\textit{e.g.,}~ images with classification labels). Gradient-based meta-learning methods choose a parameterized model (base learner) and define an optimization objective over $\mathcal{T}$. For instance, the \gls{maml} algorithm~\cite{finn2017model} uses the following objective:
\begin{equation}
\min_{\boldtheta} \mathbb{E}_{\mathcal{T}_i}\left[\mathcal{L}_{\mathcal{T}_i} (f_{\boldtheta'})\right] = \mathbb{E}_{\mathcal{T}_i}\left[\mathcal{L}_{\mathcal{T}_i} \left(f_{\boldtheta - \alpha \nabla_{\boldtheta} \mathcal{L}_{\mathcal{T}_i}(f_{\boldtheta})}\right)\right]
\end{equation}
where $f$ is a parameterized function representing the base learner or policy, $\boldtheta$ refers to the parameters that are optimized in the outer loop, and the $\boldphi$ parameters are used to compute the objective with respect to each task.
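Concretely, one inner adaptation step followed by an outer update can be sketched as below for a toy linear regressor. This is a first-order simplification (MAML proper differentiates through the inner step), and the task family is invented for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def mse_and_grad(theta, x, y):
    """Mean-squared error and its gradient for f(x) = theta[0]*x + theta[1]."""
    err = theta[0] * x + theta[1] - y
    return np.mean(err ** 2), np.array([2 * np.mean(err * x), 2 * np.mean(err)])

def sample_task():
    """Toy task family (our invention): regress to a random line through the origin."""
    slope = rng.uniform(-2.0, 2.0)
    def draw(n):
        x = rng.uniform(-1.0, 1.0, n)
        return x, slope * x
    return draw

alpha, beta = 0.1, 0.01        # inner / outer step sizes
theta = np.zeros(2)            # meta-initialization (the global parameters)

for _ in range(2000):
    draw = sample_task()
    xs, ys = draw(10)                      # support set
    _, g = mse_and_grad(theta, xs, ys)
    phi = theta - alpha * g                # inner loop: phi = theta - alpha * grad L
    xq, yq = draw(25)                      # query set from the same task
    _, gq = mse_and_grad(phi, xq, yq)
    theta = theta - beta * gq              # first-order outer update on theta
```

On average over new tasks, evaluating the query loss at the adapted parameters $\boldphi$ gives a lower error than evaluating it at the meta-initialization $\boldtheta$ directly.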
When employing task information, the meta-objective becomes
\begin{equation}\label{eq:maml-ctx}
\min_{\psi,\boldtheta}\
\mathbb{E}_{\mathcal{T}_i \sim p(\mathcal{T})}\left[
\mathcal{L}_{\mathcal{D}^\text{q}\taskidx{i}} \left(
f_{\{ g_{\psi}(\mathbf{c}_i),\;\boldtheta - \alpha \nabla_{\boldtheta} \mathcal{L}_{\mathcal{D}^\text{s}\taskidx{i}}\left(f_{\{ g_{\psi}(\mathbf{c}_i),\boldtheta\}}\right) \}}
\right)\right]~,
\end{equation}
where the principal difference is that the initial parameterization of the base network depends not only on global parameters $\boldtheta$, but also task-information-dependent parameters produced as the output of $ g_{\psi}(\cdot)$. With this meta-objective, we can thus fully differentiate the objective with respect to $\boldtheta$; we may make a further application of the chain rule to derive an update for $\psi$ also using the objective value at the last inner adaptation step.
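Structurally, the only change relative to plain \gls{maml} is that part of the base-network initialization is produced by the context network; a minimal sketch (the array sizes and tanh nonlinearity are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
C_DIM, CTX_OUT, GLOBAL = 3, 4, 6   # sizes are illustrative

psi = rng.normal(scale=0.1, size=(CTX_OUT, C_DIM))   # context-network weights
theta = rng.normal(scale=0.1, size=GLOBAL)           # global meta-parameters

def g(psi, c):
    """Context network g_psi: task information -> context-dependent block."""
    return np.tanh(psi @ c)

def initial_params(psi, theta, c):
    # Base-network initialization {g_psi(c), theta}.  The inner loop adapts a
    # copy of this vector; psi itself is updated only at the meta-update step.
    return np.concatenate([g(psi, c), theta])

phi0 = initial_params(psi, theta, np.array([1.0, 0.0, -1.0]))
```

Different task information $\mathbf{c}$ thus changes only the context-dependent block of the initialization, while the global block $\boldtheta$ is shared across tasks.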
\subsubsection{Meta-policy search with task information.}
\Glsfirst{rl} assumes a \gls{mdp} consisting of \smash{$(S, A, P, R, \gamma)$}, and the goal is to discover a policy $\mathbf{\pi}$ that maximizes the return $\sum_{k=0} \gamma^k R_{k+1}$, the sum of episodic rewards discounted by $\gamma \in [0, 1)$~\citep{sutton2018reinforcement}.
\Glsfirst{meta-rl} generalizes this setting to a distribution $\rho$ over \gls{mdp}s, with the aim of finding the policy that maximizes the expectation of returns under this distribution:
$\mathbb{E}_\rho\left[\sum_{k=0} \gamma^k R_{k+1}\right]$.
Similar to the supervised scenario, we can obtain a solution to the \gls{meta-rl} problem by performing an outer-loop search for parameters that maximize expected return across a distribution of control tasks,
$\mathbb{E}_{\mathcal{T}_i}\left[
\mathcal{L}_{\mathcal{T}_i} (f_{\boldtheta})\right] = \mathbb{E}_{\mathcal{T}_i}\left[-\mathbb{E}_{(s_t, a_t) \sim q_{\mathcal{T}_i}} \big[\sum_t R(s_t, a_t)\big]\right]$
where $q_{\mathcal{T}_i}$ is the transition distribution of task $\mathcal{T}_i$, $s_t$ and $a_t$ are state and action at time $t$, respectively. The main difference from the supervised case is that we cannot explicitly differentiate through the dynamics of the environment, and so the standard approach is to use policy gradient methods to update meta-parameters $\boldtheta$; we refer to \cite{finn2017model} for more details. With task information, algorithmically, updating $\psi$ and $\boldtheta$ is similar to the supervised case. During the inner adaptation steps, only $\boldtheta$ is updated to compute the task-specific parameters $\boldphi$. However, during the meta-update step, the gradient of the post-update objective value is used to update both $\psi$ and $\boldtheta$, in a generalization of the \gls{maml} algorithm.
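To make the inner adaptation step concrete, the following sketch applies a score-function (REINFORCE) policy-gradient update to a toy softmax bandit. This is a deliberate simplification of the \gls{promp}-style machinery used in our experiments, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_grad(theta, actions, rewards):
    """Score-function gradient estimate for a softmax policy over actions:
    grad log pi(a) = onehot(a) - pi, weighted by (reward - baseline)."""
    pi = softmax(theta)
    baseline = rewards.mean()
    g = np.zeros_like(theta)
    for a, r in zip(actions, rewards):
        onehot = np.zeros_like(theta)
        onehot[a] = 1.0
        g += (r - baseline) * (onehot - pi)
    return g / len(actions)

def inner_adapt(theta, reward_fn, alpha=2.0, n=1000):
    """One inner-loop policy-gradient step: sample actions from pi_theta,
    estimate the gradient, and take an ascent step to obtain the
    task-specific parameters phi."""
    pi = softmax(theta)
    actions = rng.choice(len(theta), size=n, p=pi)
    rewards = np.array([reward_fn(a) for a in actions])
    return theta + alpha * reinforce_grad(theta, actions, rewards)
```

After one adaptation step on a task whose reward favors a particular action, the adapted policy places more probability mass on that action; in the full method this update yields $\boldphi$, while the meta-update differentiates the post-adaptation objective with respect to both $\psi$ and $\boldtheta$.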
\subsubsection{Implementing a context-conditioning network.}
Learning a function $g$ that produces a parameter initialization for a high-dimensional function $f$ such as a neural network poses problems of under-fitting and computational inefficiency.
Several methods have been proposed to alleviate this issue \citep[\textit{e.g.,}~][]{ha2017hypernetworks,mackay2019self}, all resting on the premise (or empirical demonstration) that producing a subset of the parameters of $f$ is sufficient.
In all our large-scale experiments, we make use of the \gls{film} parameterization from \cite{perez2018film}; namely, the context network $g$ produces the shift and scale parameters in the hidden layers~\citep{ioffe2015batch} of the base network $f$, thereby acting to linearly transform activations in the base network.
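A minimal sketch of this \gls{film} parameterization: the context network maps task information to per-feature scale and shift parameters, which linearly transform the base network's hidden activations. Centring the scale at 1 (so that zero weights give the identity map) is a common convention we assume here, not necessarily the exact implementation.

```python
import numpy as np

def film(h, gamma, beta):
    """Feature-wise linear modulation of hidden activations h."""
    return gamma * h + beta

def context_network(c, W1, W2):
    """Toy context network g_psi: maps task information c to the FiLM
    scale (gamma) and shift (beta) parameters for one hidden layer.
    The output is split in half: first half -> gamma, second -> beta."""
    z = np.tanh(c @ W1)
    out = z @ W2
    d = out.shape[-1] // 2
    return 1.0 + out[..., :d], out[..., d:]  # gamma centred at 1
```

With all context-network weights at zero, the modulation reduces to the identity, so the base network's behavior is unchanged until the context network learns a useful conditioning.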
\section{Modeling human behavior}
\label{sec:cognitive}
We begin by applying our proposed framework to capture human behavior in a well-known cognitive control experiment.
\subsubsection{Task description.} We model our setup after the experiments in \cite{werchan20158, werchan2016role}. In their study, 8-month-old infants participated in a learning task followed by a violation-of-expectation inference test. In the learning task, infants viewed toy-word mappings that could be grouped into distinct rule sets using the faces and corresponding voices as higher-order contexts (refer to Figure \ref{fig:cognition1}). Each face-voice context labeled the toys using different words, similar to a bilingual environment in which one caregiver speaks English and another caregiver Spanish. Near the end of the learning task, a novel face-voice context was presented with several observed toy-word pairs and a novel toy-word pairing. This is akin to the infant observing a new caregiver introducing a new word in Spanish. During the inference test, infants were presented with the first two face-voice contexts from the learning task paired with the novel toy-word pairing presented at the end of the learning task (refer to Figure~\ref{fig:cognition3}). One of these presentations was consistent with the rule set structure formed during learning, while the other was inconsistent. Sensitivity to this contrast would demonstrate that the infant infers that the Spanish-speaking caregiver should use the novel object-label mapping introduced by the third caregiver, while the English-speaking caregiver should not. Infants looked longer at the inconsistent pairing compared to the consistent pairing, implying greater surprisal during inconsistent pairings.\footnote{Looking time is a common metric used to study cognitive states in children, such as surprise and expectation violation.} If contextual cues did not help learn a hierarchical rule set, then no difference in looking time would have been observed.
Similar studies have also been undertaken with adults \citep{collins2012reasoning, collins2013cognitive}, demonstrating that both adults and infants use contextual cues for faster task adaptation.
\input{figures/exp_celeba}
\subsubsection{Experimental setup.} If our framework can capture context-sensitive adaptation, then we should be able to replicate the looking-time results from \cite{werchan20158, werchan2016role}. To test this, we created an analogous problem setting consisting of a similar learning task and inference test. During the learning task, we provided tasks comprising a context $\mathbf{c} \in \{0, 1, 2\}$ representing the speaker identity and two disjoint batches of stimulus-response pairs $(\mathbf{x}, \mathbf{y}) \in \{0, 1, 2\} \times \{0, 1, 2, 3, 4\}$, each representing an object identity paired with a word label. As in the behavioral learning task, stimulus-response mappings appear only within valid contexts. Further, one of the stimulus-response pairs,
$(\mathbf{x}, \mathbf{y}) = (2, 4)$, is presented in only one context ($\mathbf{c}=2$) even though it is valid in another ($\mathbf{c}=0$). For the inference test, we create two conditions, consistent and inconsistent. In the consistent condition, the context network is presented with context $\mathbf{c}=0$, the produced parameters are adapted with seen examples from that context, and the adapted model's loss is evaluated on the held-out stimulus-response mapping $(\mathbf{x}, \mathbf{y}) = (2, 4)$. The inconsistent condition is identical except that the context network is presented with context $\mathbf{c}=1$. The detailed data sampling procedure and worked-out task examples are included in the Supplementary.
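The sampling of the learning task can be sketched as follows, with the valid stimulus-response mappings per context hard-coded; the exact batch composition follows the sampling table in the Supplementary, and the helper names are ours.

```python
import numpy as np

# Valid stimulus-response mappings per context (c -> {x: y}), mirroring
# the rule-set structure of the behavioral study: contexts 0 and 1 label
# the same stimuli with different responses, and the pair (x=2, y=4) is
# valid in context 0 but only ever shown under context 2.
MAPPINGS = {
    0: {0: 0, 1: 1},
    1: {0: 2, 1: 3},
    2: {0: 0, 1: 1, 2: 4},
}

def sample_task(context, n_support=10, rng=None):
    """Sample a support batch of (x, y) pairs valid in the given context."""
    if rng is None:
        rng = np.random.default_rng()
    xs = rng.choice(list(MAPPINGS[context]), size=n_support)
    ys = np.array([MAPPINGS[context][x] for x in xs])
    return context, xs, ys
```

At test time, the model adapted under context $\mathbf{c}=0$ is evaluated on the held-out pair $(2, 4)$ (consistent), while the model adapted under $\mathbf{c}=1$ is evaluated on the same pair (inconsistent).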
\subsubsection{Hyperparameters.} Both the base and context networks use a neural network with two hidden layers of size 10. Since $\boldtheta$'s dimensionality is sufficiently low, the context network, which maps task information to base network weights, directly outputs $\boldtheta$. For task-specific adaptation, we use one gradient update using a support set of size $10$. During inference, we present the model with $2$ support examples from a newly sampled task and measure mean-squared error over $1$ query example.
\subsubsection{Results.} We compare our approach, which we term \gls{maml-context}, against \gls{maml}, the meta-learning method for \gls{sl} in \citep{finn2017model}. We hypothesize that our framework should be sensitive to the evaluation condition, just like humans. Since \gls{maml-context} uses the context as higher-order information, its error in the consistent condition should be lower than its error in the inconsistent condition (analogous to the difference in looking time/surprise in humans). On the other hand, because \gls{maml} does not utilize contextual information, its error should not be influenced by the condition; its performance therefore serves as a natural baseline for our framework. Note that the absolute values of the validation errors are not particularly important; what matters is the relative difference in validation errors across conditions.
We first see that \gls{maml-context} learns faster than \gls{maml} (Figure~\ref{fig:cognition2}). This is not surprising, as \gls{maml-context} employs the contextual information whereas \gls{maml} does not. We further note that the variance in the performance of \gls{maml} is quite high. Next, in Figure~\ref{fig:cognition5} we see that \gls{maml-context} qualitatively reproduces the looking-time results from \citep{werchan20158, werchan2016role}: the error of \gls{maml-context} in the consistent condition is considerably lower than its error in the inconsistent condition ($3.65$ vs. $6.19$). A paired t-test revealed that this difference was statistically significant, $t(8) = -19.3, p < 0.001$. We further observe that, as per our predictions, the baseline \gls{maml} is not affected by the difference in condition, as its error in the consistent condition is similar to that in the inconsistent condition ($3.9$ vs. $3.7$). A paired t-test revealed that this difference was not significant, $t(8) = 0.1, p = 0.5$. We also observe that the variance in the error of \gls{maml} is quite high. This is partly driven by the high variance during learning: whenever \gls{maml} reaches a lower error on the meta-training set, it overfits to the training set (due to the lack of context information), leading to a very high loss on the validation set. These results show that the predictions made by our proposed framework are consistent with human behavior in a well-studied cognitive control task.
\section{Large-scale experiments}
\label{sec:expoverview}
The previous section showed that \glsfirst{maml-context} is consistent with psychological findings about context-sensitive adaptation on a controlled cognitive task. We now evaluate whether \gls{maml-context} can perform competitively in more complex problem settings by guiding adaptation in meta-learning.
\subsubsection{Overview of task information.} In the \acrshort{mujoco} setting, we explore task information as a diagnostic cue by using a scalar environment parameter as task information. For the more challenging \acrshort{celeba} dataset, we use a binary vector of attribute information as task information. For the \gls{metaworld} tasks, we use the 3D goal position as task information.
\subsubsection{Baseline comparisons.} We compare our approach of context-conditioned adaptation, \gls{maml-context}, against the three categories of baselines described below. For hyperparameters that are common to all comparison methods, we use the same settings as in \cite{finn2017model} and \cite{rothfuss2019promp} where applicable.
\textbf{\Gls{maml}} is the meta-learning method for \gls{sl} as described in \citep{finn2017model} and \textbf{\gls{promp}} is a policy-gradient meta-\gls{rl} method that improves upon the initial application of \gls{maml} to \gls{rl} in \cite{finn2017model} by combining \gls{maml} and \gls{ppo} \cite{rothfuss2019promp}.
These methods make no use of task information and serve as lower bounds to task-information conditioning.
\textbf{\Gls{maml-static}} and \textbf{\gls{promp-static}} are baselines with the same architecture as \gls{maml-context} but do not depend on the context and instead replace the context $\mathbf{c}$ with a constant vector; this baseline is intended as a parameter count-equivalent baseline to \gls{maml-context} in order to distinguish architectural differences in performance as compared to \gls{maml} and \gls{promp}.
\textbf{\Gls{maml-concat}} and \textbf{\gls{promp-concat}} use the same architecture as the \gls{maml-context} method but use task information in the form of concatenation to the observation; this setup is analogous to goal-conditioned RL, where policies are trained to reach a goal state that is provided as additional input~\cite{kaelbling1993learning,schaul2015universal,pong2018temporal,sutton2019horde}.
These baselines are provided with the same amount of information as \gls{maml-context} but do not decouple context and task-specific feedback into initialization and adaptation phases, respectively; they therefore test the utility of task information in priming meta-learning, as humans do, rather than simply treating it as extra observational input.
\subsection{Ambiguous classification with \acrshort{celeba}}
\label{sec:celeba}
\subsubsection{Experimental setup.}
We use a construction similar to \cite{finn2018probabilistic} to generate an ambiguous binary classification task with the \gls{celeba} dataset.%
\footnote{We focused on the \gls{celeba} dataset instead of \gls{mini}, the usual dataset for the evaluation of few-shot classification methods~\cite{vinyals2016matching}, as we can easily generate task descriptors.}
In particular, for each task, we sample 2 of the 40 attributes from \gls{celeba}, then subsequently sample for the support set one image that contains these attributes (a positive example) and one image that does not contain these attributes (a negative example); this binary classification task is often ambiguous, as most images in \gls{celeba} have more than two attributes active.
The task information is provided in the form of a two-hot vector that identifies the two attributes upon which the base network has to make a classification decision.
The query set comprises 15 examples as in the experimental setup in \cite{vinyals2016matching}.
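For concreteness, the two-hot task descriptor (and the embedding sum used by the context network, cf. the hyperparameters below) can be sketched as follows; the helper names are illustrative.

```python
import numpy as np

N_ATTRIBUTES = 40  # CelebA provides 40 binary attributes

def two_hot(attr_a, attr_b, n=N_ATTRIBUTES):
    """Encode the pair of sampled attributes as a two-hot task descriptor."""
    v = np.zeros(n)
    v[[attr_a, attr_b]] = 1.0
    return v

def embed_task(v, E):
    """Embed the two-hot vector with a learned matrix E (one row per
    attribute) and sum the embeddings of the active attributes;
    this is equivalent to E.T @ v."""
    return E.T @ v
```

The summed embedding is then fed to the feed-forward context network, which produces the \gls{film} parameters for the base classifier.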
\subsubsection{Hyperparameters.}
The context network pipeline embeds the two-hot task information vector via a learned embedding matrix; these embeddings are summed then fed as input to a two-layer feed-forward neural network with 40 hidden units.
As per the implementation of \gls{film}-conditioning, the context network outputs a feature map that performs linear transformations to the base network's hidden activations.
The base network itself is a four-layer convolutional network with 32 filters applied at stride 2, similar to the small-scale convolution network employed in few-shot classification on the \gls{mini} dataset~\citep{vinyals2016matching,finn2017model}.
We set hyperparameters on the held-out validation set; all settings as well as details on the implementation of the context network are included in Supplementary.
\subsubsection{Results.} As shown in Table 1, \gls{maml-static} suffers from the need to fit the extra parameters and \gls{maml} performs the task with a low degree of accuracy.
Next, we see that \gls{maml-context} performs marginally better than \gls{maml-concat}. These results suggest that for the highly ambiguous few-shot \gls{celeba} task, our cognitively inspired method outperforms context-independent methods such as \gls{maml} while performing competitively (if not better) compared to the context-concatenation method \gls{maml-concat}.
\subsection{Parameterized \acrshort{mujoco} Tasks}
\label{sec:mujoco}
We next compare the above methods on simple continuous control tasks using a set of three parameterized environments from the \gls{mujoco} simulator~\citep{todorov2012mujoco}.
For the below results, the average return for the pre- and post-task-specific adaptation is computed from trajectories sampled before and after the inner loop for task-specific adaptation.
\subsubsection{Environments.}
In these environments, the underlying dynamics of the \gls{half-cheetah}, \gls{walker-2d}, and \gls{ant-goal} environments depend on a randomly sampled scalar parameter: In \gls{half-cheetah} and \gls{walker-2d}, a scalar parameter controls the direction of motion (forward or backward) that is produced for a given action; for \gls{ant-goal}, a randomly sampled 2D position defines a goal to which the actuator must be moved.
We use this scalar parameter as task information for this setting.
\subsubsection{Hyperparameters.}
For all methods, including the \gls{promp} and \gls{promp-concat} baselines, the base policy is a fully-connected network with two hidden layers of dimension $64$ and ReLU nonlinearities, as in \cite{rothfuss2019promp}.
For \gls{promp-context} and the \gls{promp-static} baseline, the base policy is conditioned with a \gls{film} module; this module is fed contextual input and outputs a feature map that performs linear transformations on the policy network's hidden layers.
In our experiments, \gls{film} is represented as a fully connected network with two hidden layers (of increasing dimension--32 and 64--to achieve up-sampling of the context) and outputs $W_i$ and $b_i$ for each hidden representation $h_i$ in the policy network, performing the transformation $h'_i = W_i \odot h_i + b_i$.
For other hyperparameters that are common to all four comparison methods, we use the same settings as are used in \cite{rothfuss2019promp}.
In particular, the number of inner optimization steps is set to one, entailing two rollouts in the environment to evaluate the pre- and post-adaptation objectives.
\subsubsection{Results.}
Figure \ref{fig:mujoco} reports the post-adaptation performance of all methods.
First, task information is beneficial as \gls{promp-context} consistently outperforms both \gls{promp} and \gls{promp-static} in all three environments.
Next, \gls{promp-context} performs better than \gls{promp-concat} in the \gls{half-cheetah} environment and at least as well as \gls{promp-concat} in \gls{walker-2d} and \gls{ant-goal}.
This suggests that our cognitively inspired approach of learning conditionally with the task information is a competitive parameterization compared to learning jointly (\textit{i.e.,}~ \gls{promp-concat}, a standard in goal-conditioned \gls{rl} setups).
\subsection{\gls{metaworld} manipulation tasks}
\label{sec:metaworld}
We next investigate a more challenging setting, using a set of five parameterized environments from the \gls{metaworld} benchmark~\citep[][ Figure~\ref{fig:metaworld} (top)]{yu2019meta}.
\subsubsection{Environments.}
The \gls{metaworld} benchmark presents a variety of simulated continuous control tasks that consist of a 4-dimensional robot actuator that is required to move from an initial position to a goal position.
In these environments, we use the goal position, a 3$\times$1 vector, as task information; this goal position is re-sampled when a new task is encountered.
Similar to the \gls{mujoco} environments, the goal position is normally treated as a direct concatenation to the state observation; we instead use the goal position as an input to the context module in order to investigate the effect of context-conditional adaptation in a meta-policy search algorithm.
This use has two advantages: First, the goal position is readily available in different environments in \gls{metaworld}.
Second, and more importantly, we hypothesize that goal information serves as an integral cue for reducing task ambiguity even in the various dense-reward environments in \gls{metaworld}.
Furthermore, a task is defined as a fixed goal with varying initial states.
This makes these environments more challenging: rather than fixing the goal and initial state, the pre- and post-adaptation policies are evaluated with different initial states and goals.
\subsubsection{Hyperparameters.}
We use the same base policy and context network implementations as in the previous section.
Since \gls{metaworld} environments are substantially more difficult to solve than the \gls{mujoco} environments, they required more inner adaptation steps to show post-adaptation improvement; an inventory of hyperparameter settings is provided in the Supplementary.
\subsubsection{Results.}
Results are shown in Figure~\ref{fig:metaworld}.
We first observe that even for \gls{reach}, a very simple environment, task information is necessary to perform well on the task, as evidenced by the superior performance of \gls{promp-context} and \gls{promp-concat}.
One possible explanation for this is that the reward available in \gls{reach} is insufficient to guide meta-learning by itself, and that the goal information serves as a useful cue to guide meta-learning.
Additionally, we observe that in the \gls{reach} environment, context-conditioning is not especially beneficial compared to context-concatenation as both \gls{promp-context} and \gls{promp-concat} perform similarly on this task.
Next, we see that in both \gls{door-lock} and \gls{door-unlock}, task information is not necessarily crucial to perform well, as both \gls{promp-context} and \gls{promp-concat} perform similarly to \gls{promp}.
Interestingly, the over-parameterized architecture in the \gls{promp-static} method worsens the performance on the \gls{door-unlock} environment.
The most interesting cases are the \gls{soccer} and \gls{basket-ball} environments: here, we see that \gls{promp-context} significantly outperforms all other methods.
Furthermore, we see that simply providing task information as an extra input is not beneficial, as evident from the performance of the \gls{promp-concat} method on these two environments.
In summary, our proposed contextual-conditional meta-learning outperforms all the methods (including \gls{promp-concat}) on both the \gls{soccer} and \gls{basket-ball} environments and performs as well as the other methods (if not better) in the remaining environments.
As a main takeaway, results from this experiment suggest that our cognitively inspired framework is a promising way to improve the performance of meta-learning on more challenging tasks such as \gls{metaworld}.
\section{Conclusion}
\label{sec:conclusion}
An extensive literature in psychology and neuroscience demonstrates that context-specific adaptation is an integral component of cognitive control~\citep{monsell1996control, dosenbach2006core, sakai2008task}. Here, we explain context-sensitive adaptation under a meta-learning framework that integrates task information to guide adaptation to new tasks. Our modeling results on a cognitive control task support existing theories that propose higher-order contextual information helps humans structure learning~\citep{collins2012reasoning, frank2012mechanisms, collins2013cognitive, donoso2014foundations, eckstein2020computational}. According to these theories, hierarchical learning based on contextual cues ensures that learning new information does not conflict with behaviors learned in other contexts; for instance, an infant in a bilingual environment receiving two different labels for the same word would not get confused when labels are consistent with the higher-order context provided by the identity of the speaker.
Our large-scale experiments further show that our cognitively inspired meta-learning framework is also a promising approach towards improved adaptation in meta-learning. Analogous to the way people use contextual cues as a prior over task structure, our framework thus highlights the value of task information in bringing meta-learning algorithms closer to human-like learning.
\section{Extended related work}
\label{sec:litrev}
\paragraph{Context-conditioning in meta-learning.}
We now briefly review prior work on context-conditioned meta-learning before contrasting our contribution against these papers.
\acrshort{tadam}~\citep{oreshkin2018tadam} uses \gls{film} conditioning to improve the performance of feature extractors for few-shot classification. However, \acrshort{tadam} conditions on one very specific variable: the mean of the class prototypes is fed to a conditioning network, whose output serves as the inferred task context. In contrast, we employ \gls{film} with a variety of task-specific contextual cues and show how this can capture context-sensitive adaptation in humans. From our perspective, \acrshort{tadam} serves as a useful starting point for considering how context-conditioned meta-learning can be adapted to capture human-like behavior.
\cite{lee2019learning} learn task-specific balancing variables that trade off meta-knowledge against the task-specific update. In contrast to our work, however, they do not employ contextual cues.
\cite{baik2019learning} aim to control the influence of prior knowledge for each task and propose a method that performs selective forgetting by applying a task-dependent, layer-wise attenuation to the \gls{maml} initialization. This contrasts with our proposal of utilizing the additional information provided by contextual cues to capture human behavior.
\cite{vuorio2018toward} is similar to our work from an architectural point of view, as it employs a modulation network that produces a task embedding which is used to generate parameters that modulate the task network. The key difference is that while \cite{vuorio2018toward} generates the parameters by identifying the mode of tasks sampled from a multi-modal task distribution, we generate the parameters by utilizing contextual information. Future work could investigate the benefits of generating the parameters by utilizing both the multi-modal distribution and the auxiliary contextual information.
\cite{chen2019adaptive} show that using image tags as auxiliary information helps to learn a better representation for prototypical networks, enabling better generalization to unseen tasks. \cite{pahde2018cross} learn both an image classifier and a text-conditioned image generator as a pre-training step; the generator is then used to provide auxiliary data during the few-shot adaptation stage. Both of these approaches use contextual information at the instance or class level; in contrast, we operate over task-specific contexts, which enables us to model human behavior.
\cite{lee2018gradient} is based on the idea that task-specific learning should require fewer degrees of freedom compared to meta-learning and proposes a meta-learning model that determines a subspace and a corresponding metric that task-specific learners can learn in. This is in contrast with our main idea of contextual adaptation.
\cite{yoon2019tapnet} linearly transform the network output with a task-specific projection; whereas we use contextual information to initialize the meta-learner.
\cite{rakelly2019efficient} learn a policy that adapts to the task at hand by performing inference over a latent context variable on which the policy is conditioned. Here, context is defined as the history of past transitions, which is orthogonal to our setting of using the extra available contextual cues (and not the history of past transitions) to prime learning. Further, they do not investigate priming learning with context variables.
Lastly, the work of \cite{andreas2018learning} uses auxiliary contextual information to constrain adaptation which makes it closer to our proposed method. However, while \cite{andreas2018learning} perform task-specific parameter estimation in a linguistically structured latent space, we condition on arbitrary task information before interaction with a task, therefore combining more flexible adaptation of task-specific models with guidance provided by arbitrary (\textit{i.e.,}~ beyond linguistic) context.
\paragraph{Other uses of context.} In addition to context-conditioned meta-learning, a wide variety of work has studied the utility of contextual information in decision-making. In the supervised setting, the use of descriptions or tags as extra inputs improves fine-grained image classification~\citep{reed2016learning,he2017fine} and zero-shot learning~\citep{norouzi2013zero}. Contextual information has also been used in sequential decision-making in the form of instruction following~\citep{macmahon2006walk,vogel2010learning,branavan2010reading,chen2011learning,artzi2013weakly,kim2013adapting,Andreas15Instructions}, to guide learning of reward functions~\citep{bahdanau2018learning,zou2019reward} and environment models~\citep{narasimhan2018grounding}, or for better exploration~\citep{harrison2018guiding}. While these methods make use of contextual information, they do so in parallel with concept or policy learning and usually do not deal with few-shot settings. This is analogous to the \textsc{concat} baseline used in our experiments, and such methods therefore cannot capture context-specific adaptation in humans. Here, we use contextual information to guide the initialization of task-specific parameters, followed by few-shot adaptation using feedback from the target task; this ordering enforces the use of the task information as a prime for interaction with the target task, similarly to context-sensitive adaptation in humans.
\clearpage
\section{Additional experimental details}
\label{sec:exp_details}
\subsection{Modeling human behavior}
For the cognitive modeling experiment, we report the average of five seeds.
During the learning task, to reproduce the behavioral task of \cite{werchan20158, werchan2016role}, we provided tasks comprising a context $\mathbf{c} \in \{0, 1, 2\}$ and two disjoint batches of stimulus-response pairs $(\mathbf{x}, \mathbf{y}) \in \{0, 1, 2\} \times \{0, 1, 2, 3, 4\}$, where each stimulus-response mapping appeared only within valid contexts. Table \ref{tab:cog} presents the training data sampling procedure in detail.
The hyperparameters are provided below. Further details can be determined by inspecting the attached code that reproduces all of our results (\texttt{code\_cognitive.zip}).
\begin{tabular}{l|ccc}
\toprule
{\sc Cognitive} & {\sc Hyperparameters} \\
\midrule
Gradient Clip Norm & 10.0 \\
Inner Loop Learning Rate & $0.1$ \\
Outer Loop Learning Rate & $0.005$ \\
Number of Meta-training Steps & $100$\\
Number of Inner Adaptation Steps & 1 \\
\bottomrule
\end{tabular}
\subsection{\gls{celeba}}
The hyperparameters for the \gls{celeba} experiments are provided below.
Note that \cite{finn2018probabilistic} hold out entire attributes at meta-test time, while we hold out combinations of attributes; our setup therefore treats the \gls{celeba} attributes similarly to natural language descriptions with no unobserved vocabulary. An interesting next step would be to add a component that enables the context network to extrapolate to out-of-vocabulary items~\citep[\textit{e.g.,}~][]{hice2019}.
Further details can be determined by inspecting the attached code that reproduces all of our results (\texttt{code\_celeba.zip}).
\begin{tabular}{l|ccc}
\toprule
{\sc \gls{celeba}} & {\sc Hyperparameters} \\
\midrule
Gradient Clip Norm & 10.0 \\
Inner Loop Learning Rate & $0.01$ \\
Outer Loop Learning Rate & $0.001$ \\
Number of Meta-training Steps & $10^{4}$\\
Number of Inner Adaptation Steps & 5 \\
Meta-batch Size & 4 \\
\bottomrule
\end{tabular}
\subsection{Reinforcement learning experiments}
For all \gls{rl} experiments, we report the average over three seeds. The hyperparameters for \gls{mujoco} and \gls{metaworld} are provided below.
Further details (and the environment-specific horizon length) can be determined by inspecting the attached code that reproduces all of our results (\texttt{code\_rl.zip}).
\begin{tabular}{l|ccc}
\toprule
{\sc \gls{mujoco}} & {\sc Hyperparameters} \\
\midrule
Clip Parameter & 0.3 \\
Discount ($\gamma$) & 0.99 \\
Lambda ($\lambda$) & 1.0 \\
KL Coeff & 0.0 \\
Learning Rate & $3.0\cdot 10^{-4}$ \\
Tasks per Iteration & 40 \\
Trajectories per Task & 20 \\
Inner Step Size $\alpha$ & 0.1 \\
Inner Adaptation Steps & 1-2 (env-specific) \\
Grad Steps Per \gls{promp} Iter & 3-5 (env-specific) \\
\bottomrule
\end{tabular}
\begin{tabular}{l|ccc}
\toprule
{\sc \gls{metaworld}} & {\sc Hyperparameters} \\
\midrule
Clip Parameter & 1.0 \\
Discount ($\gamma$) & 0.99 \\
Lambda ($\lambda$) & 1.0 \\
KL Coeff & 0.0 \\
Learning Rate & $3.0\cdot 10^{-4}$ \\
Tasks per Iteration & 20 \\
Trajectories per Task & 5 \\
Inner Step Size $\alpha$ & 0.05 \\
Inner Adaptation Steps & 4 \\
Grad Steps Per \gls{promp} Iter & 5 \\
\bottomrule
\end{tabular}
\section{Architecture details}
Table \ref{tab:architecture} provides the architecture details for the different experiments. Note that FC(x, y) denotes a standard fully-connected network with two hidden layers of sizes x and y, Conv([x, y], s, f, n) an $n$-layer convolutional network with f kernels of size [x, y] applied at stride s, and LSTM([x, y], h) an LSTM network with hidden layers of sizes x and y and a hidden state of size h.
\begin{table*}[htb]
\centering
\begin{tabular}{l|ccc}
\toprule
{\sc Meta-train} & {\sc Meta-test} \\
\midrule
10 points with $\mathbf{c} = 0, \mathbf{x} = 0, \mathbf{y} = 0$ & 2 points with $\mathbf{c} = 0, \mathbf{x} = 1, \mathbf{y} = 1$ \\
10 points with $\mathbf{c} = 0, \mathbf{x} = 1, \mathbf{y} = 1$ & 2 points with $\mathbf{c} = 0, \mathbf{x} = 0, \mathbf{y} = 0$ \\
10 points with $\mathbf{c} = 1, \mathbf{x} = 0, \mathbf{y} = 2$ & 2 points with $\mathbf{c} = 1, \mathbf{x} = 1, \mathbf{y} = 3$ \\
10 points with $\mathbf{c} = 1, \mathbf{x} = 1, \mathbf{y} = 3$ & 2 points with $\mathbf{c} = 1, \mathbf{x} = 0, \mathbf{y} = 2$ \\
5 points with $\mathbf{c} = 2, \mathbf{x} = 0, \mathbf{y} = 0$; 5 with $\mathbf{c} = 2, \mathbf{x} = 1, \mathbf{y} = 1$; & 2 points with $\mathbf{c} = 2, \mathbf{x} = 2, \mathbf{y} = 4$ \\
5 points with $\mathbf{c} = 2, \mathbf{x} = 0, \mathbf{y} = 0$; 5 with $\mathbf{c} = 2, \mathbf{x} = 2, \mathbf{y} = 4$; & 2 points with $\mathbf{c} = 2, \mathbf{x} = 1, \mathbf{y} = 1$ \\
5 points with $\mathbf{c} = 2, \mathbf{x} = 1, \mathbf{y} = 1$; 5 with $\mathbf{c} = 2, \mathbf{x} = 2, \mathbf{y} = 4$; & 2 points with $\mathbf{c} = 2, \mathbf{x} = 0, \mathbf{y} = 0$ \\
\bottomrule
\end{tabular}
\caption{Detailed training procedure for the cognitive modeling experiment}
\label{tab:cog}
\end{table*}
\begin{table*}
\centering
\begin{tabular}{l|p{2.5cm}p{2.5cm}p{2.5cm}p{2.7cm}p{2.5cm}}
\toprule
{\sc Dataset} & {Base Network} & {Context Network} & {\sc*-static Input} & {\sc*-concat Input} & {\sc \gls{maml-context} Input} \\
\midrule
\gls{celeba} & Conv([3,3], 2, 32, 4) & FC(40, 40) with \gls{film} conditioning & Constant vector embedded by an LSTM([40,40], 32) & Two-hot vector w/ attribute information & Two-hot vector w/ attribute information embedded by an LSTM([40,40], 40) \\
\hline
\\
\gls{mujoco},\gls{metaworld} & FC(64, 64) & FC(32, 64) & Constant vector & Scalar parameter for \gls{mujoco}, 3D goal position for \gls{metaworld} & Scalar parameter for \gls{mujoco}, 3D goal position for \gls{metaworld}\\
\bottomrule
\end{tabular}
\caption{Architectural details for the experiments. The first two columns correspond to the network architectures for the base and contextual networks, respectively. The last three columns describe the type of contextual input that is fed into the context network for the Static, Concat, and MLTI baselines. Note that for the \gls{maml} and \gls{promp} baselines, there is no contextual input.}
\label{tab:architecture}
\end{table*}
\section{Ethics Statement}
Our research contributes to improving our understanding of cognitive control in humans as well as to the field of meta-learning, which aims to emulate the ability of humans to learn new tasks rapidly. There are many benefits to such contributions, such as the development of automated systems that can quickly adapt and learn to solve a variety of tasks, although the current problem settings are simplistic compared with the everyday variability that humans face. However, in the longer term, progress in adaptable and robust algorithms leads towards automation, which will disrupt the labor structures that many people rely on for employment.
\section{Introduction}
\label{sec:intro}
Much of the literature on the TMD-factorization formalism is based on methods like those of Collins, Soper, and Sterman (CSS)~\cite{Collins:1981uk,Collins:1981uw, Collins:1984kg, Collins:2011qcdbook}
where, traditionally,
applications have been at very high scales.
The formalism involves a factorization with
TMD parton densities and/or
fragmentation functions together with evolution equations and associated properties like universality.
TMD correlation functions have attracted interest, both for their
usefulness in perturbative calculations,
and for their potential to yield information about underlying
non-perturbative QCD structures.
Results with essentially the same or a related
structure are also found in SCET
~\cite{Becher:2010tm,Echevarria:2012pw,Rothstein:2016bsq}.
In this paper, we focus on the CSS formalism
and its updated version in~Ref.~\cite{Collins:2011qcdbook}.
TMD correlation functions are most useful for $\Tsc{q} \ll Q$, where $\Tsc{q}$ is the relevant
transverse momentum and $Q$ is the overall hard scale.
When $\Tsc{q}$ is of order $Q$, the cross section does not factor into TMD correlation functions, but normal collinear
factorization applies. It is, of course, necessary to be able to
analyze cross sections over the whole range of $\Tsc{q}$ including
intermediate transverse momenta. To this end, CSS organized the cross
section into an additive form, $W+Y$, where $W$ is the pure TMD factorization
term and $Y$ is a correction term using collinear factorization. $W$ dominates in the limit
of small $\Tsc{q}/Q$ while $Y$ is a correction for large $\Tsc{q}/Q$. This
was designed with the aim to have a formalism that is valid to leading
power in $m/Q$ uniformly in $\Tsc{q}$; here $m$ is a typical hadronic
mass scale.
However, it has become increasingly clear that the original
CSS $W+Y$ method is not sufficient for modern TMD applications. One reason is that there is a growing number of
lower-$Q$ phenomenological studies focused on the intrinsic
transverse motion related to nonperturbative binding and nucleon structure.
The advantages of the usual $W+Y$
decomposition are clearest when $Q$ is large enough
that there is a broad intermediate range of transverse momentum
characterized by $m \ll \Tsc{q} \ll Q$; that is, there is a range where $\Tsc{q}/Q$ is
sufficiently small that TMD factorization is valid to good accuracy,
while $m/\Tsc{q}$ is also sufficiently small that collinear
factorization is simultaneously valid. However, at lower phenomenologically interesting values of
$Q$, neither of these ratios is necessarily very small.
Some other difficulties will be summarized below. These
particularly concern the ability of the original $W+Y$ method to
properly match collinear factorization for the cross section
integrated over $\T{q}$.
The problems create practical difficulties for studies specifically devoted to extracting
and analyzing non-perturbative transverse momentum dependence. For such applications, the relevant experiments
often involve hard scales of only a few GeV. The phase space of $\Tsc{q}$ has a narrow transition window between
a solidly perturbative transverse momentum region (where $\Tsc{q}\simeq O(Q)$) and a
non-perturbative region (where $\Tsc{q} \simeq O(m)$), making the matching of perturbative and nonperturbative content in the intermediate region rather delicate.
A classic analysis of the issues concerning the matching of the TMD factorization
and collinear factorization was given by Arnold and Kauffman
\cite{Arnold:1990yk}, and more recently in Refs.~\cite{Guzzi:2013aja,Su:2014wpa,Boglione:2014oea,Boer:2015uqa}.
See especially Sec.~2.6 of Ref.~\cite{Guzzi:2013aja} for a recent overview of many of the issues to be discussed in this paper.
Over the past several years, most theoretical attention in TMD physics has been focused on
the details of evolution of the $W$-term and its associated TMD correlation functions.
However, particularly with recent results like~\cite{Su:2014wpa,Boglione:2014oea,Boer:2015uqa},
it is evident that a satisfactory treatment of non-zero $\Tsc{q}/Q$ corrections
and the matching to $\Tsc{q} \gtrsim Q$ is important since it relates various phenomenological analyses to TMD theory. This is especially the case in efforts to
interpret transverse momentum spectra in terms of hadronic structure, where a detailed separation and
identification of large and small $\Tsc{q}/Q$ behavior and its potential interplay is important.
Generally, to get results that are valid over all $\Tsc{q}$ we need to
combine the information given by TMD factorization and by collinear
factorization. TMD factorization is appropriate for $\Tsc{q} \ll Q$;
its accuracy degrades as $\Tsc{q}$ increases and eventually it does
not give even a qualitatively correct account of the cross section.
Collinear factorization is valid in two ways. One is for the cross
section differential in $\Tsc{q}$ with $\Tsc{q} \sim Q$; the accuracy
degrades as $\Tsc{q}$ decreases, and collinear factorization becomes
entirely inapplicable for the differential cross section once
$\Tsc{q}$ is of order $m$ or smaller. But collinear factorization is
also valid for the cross section integrated over $\T{q}$.
In this article, we argue for an enhanced formalism. As already
stated, the $W+Y$ formalism as given by CSS was designed to combine
the best of TMD and collinear factorization at intermediate $\Tsc{q}$.
What was not done was to adjust the formalism to work nicely also for
the cross section integrated over all $\T{q}$. We summarize an
interconnected set of problems as follows:
\begin{itemize}
\item A standard way of presenting the $W$ term, with the solution to
the evolution equations, is as a Fourier transform from a transverse
coordinate $\Tsc{b}$ to transverse momentum. When $\Tsc{b} \to 0$,
the $\Tsc{b}$-space integrand $\tilde{W}(\Tsc{b})$ goes to zero.
(See Appendix~\ref{sec:Wzero}.) Therefore, the integral over all
transverse momentum of the corresponding momentum-space contribution
$W(\Tsc{q})$ is zero. Now, at small $\Tsc{q}$, $W(\Tsc{q})$ is the
dominant TMD-factorized contribution to the cross section, and is
necessarily positive. Therefore, at some larger $\Tsc{q}$, the
$W(\Tsc{q})$ term must become negative. By construction, the $Y$
term compensates to give the physical positive cross section, so
this is not a problem in principle. However, if $W$ becomes
\emph{large} and negative at $\Tsc{q} \sim Q$, the $Y$ term becomes
large and positive, so the formalism involves implementing a
cancellation of two large quantities. This can enormously magnify
the effects of truncation errors in perturbative quantities, since
these have different structures in $W$ and $Y$.
\item In pure parton-model treatments of TMD functions, the transverse
momentum integral of the $W$-term gives the collinear factorization
parton model for the cross section integrated over $\T{q}$. The
previous item shows that, at least within the original CSS approach,
this connection is not merely subject to higher-order perturbative
corrections, but is totally lost.
\item In real QCD, consider the cross section integrated over all
$\T{q}$; it is of the form of factors
of collinear parton densities and/or fragmentation functions at
scale $Q$ convoluted with hard scattering that is expanded in powers
of $\alpha_s(Q)$. The lowest order for the integrated cross section itself is
correctly given by a perturbative expansion of the hard scattering,
with the first term being zeroth order in $\alpha_s(Q)$
(concentrated at $\Tsc{q}=0$). We can try doing this for all
quantities in
\begin{equation}
\label{eq:int.sigma}
\int \diff[2]{\T{q}}
\frac{ \diff{\sigma} }{ \diff[2]{\T{q}} \dots }
=
\int \diff[2]{\T{q}} W
+ \int \diff[2]{\T{q}} Y.
\end{equation}
Since the integral over $W$ is zero,
the integrated cross section is given by the integral over $\T{q}$
of the $Y$ term. But the CSS construction
of the $Y$ term shows
that its lowest term is the same order as for collinear
factorization for the differential cross section, which is first
order in $\alpha_s(Q)$~\cite{Collins:1984kg}.
We thus have a paradox: a mismatch of orders in $\alpha_s(Q)$
between the left and right hand sides of Eq.\ (\ref{eq:int.sigma}).
The real source of the paradox, and an indication of what to do
about it, are described next.
\item The zero value of $\int \diff[2]{\T{q}} W$ is not obtained from
a fixed order perturbative application of collinear factorization to
$\tilde{W}(\Tsc{b},Q)$ at $\Tsc{b}=0$, but from the solution of
evolution equations for $\tilde{W}$, as seen in Eq.\
(\ref{eq:finalevolved}) below. Each order of the perturbative
expansion in powers of $\alpha_s(Q)$ contains up to two logarithms per
loop of $Q\Tsc{b}$. These logarithms are evidently infinite at
$\Tsc{b}=0$, and fixed order perturbative calculations are entirely
inapplicable to $\int \diff[2]{\T{q}} W$ with the original CSS
definition.
Recall that $W$ is an approximation to the cross section only for
$\Tsc{q} \ll Q$. Thus the transverse-coordinate-space quantity
$\tilde{W}(\Tsc{b},Q)$ is important for a physical cross section only
for $\Tsc{b}$ bigger than about $1/Q$. Finite perturbative orders
of the collinear expansion are useful when $\Tsc{b}$ is of order
$1/Q$.
\item Even without the issue of $W(\Tsc{q})$ becoming negative at large
$\Tsc{q}$, there is the issue that it involves, in momentum space, a
convolution of two independent TMD densities. At large $\Tsc{q}$,
these can be computed perturbatively in terms of collinear
parton distribution functions (pdfs) and/or collinear fragmentation functions (ffs).
Power counting indicates that they are roughly of order
$1/\Tsc{q}^2$. Therefore, the basic TMD factorization formula gives
a cross section that has this same power counting, and extends
infinitely far beyond the kinematic limit. The $Y$ term compensates
this in principle, but the different perturbative truncations in $Y$
and $W$ imply that the result can be numerically a bad
approximation.
\end{itemize}
The culprit in each of the above is that the TMD factorization formula
used in $W(\Tsc{q})$ was derived to be a good approximation to the
cross section for $\Tsc{q} \ll Q$, but in the integral over $\T{q}$,
the formula is being used far beyond its domain of applicability.
There is a uniqueness to the particular form of $W(\Tsc{q})$ that
gives rise to its undesirable properties at large $\Tsc{q}$. The
uniqueness arises from the use of a strict leading power expansion
in $\Tsc{q}/Q$ when constructing the TMD factorization formula for
$W$. As an illustration, consider a lowest-order perturbative
expansion that gives in $W$ a factor $\alpha_s \ln(Q/\Tsc{q}) /
\Tsc{q}^2$ at small $\Tsc{q}$, with its characteristic logarithm.
The use of exactly a single power of $\Tsc{q}$ (times logarithms)
entails keeping the same formula at large $\Tsc{q}$, where the
logarithm becomes negative.
The use of a strict leading power in $\Tsc{q}/Q$ is important
because the non-leading powers are much more complicated and often
non-factorizing. This issue is particularly important because, to
leading power, gluons can connect subgraphs in different kinematic
regions. To get factorization, Ward identities are used to extract
these gluons into attachments to Wilson lines in operator
definitions of the correlation functions like TMD pdfs and ffs.
However, the Ward identities apply only in the context of an
approximation that is valid at leading power (or perhaps one power
beyond). The result is the aforementioned uniqueness in the
factorized form. Essentially the same considerations apply in SCET
for essentially the same reasons --- see
Ref.~\cite{Rothstein:2016bsq}.
It therefore becomes quite non-trivial to adjust the TMD
factorization formula to get nicer properties at large $\Tsc{q}$
without violating the derivation of TMD factorization.
Many implementations of TMD factorization calculate TMD functions by effectively resumming logarithms of $\Tsc{b} Q$.
The usefulness of this type of resummation assumes that there is a broad range of $\Tsc{b}$ where $1/Q \ll \Tsc{b} \ll 1/m$. At smaller $Q$ the
window satisfying this condition shrinks and eventually vanishes, so that the advantage of such techniques becomes questionable.
Moreover, errors introduced by including the region where $\Tsc{b} \ll 1/Q$
can start to become a significant fraction of the resummation calculation.
The situation is simpler if one simply works in a leading logarithm
approximation as in the work of Parisi and Petronzio (PP)
\cite{Parisi:1979se}. There an ad hoc modification to impose rough
approximations to the true kinematics is appropriate. But
modifications are much harder to impose in the middle of a full proof
of factorization that is to be applied generally.
Our approach in this paper is to preserve the factorized form of
$\tilde{W}(\Tsc{b})$ in transverse coordinate space, but to modify the
way in which it is used to construct a contribution from $W(\Tsc{q})$ to the cross
section, to try to evade the problems listed above. We must
preserve the property that $W$ gives a good approximation
to the cross section at low transverse momentum, including the
important region where $\Tsc{q}$ is in the non-perturbative region of
order $m$. Naturally, the definition of $Y$ must be correspondingly
modified.
The paper is organized as follows: We provide a general background of the main issues
in Sec.~\ref{sec:principles}, and outline the principles that will guide our matching procedure.
We review the basic logic of the $W+Y$ method in Sec.~\ref{sec:largesmall}, and include some clarifying remarks.
Since an important component of our procedure is that it leaves the treatment of the $W$-term largely unaltered,
we will also need to review the standard factorization and evolution of the $W$-term in the CSS TMD factorization formalism, which we
do in Sec.~\ref{sec:review}. Next, we will explain our modifications,
starting in Sec.~\ref{sec:bstarmod} with a modified treatment of the
standard $\bstarsc$-prescription. This will allow us to construct a
generalized $W$-term. From this we will obtain a correspondingly generalized
$Y$-term in Sec.~\ref{sec:Yterm}. Thus we will have constructed a new $W+Y$ method, but with additional parameters. In Sec.~\ref{sec:together} we
discuss how the principles from Sec.~\ref{sec:principles} constrain parametrizations. In Sec.~\ref{sec:bcfg}, we elaborate on
technical steps needed to calculate in the new $W+Y$ prescription, and in Sec.~\ref{sec:demo} we demonstrate the utility of our treatment by calculating
the $Y$ term with simple parametrizations of collinear quark pdfs and ffs.
We conclude by summarizing our logic and commenting on ways forward in Sec.~\ref{sec:con}.
\section{Guiding Principles}
\label{sec:principles}
The standard $W+Y$ construction relies on the fact that, at very large $Q$,
there is a broad range where $m/\Tsc{q}$ and $\Tsc{q}/Q$ are
both good small expansion parameters.
We suggest the following principles to guide the choice of an
improved formalism:
\begin{enumerate}
\item When the $W$ term is integrated over all $\T{q}$, it should
obey an ordinary collinear factorization property. This implies
that when the scales in the pdfs and ffs are set to $\mu = Q$, the
result should agree with the ordinary factorization calculation
for the integrated cross section to zeroth order in $\alpha_s(Q)$,
thereby matching the parton-model result appropriately.
\item For $\Tsc{q} \gtrsim O(Q)$, the cross section
given by $W+Y$ should appropriately match fixed order collinear perturbation
theory calculations for large transverse momentum.
\item For very large $Q$, the normal $W + Y$ construction should
automatically be recovered for the $m \ll \Tsc{q} \ll Q$ region, to leading power in $Q$.
\item The modified $W$ term should be expressed in terms of the
same coordinate space quantity $\tilde{W}$ as before, in order
that operator definitions of the pdfs and ffs can be used,
together with their evolution equations.
\item The sum $W+Y$ should give a leading power approximation to the cross
section over the whole range of $\Tsc{q}$. Fixed order expansions
of $Y$ in collinear perturbation theory are suitable for
calculating $Y$, while the usual solution of evolution equations
is used for $W$.
\end{enumerate}
We will use these principles to strongly motivate our new constructions of $W$ and $Y$.
We emphasize here that many of the elements of this article have already
been used in the past in various forms. Our purpose in this paper is to synthesize and systematize them.
For example, a detailed discussion of large and small $\Tsc{q}$ matching and the associated perturbation theory errors in intermediate
regions of $\Tsc{q}$ appears in Ref.~\cite{Arnold:1990yk} -- see especially Sections 1.2-1.4 for a clear discussion.
The work of Catani-Trentadue-Turnock-Webber and related
treatments, especially that of Bozzi-Catani-de Florian-Grazzini (BCFG) in~\cite{Bozzi:2005wk},
replaces $\ln (Q^2 \Tsc{b}^2)$ terms in a resummation with $\ln (Q^2 \Tsc{b}^2 + 1)$, thus cutting off the $\Tsc{b} \ll 1/Q$ contribution.
This is similar to work by Parisi and Petronzio~\cite{Parisi:1979se} that used this method to handle the
$\Tsc{b} \ll 1/Q$ region in a leading-log approach.
BCFG also impose constraints on the relationship between integrated and transverse momentum dependent cross sections that are
very similar to our points 1) through 3) above.
Nadolsky, Stump and Yuan (NSY)~\cite{Nadolsky:1999kb} performed a CSS-style analysis of semi-inclusive
deep inelastic scattering (SIDIS), but modified the large $\Tsc{q}$ behavior of their resummed term
by introducing $\Tsc{q}/Q$ corrections to the $x$ and $z$ kinematic variables.
Specifically, NSY modified the $W$-term at larger values of $\Tsc{q}/Q$
to improve the matching with the asymptotic term as order-$\Tsc{q}/Q$ corrections start to become large.
By examining the kinematics of the process, they found that
an improved matching is achieved if one replaces the standard $x$ and $z$ variables in the collinear pdfs and ffs of the $W$ term by\footnote{See the discussion regarding matching in Section VA of Ref.~\cite{Nadolsky:1999kb} and the
comparison between the modified and unmodified treatments in Fig.~9 of Ref.~\cite{Nadolsky:1999kb}.}
\begin{align}
x {}& \to \tilde{x} = x \left( \frac{\Tsc{q}^2 + Q^2}{Q^2} \right) \, , \label{eq:xtilde} \\
z {}& \to \tilde{z}= z \left( \frac{\Tsc{q}^2 + Q^2}{Q^2} \right) \, . \label{eq:ztilde}
\end{align}
In Ref.~\cite[Eq.~(13.75)]{Collins:2011qcdbook}, Collins proposed to impose a direct cutoff on the large $\Tsc{q}$ part of the $W$-term.
Our method follows a very similar approach (see Sec.~\ref{sec:wterm}), with our $\Xi$ function in Eq.~\eqref{eq:Wnew} corresponding to
Collins's $F(\Tsc{q}/Q)$, and our $\TTnew{}{}$ corresponding roughly to Collins's $L_F$.
Likewise, CSS introduced a mass-scale $Q_T^{\rm min} \sim m$ in Ref.~\cite{Collins:1984kg} to regulate the
low $\Tsc{q}$ part of the $Y$-term calculation. The role of $Q_T^{\rm min}$ is analogous to what we will call $\lambda$ in Sec.~\ref{sec:largesmall}.
The replacements in Eqs.~\eqref{eq:xtilde}--\eqref{eq:ztilde} are physically motivated in that they approximate the kinematic corrections
on $x$ and $z$ momentum fractions that begin to be important at larger $\Tsc{q}$. See also Sec.~2.6 of Ref.~\cite{Guzzi:2013aja} for a review of the kinematical rescaling procedure.
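As a minimal numerical sketch of the rescaling above (plain Python, with purely illustrative kinematic values, not from any fit): both momentum fractions are multiplied by $(\Tsc{q}^2 + Q^2)/Q^2$, so the modification is negligible for $\Tsc{q} \ll Q$ and doubles $x$ and $z$ at $\Tsc{q} = Q$.

```python
def nsy_rescale(x, z, qT, Q):
    """NSY kinematic rescaling: x -> x*(qT^2 + Q^2)/Q^2, and likewise for z."""
    factor = (qT**2 + Q**2) / Q**2
    return x * factor, z * factor

# Illustrative values only:
x, z, Q = 0.1, 0.3, 10.0
print(nsy_rescale(x, z, qT=0.5, Q=Q))   # barely changed: qT << Q
print(nsy_rescale(x, z, qT=10.0, Q=Q))  # both doubled: qT = Q
```

This makes explicit why the replacement only affects the $W$-term where $\Tsc{q}/Q$ corrections begin to matter.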
In most implementations of the ResBos Monte Carlo, for both Drell-Yan and SIDIS,
the computational algorithm automatically forces a switch from the $W$-term (there called the ``resummed term'') to a calculation done using purely fixed order perturbative QCD above
some $\Tsc{q}$. In fact, this is useful also for improving the efficiency of computer calculations
since it means that computationally intensive calculations of the $W$-term can be short-circuited above some $\Tsc{q}$
without compromising the accuracy of the calculation. (See Refs.~\cite{Balazs:1997xd,Nadolsky:1999kb}.) For very low $\Tsc{q}$, the ResBos
Monte Carlo switches off the $Y$-term for $\Tsc{q} \lesssim 0.5-1.0$~GeV~\cite{pavelprivate}.
Boer and den Dunnen~\cite{Boer:2014tka,Boer:2015uqa} used a method similar to BCFG, but implemented the transition
to very small $\Tsc{b}$ by using a modified renormalization group scale (called $\mub'$).
This aspect of the Boer-den Dunnen approach is very similar to what we will use in this article.
We suggest that, to maintain context, it will be useful to read the articles listed above concurrently
with this paper.
\section{$W$ and $Y$ Terms}
\label{sec:largesmall}
We start by reviewing the $W + Y$ construction. This will establish notational conventions to be used throughout the
paper in addition to clarifying the logic of the $W + Y$ method. We will also introduce
one of our modifications.
Consider a generic transverse momentum dependent cross section
that depends on a hard scale $Q$ and is differential in a transverse
momentum $q_T$. It may also be differential in other kinematical
variables, but for simplicity we will not show these explicitly.
It could be any cross section for which a TMD
factorization theorem exists. We will use the abbreviated notation
\begin{equation}
\label{eq:firstequation}
\cs{}{} = \frac{\diff{} \sigma}{\diff[2]{\T{q}} \diff{Q} \cdots} \,\, .
\end{equation}
The ellipsis indicates possible dependence on other kinematical
variables like $z$ and $x$, whose exact values are not relevant to our
immediate discussion. Although the logic in this paper is meant to apply generally, explicit expressions
will be written for SIDIS. CSS-style derivations of TMD factorization are
given for SIDIS in Refs.~\cite{Meng:1991da,Meng:1995yn} (see also~\cite[Sec.~13.15]{Collins:2011qcdbook}).
The TMD formalism separates Eq.~\eqref{eq:firstequation} into a sum of
two terms. One term ($W$) describes the small transverse momentum
behavior $\Tsc{q} \ll Q$ and an additive correction term ($Y$)
accounts for behavior at $\Tsc{q} \sim Q$:
\begin{equation}
\cs{}{} = \TT{}{} + \YY{}{} + O\mathopen{}\left( \frac{m}{Q} \right)^c \cs{}{} \, . \label{eq:basic}
\end{equation}
The first term on the right is written in terms of TMD pdfs and/or TMD ffs and is constructed to be an accurate description in the limit of
$\Tsc{q}/Q \ll 1$. It includes all
non-perturbative transverse momentum dependence.
The $Y$-term is described entirely in terms of
\emph{collinear}
factorization. Our aim is to construct $W$ and $Y$ such that
$W+Y$ gives the cross section up to an error that, relative to the
cross section, is of order
a positive ($c>0$) power of $m/Q$, where $m$ is a
hadronic mass scale.
The original CSS definition of $W$ is as given in, for
example, Ref.~\cite[Eq.~(13.71)]{Collins:2011qcdbook} (where it is called
$L$):
\begin{equation}
\label{eq:wterm}
\TT{}{} \equiv \appor{TMD} \cs{}{} \,.
\end{equation}
The $\appor{TMD}$ ``approximator'' is an instruction to replace the object
to its right by an approximation that is designed to be good in the
$\Tsc{q} \ll Q$ limit. That is, it replaces the exact $\cs{}{}$ by the
approximate $\TT{}{}$:
\begin{align}
\appor{TMD} \cs{}{} = \cs{}{}
& + O \mathopen{}\left( \frac{\Tsc{q}}{Q} \right)^a \cs{}{}
\nonumber \\
& + O \mathopen{}\left( \frac{m}{Q} \right)^{a'} \cs{}{}
\, ,
\label{eq:TMDapdef}
\end{align}
where $a, a' >0$.
Another approximator, $\appor{coll}$, handles the
large $\Tsc{q} \sim Q$ region. It replaces $\cs{}{}$ with an
approximation that is good when $\Tsc{q} \sim Q$. That is,
\begin{align}
\appor{coll} \cs{}{} = \cs{}{}
& + O \mathopen{}\left( \frac{m}{\Tsc{q}} \right)^b \cs{}{}
\, , \label{eq:collapdef}
\end{align}
where $b>0$.
Since $\appor{coll}$ is to be applied to the
$\Tsc{q} \sim Q$ region, one only needs collinear factorization
at a fixed order and with a hard scale $\mu \sim Q$.
If $\Tsc{q} \lesssim m$ and $\Tsc{q} \sim Q$ were the only regions of
interest, then the $\appor{TMD}$ and $\appor{coll}$ approximators would be sufficient. One could
simply calculate using fixed order collinear factorization for the
large $\Tsc{q}$-dependence and TMD factorization for small $\Tsc{q}$-dependence.
A reasonable description of the full transverse momentum
dependence would be obtained by simply interpolating between the
two descriptions~\cite{Chay:1991jc,Anselmino:2006rv}.
However, the region between large and small $\Tsc{q}$ needs special
treatment if errors are to be strictly power suppressed point-by-point
in $\Tsc{q}$. The standard method is to construct a sequence of
nested subtractions. The smallest-size region is a neighborhood of
$\Tsc{q} = 0$, where $\appor{TMD}$ gives a very good approximation.
So, one starts by adding and subtracting the $\appor{TMD}$
approximation:
\begin{align} \cs{}{} \, = \, & \appor{TMD} \cs{}{} \nonumber \\
& \;\; + \Bigg[ \cs{}{} - \appor{TMD} \cs{}{} \Bigg] \, .
\label{eq:nextapp}
\end{align}
From Eq.~\eqref{eq:TMDapdef}, the error term in the square brackets is order $( \Tsc{q}/Q )^a$ and is
only unsuppressed at $\Tsc{q} \gg m$.
Therefore, one may apply $\appor{coll}$ and then use a fixed-order
perturbative expansion in collinear factorization:
\begin{align}
\Gamma( & m \lesssim \Tsc{q} \lesssim Q,Q)
\nonumber\\
={}& \appor{TMD} \cs{}{}
+ \appor{coll} \left[ \cs{}{} - \appor{TMD} \cs{}{} \right]
\nonumber\\
&
+ O\mathopen{}\left( \left( \frac{m}{\Tsc{q}} \right)^b \left( \frac{\Tsc{q}}{Q} \right)^a \right) \cs{}{}
\nonumber \\
&
+ O\mathopen{}\left( \left( \frac{m}{\Tsc{q}} \right)^b \left( \frac{m}{Q} \right)^{a'} \right) \cs{}{}
\nonumber \\
={}& \TT{}{} + \appor{coll}\cs{}{} - \appor{coll}\appor{TMD} \cs{}{}
\nonumber \\
& + O\mathopen{}\left( \frac{m}{Q}\right)^{\rm c} \cs{}{}
\, ,
\label{eq:powercounting}
\end{align}
where $c = \min(a,a',b)$. Thus, the cross section is determined
point-by-point in the mid-$\Tsc{q}$ region, up to powers of $m/Q$, by a combination of TMD and
collinear correlation functions.
The CSS construction of $W+Y$ defines $W$ and $Y$ to be the
first and second terms on the second line of Eq.\
(\ref{eq:powercounting}). Their specific definitions of
$\appor{coll}$ and $\appor{TMD}$ allowed Eq.\
(\ref{eq:powercounting}) to work only in the $m
\lesssim \Tsc{q} \lesssim Q$ region, which we emphasize by the
argument on the left side of Eq.~\eqref{eq:powercounting}. The
error estimates in Eq.~\eqref{eq:powercounting} are inapplicable
outside this range, i.e., they must not be applied when $\Tsc{q} \gg
Q$ or $\Tsc{q} \ll m$. This is because they were extracted from the
leading power of expansions in relatively small kinematic variables
$\Tsc{q}/Q$ and $m/\Tsc{q}$
to give Eqs.~(\ref{eq:TMDapdef}) and~(\ref{eq:collapdef}).
The issues are illustrated by Eq.\ (\ref{eq:collapdef}). The
$(m/\Tsc{q})^b$ estimate is obtained from an expansion in powers of
mass with respect to the smallest scale in the collinear
hard-scattering; it is of the order of the first omitted term in the
expansion. But once $\Tsc{q}$ gets much smaller, the error can be
arbitrarily larger than this estimate. As a mathematical example, suppose
\begin{equation}
\Gamma = \frac{1}{ (\Tsc{q}^2 + m^2)^2 }.
\end{equation}
The leading power expansion in $m/\Tsc{q}$ is
\begin{equation}
\appor{coll} \Gamma = \frac{1}{ \Tsc{q}^4 },
\end{equation}
and the error is
\begin{equation}
\Gamma -\appor{coll} \Gamma
= \left( - \frac{2m^2}{ \Tsc{q}^2 } - \frac{m^4}{ \Tsc{q}^4 }
\right) \Gamma.
\end{equation}
For the error estimate when $m \lesssim \Tsc{q}$, we can correctly
take $b=2$:
\begin{equation}
\Gamma -\appor{coll} \Gamma
= O\mathopen{}\left( \frac{m^2}{ \Tsc{q}^2 } \right) \Gamma.
\end{equation}
But when $\Tsc{q} \ll m$, the error grows more strongly, as
$m^4/\Tsc{q}^4$ relative to $\Gamma$.
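These two regimes can be checked numerically. The following plain-Python sketch (with an arbitrary illustrative mass scale) evaluates the exact relative error $(\Gamma - \appor{coll}\Gamma)/\Gamma = -(2m^2/\Tsc{q}^2 + m^4/\Tsc{q}^4)$ in both limits:

```python
def gamma_exact(qT, m):
    """Toy cross section Gamma = 1/(qT^2 + m^2)^2."""
    return 1.0 / (qT**2 + m**2) ** 2

def gamma_coll(qT):
    """Leading-power expansion in m/qT: 1/qT^4."""
    return 1.0 / qT**4

def rel_error(qT, m):
    return (gamma_exact(qT, m) - gamma_coll(qT)) / gamma_exact(qT, m)

m = 0.3  # illustrative hadronic mass scale
print(rel_error(30.0, m))   # m << qT: small, approx -2 m^2/qT^2
print(rel_error(0.03, m))   # qT << m: huge, approx -(m/qT)^4
```

This makes the point concrete: the same formula whose error is a few parts in $10^4$ at large $\Tsc{q}$ is off by four orders of magnitude when $\Tsc{q} \ll m$.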
It is useful to review the precise meaning of notation in the
error estimates, which is as follows: An $O(\Tsc{q}/Q)$ error means
that there exist constant positive real numbers, $\mathcal{C}$ and
$\mathcal{A}$, such that the error is less than $\mathcal{C}
\Tsc{q}/Q$ for $\Tsc{q}/Q < \mathcal{A}$. Analogous statements apply
to $O(m/\Tsc{q})$ and $O(m/Q)$ error estimates. Thus, the error
estimates in Eqs.~\eqref{eq:basic}--\eqref{eq:powercounting} provide
no constraints on the behavior in the $\Tsc{q} \gtrsim Q$ or
$\Tsc{q} \lesssim m$ regions. As shown above, the true errors in
those regions could be much worse than a naive extrapolation of the
powers in Eqs.~\eqref{eq:basic}--\eqref{eq:powercounting} would
suggest.
The above observations do not represent a fundamental breakdown of the
formalism. They merely indicate that some extra care is needed to
construct a formalism valid also for
$\Tsc{q} \lesssim m$ and $\Tsc{q} \gtrsim Q$.
For $\Tsc{q} \lesssim m$, collinear factorization is
certainly not applicable for the differential cross section. But
this region is actually where the $W$-term in
Eq.~\eqref{eq:TMDapdef} has its highest validity. So one simply
must ensure that the would-be $Y$-term
\begin{equation}
\appor{coll} \cs{}{} - \appor{coll} \appor{TMD} \cs{}{}
\end{equation}
is sufficiently suppressed in Eq.~\eqref{eq:powercounting} for
$\Tsc{q} \lesssim m$. Therefore, we will modify the usual
definition of $Y$ by inserting a suppression factor at low
$\Tsc{q}$:
\begin{align}
\label{eq:yterm}
& \YY{}{} \nonumber \\
&{}\equiv \left\{ \appor{coll} \left[ \cs{}{} - \TT{}{} \right] \right\} X(\Tsc{q}/\lambda) \nonumber \\
&{}= \left\{ \appor{coll} \cs{}{} - \appor{coll} \appor{TMD} \cs{}{} \right\} X(\Tsc{q}/\lambda) \, .
\end{align}
The smooth cutoff
function $X(\Tsc{q}/\lambda)$ approaches zero for $\Tsc{q}
\lesssim \lambda$ and unity for $\Tsc{q} \gtrsim \lambda$. It ensures
that the $Y$-term is a correction for $\Tsc{q} \gtrsim m$ only. As
long as $\lambda = O(m)$, any $\lambda$-dependence must be weak.
This is analogous to the introduction of a $Q_T^{\rm min}$ in Ref.~\cite[Eq.~(2.8)]{Collins:1984kg}.
The exact functional form of $X(\Tsc{q}/\lambda)$ is arbitrary, but is most useful in calculations if it sharply
suppresses $\Tsc{q} \ll m$ contributions while not affecting $\Tsc{q} \gtrsim m$. While a step function is acceptable,
we suggest using a slightly smoother function since one expects the transition from perturbative to non-perturbative physics to
be relatively smooth. One possible choice is
\begin{equation}
X(\Tsc{q}/\lambda) = 1 - \exp \left\{ -(\Tsc{q} / \lambda)^{a_X} \right\} \ . \label{eq:Xparam}
\end{equation}
This is what we will use in sample calculations in Sec.~\ref{sec:demo}. A large value for the
power $a_X$ makes the switching function more like a step function.
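The behavior of the suppression factor in Eq.~\eqref{eq:Xparam} is easy to verify numerically. The sketch below uses illustrative parameter values (the choices of $\lambda$ and $a_X$ here are not taken from any fit):

```python
# Sketch of the low-qT suppression factor of Eq. (eq:Xparam):
# X(qT/lambda) = 1 - exp(-(qT/lam)^a_X).
import math

def X(qT, lam, a_X=4):
    return 1.0 - math.exp(-((qT / lam) ** a_X))

# X vanishes for qT << lam and approaches unity for qT >> lam;
# increasing a_X makes the transition more step-like.
```

For example, below the transition point a larger $a_X$ gives stronger suppression, consistent with the step-function limit described above.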
In common terminology, the first term in braces on the second line of Eq.~\eqref{eq:yterm} is
called the ``fixed order'' (FO) contribution, while the second term is
the ``asymptotic'' (AY) contribution. We will
use the notation
\begin{align}
\fixo{}{} & \equiv \appor{coll} \cs{}{} \label{eq:fodef} \\
\as{}{} &\equiv \appor{coll} \appor{TMD} \cs{}{} \label{eq:asydef} \, .
\end{align}
So,
\begin{equation}
\YY{}{} \equiv \left\{ \fixo{}{} - \as{}{} \right\} X(\Tsc{q}/\lambda) \, .
\label{eq:Y_}
\end{equation}
This corresponds to the terminology in, for example, Ref.~\cite{Nadolsky:1999kb}. The term ``fixed order'' indicates
that $\Gamma$ is calculated entirely in collinear factorization, with hard parts computed to low order in perturbation theory and
with both the hard parts and the collinear pdfs and ffs evaluated at the common scale $\mu = Q$.
Now we can extend the power
suppression error estimate in Eq.~\eqref{eq:powercounting} down to
$\Tsc{q} = 0$ to recover Eq.~\eqref{eq:basic}.
Equation~\eqref{eq:powercounting} becomes
\begin{align}
\label{eq:basic2}
\Gamma(\Tsc{q} \lesssim Q,Q) = &\TT{}{} + \YY{}{}\nonumber \\
& + O\mathopen{}\left( \frac{m}{Q}\right)^{\rm c} \cs{}{},
\end{align}
which is Eq.~\eqref{eq:basic}, but restricted to $\Tsc{q} \lesssim Q$.
So far, aside from introducing an explicit $X(\Tsc{q}/\lambda)$, we have only
reviewed the standard $W+Y$ construction. The $\Tsc{q} \lesssim Q$ restriction on
the left of Eq.~\eqref{eq:basic2} should be emphasized. Since we rely
on strict power counting in $\Tsc{q}/Q$ and $m/\Tsc{q}$, the region of $\Tsc{q} \gtrsim Q$ is
not guaranteed to be well-described by the above $W+Y$ construction. We will correct this in
Secs.~\ref{sec:wterm}--\ref{sec:together}
with a modified $W$-term definition.
\section{Review of TMD Factorization and Basic Formulas}
\label{sec:review}
Our proposed modifications to the transition to the $\Tsc{q} / Q \gtrsim 1$ region will leave the
standard treatment of TMD factorization~\cite[Chapters
10,13,14]{Collins:2011qcdbook} in the $\Tsc{q} / Q \ll 1$ region
only slightly modified.\footnote{See also Ref.~\cite{Rogers:2015sqa} for a recent brief overview and large list of references relating to the development of TMD factorization.} In particular,
the operator definitions for transverse-coordinate-space TMD
functions, along with their evolution properties, are exactly the same as in the usual formalism.
This is an important aspect of our suggested modifications, so it is worthwhile to review the basics
of TMD factorization for the low $\Tsc{q}$ region. This section gives a short summary of the most important
formulas, with the organization of notation optimized for discussions in later sections. We will also
refer frequently to the review of TMD evolution in Ref.~\cite[Sec.~II]{Collins:2014jpa}, especially~\cite[Eqs.~(22, 24)]{Collins:2014jpa}.
\subsection{TMD Evolution}
\label{sec:evolution}
The evolution of $\TT{}{}$ follows from generalized renormalization properties of the operator definitions
for TMD pdfs and ffs. To separate perturbative and non-perturbative contributions,
one defines large and small $\Tsc{b}$ through a function $\bstarsc$ that
freezes above some $b_{\rm max}$ and equals $\Tsc{b}$ for small $\Tsc{b}$:
\begin{equation}
\bstarsc(\Tsc{b}) \longrightarrow
\begin{dcases}
\Tsc{b} & \Tsc{b} \ll b_{\rm max} \\
b_{\rm max} & \Tsc{b} \gg b_{\rm max} \, . \label{eq:bdef}
\end{dcases}
\end{equation}
The relevant renormalization group scales are
\begin{equation}
\label{eq:mubdef}
\mub \equiv C_1/\Tsc{b} \, ,\qquad \mubstar \equiv C_1/\bstarsc \, , \qquad \muQ \equiv C_2 Q \, ,
\end{equation}
where $C_1$ and $C_2$ are constants that are chosen to optimize perturbative convergence.
We first solve the evolution equations, to give the following
forms for the $W$-term for SIDIS (neutral-current and neglecting
heavy flavors):
\begin{widetext}
\begin{align}
\TT{}{}
=&{} \sum_{j} H_{j}(\mu_Q,Q)
\int \frac{\diff[2]{\T{b}}}{(2 \pi)^2}
e^{i\T{q}\cdot \T{b} }
\tilde{F}_{j/A}\big( x_A, \T{b} ; Q_0^2, \mu_{Q_0} \bigr)
\,
\tilde{D}_{B/j} \big( z_B, \T{b} ; Q_0^2, \mu_{Q_0} \bigr)
\nonumber\\&
\, \times \exp\left\{
\int_{\mu_{Q_0}}^{\muQ} \frac{ \diff{\mu'} }{ \mu' }
\biggl[ 2 \gamma(\alpha_s(\mu'); 1)
- \ln\frac{Q^2}{ (\mu')^2 } \gamma_K(\alpha_s(\mu'))
\biggr]
+ \tilde{K}(\Tsc{b};\mu_{Q_0})
\ln \left( \frac{ Q^2 }{ Q_0^2 } \right)
\right\}
\nonumber\\
=&{} \sum_{j} H_{j}(\mu_Q,Q)
\int \frac{\diff[2]{\T{b}}}{(2 \pi)^2}
e^{i\T{q}\cdot \T{b} }
\tilde{F}_{j/A}\big( x_A, \T{b} ; Q_0^2, \mu_{Q_0} \bigr)
\,
\tilde{D}_{B/j} \big( z_B, \T{b} ; Q_0^2, \mu_{Q_0} \bigr)
\nonumber\\&
\, \times \exp\left\{
\int_{\mu_{Q_0}}^{\muQ} \frac{ \diff{\mu'} }{ \mu' }
\biggl[ 2 \gamma(\alpha_s(\mu'); 1)
- \ln\frac{Q^2}{ (\mu')^2 } \gamma_K(\alpha_s(\mu'))
\biggr]
\right\} \nonumber\\&
\,\times
\exp \left\{ \left[ \tilde{K}(\Tsc{b};\mubstar) - \int_{\mubstar}^{\mu_{Q_0}} \frac{d\mu'}{\mu'} \gamma_K(\alpha_s(\mu')) \right] \ln \left( \frac{ Q^2 }{ Q_0^2 } \right) \right\}\, .
\label{eq:solnf}
\end{align}
Here $\tilde{F}_{j/A}\big( x_A, \T{b} ; Q_0^2, \mu_{Q_0} \bigr)$,
and $\tilde{D}_{B/j} \big( z_B, \T{b} ; Q_0^2, \mu_{Q_0} \bigr)$ are,
respectively, the TMD pdf and TMD ff evaluated at a reference scale
$Q_0$.
Their operator definitions are given in Eqs.~(13.42,13.106) of
Ref.~\cite{Collins:2011qcdbook}. The exponential factor on the
second line implements the evolution from {$Q_0$ to
$Q$}. There $\tilde{K}(\Tsc{b};\mu)$ is the Collins-Soper (CS)
evolution kernel (see~\cite[Eq.~(6,11,25)]{Collins:2014jpa}), while
$\gamma_K(\alpha_s(\mu))$ and $\gamma(\alpha_s(\mu'); 1)$ are
anomalous dimensions for the CS kernel and a TMD pdf/ff respectively
(see~\cite[Eq.~(7,8,9,10,12)]{Collins:2014jpa}). See also
Refs.~\cite{Rogers:2015sqa,Collins:2012ss} and references therein for
detailed discussions of the evolution equations and their origins.
In the last part of Eq.\ (\ref{eq:solnf}), we have used the
renormalization group to change the $\mu$ argument of $\tilde{K}$ from
$\mu_{Q_0}$ to $\mubstar$. This is in anticipation of later
manipulations, where $\mubstar$ will be a suitable scale for
perturbatively calculated quantities.
We define the $\Tsc{b}$-space version $\tilde{W}$ of the $W$-term by
\begin{equation}
\label{eq:FTdef}
\TT{}{} = \int \frac{\diff[2]{\T{b}}}{(2 \pi)^2} e^{i\T{q}\cdot \T{b} } \, \tilde{W}(\Tsc{b},Q) \, .
\end{equation}
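The normalization convention in Eq.~\eqref{eq:FTdef} can be checked by brute force. The following sketch replaces the physical $\tilde{W}$ by a toy Gaussian (an assumption for illustration only), for which the transform is known in closed form:

```python
# Numerical check of the d^2b/(2 pi)^2 Fourier convention of Eq. (eq:FTdef),
# using the toy input Wtilde(bT) = exp(-bT^2), whose exact transform is
# W(qT) = exp(-qT^2/4) / (4 pi).
import math

def W_numeric(qT, L=8.0, N=201):
    """Brute-force 2D integral, with the x-axis chosen along qT so that
    only the cos(qT*bx) part survives by symmetry."""
    h = 2.0 * L / (N - 1)
    total = 0.0
    for i in range(N):
        bx = -L + i * h
        for j in range(N):
            by = -L + j * h
            total += math.cos(qT * bx) * math.exp(-(bx * bx + by * by))
    return total * h * h / (2.0 * math.pi) ** 2

def W_exact(qT):
    return math.exp(-qT * qT / 4.0) / (4.0 * math.pi)
```

The agreement at several values of $\Tsc{q}$ confirms the $(2\pi)^{-2}$ placement in the measure.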
To economize notation, we will assume there is only one flavor of parton so that we may drop the
sum over $j$ and the $j$
subscript. In detailed calculations, the sum needs to be restored.\footnote{Recall, however, that for scattering off a quark, there is no flavor dependence in the hard scattering until order $\alpha_s^3$.
So flavor independence is likely a good approximation. See the discussion at the beginning of section VIA of Ref.~\cite{Collins:2014jpa}. }
In the limit $\Tsc{b} \ll 1/m$, each TMD correlation function can be expanded in an OPE and expressed
in terms of collinear correlation functions. Then the transverse coordinate dependence is itself perturbatively generated.
Let us define a notation to describe this limit. First, substitute
$\Tsc{b} \to \bstarsc$ in Eq.~\eqref{eq:solnf} to regulate the
$\Tsc{b} \gtrsim 1/m$ region. Second, expand the result in an OPE and
drop $O(\Tsc{b} m)$ corrections. Finally, we replace $Q_0$ and
$\mu_{Q_0}$ by $\mubstar$, so that perturbative calculations have no
large logarithms.
We call
the result $\tilde{W}^{\rm OPE}(\bstarsc(\Tsc{b}),Q)$:
\begin{align}
\tilde{W}^{\rm OPE}(\bstarsc(\Tsc{b}),Q)
\equiv{} & H(\mu_{Q},Q) \sum_{j' i'} \int_{x_A}^1
\frac{d \hat{x}}{\hat{x}} \tilde{C}^{\rm pdf}_{j/{j'}}(x_A/\hat{x},\bstarsc(\Tsc{b});\mubstar^2,\mubstar,\alpha_s(\mubstar)) f_{j'/A}(\hat{x};\mubstar) \times \nonumber \\
& \times \int_{z_B}^1 \frac{d \hat{z}}{\hat{z}^3} \tilde{C}^{\rm ff}_{i'/{j}}(z_B/\hat{z},\bstarsc(\Tsc{b});\mubstar^2,\mubstar,\alpha_s(\mubstar)) d_{B/i'}(\hat{z};\mubstar) \times \nonumber \\
& \times \exp \left\{ \ln \frac{Q^2}{\mubstar^2} \tilde{K}(\bstarsc(\Tsc{b});\mubstar) +
\int_{\mubstar}^{\mu_{Q}} \frac{d \mu^\prime}{\mu^\prime} \left[ 2 \gamma(\alpha_s(\mu^\prime);1)
- \ln \frac{Q^2 }{{\mu^\prime}^2} \gamma_K(\alpha_s(\mu^\prime)) \right]\right\} \, . \label{eq:Tcoll}
\end{align}
The functions $f_{{j'}/A}(x;\mu)$ and
$d_{B/{j'}}(z;\mu)$ are the ordinary collinear pdf and ff.
Equation~\eqref{eq:Tcoll} is the standard result for the small $\Tsc{b}$ limit and corresponds to Eq.~(22) of Ref.~\cite{Collins:2014jpa}, but without
the non-perturbative exponential factors.
Thus,
\begin{equation}
\tilde{W}(\Tsc{b},Q) = \tilde{W}^{\rm OPE}(\bstarsc(\Tsc{b}),Q) + O\mathopen{}\left( ( \Tsc{b} m )^p \right) \, \label{eq:smallblimit}
\end{equation}
with $p > 0$.
\subsection{Separation of Large and Small $\Tsc{b}$}
\label{sec:organization}
For Eq.~\eqref{eq:bdef}, a common functional form is~\cite{Collins:1981va}:
\begin{equation}
\label{eq:bstardeff}
\bstarsc(\Tsc{b}) \equiv \sqrt{\frac{\Tsc{b}^2}{1 + \Tsc{b}^2 / b_{\rm max}^2}} \, .
\end{equation}
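The limiting behavior in Eq.~\eqref{eq:bdef} can be confirmed directly for the functional form of Eq.~\eqref{eq:bstardeff}. A minimal sketch (the value of $b_{\rm max}$ here is purely illustrative):

```python
# Sketch of the bstar prescription, Eq. (eq:bstardeff):
# bstar(bT) = sqrt(bT^2 / (1 + bT^2/bmax^2)).
import math

def bstar(bT, bmax=0.5):
    return math.sqrt(bT**2 / (1.0 + bT**2 / bmax**2))

# bstar ~ bT for bT << bmax, saturates at bmax for bT >> bmax,
# and is always strictly below bT for bT > 0.
```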
The standard steps for separating large and small $\Tsc{b}$ are to first write a ratio,
\begin{equation}
\label{eq:gjH.def0}
e^{-g_{A}(x_A,\Tsc{b};b_{\rm max})-g_{B}(z_B,\Tsc{b};b_{\rm max})}
\equiv \frac{ \tilde{W}(\Tsc{b},Q_0) }
{ \tilde{W}^{\rm OPE}(\bstarsc(\Tsc{b}),Q_0) } \, .
\end{equation}
The ratio on the right side \emph{defines} the exponential functions on the left according to some reference scale $Q_0$.
The $g$-functions, therefore, account for all the error terms on the right side of Eq.~\eqref{eq:smallblimit} (at some $Q_0$).\footnote{It is essentially just convention that the
$g$-functions appear in an exponent.}
Next, one notices that the CS evolution is identical for the numerator and denominator, apart
from the fact that the evolution kernel is evaluated at $\Tsc{b}$ in the former and $\bstarsc(\Tsc{b})$ in the latter.
Thus, one may re-express the right side of Eq.~\eqref{eq:gjH.def0} in terms of $\tilde{W}$ at an arbitrary $Q$ in a very simple form by applying CS evolution
to the numerator and denominator separately and canceling out many common evolution factors. The result is
\begin{equation}
\label{eq:gjH.def}
e^{-g_{A}(x_A,\Tsc{b};b_{\rm max})-g_{B}(z_B,\Tsc{b};b_{\rm max})}
= \frac{ \tilde{W}(\Tsc{b},Q) }
{ \tilde{W}^{\rm OPE}(\bstarsc(\Tsc{b}),Q) }
e^{2 g_K(\Tsc{b};b_{\rm max}) \ln(Q/Q_0)} \, .
\end{equation}
Here, $g_K(\Tsc{b};b_{\rm max})$
is the difference between the CS
evolution kernels evaluated at $\Tsc{b}$ and $\bstarsc(\Tsc{b})$:
\begin{equation}
\label{eq:gK.def}
g_K(\Tsc{b};b_{\rm max}) \equiv -\tilde{K}(\Tsc{b};\mu)+\tilde{K}(\bstarsc(\Tsc{b});\mu) \, .
\end{equation}
Now the kernel $\tilde{K}(\Tsc{b};\mu)$ is very strongly
universal; it is independent not just of the process, but also of
scale, polarization, $x$, $z$, and flavors. The
``non-perturbative'' function $g_K(\Tsc{b};b_{\rm max})$, defined
by Eq.\ (\ref{eq:gK.def}), inherits the same strong universality properties as
$\tilde{K}(\Tsc{b};\mu)$.
Equation~\eqref{eq:gjH.def} allows us to write
\begin{equation}
\TT{}{}
={}
\int \frac{\diff[2]{\T{b}}}{(2 \pi)^2}
e^{i\T{q}\cdot \T{b} } \tilde{W}^{\rm OPE}(\bstarsc(\Tsc{b}),Q) \tilde{W}_{\rm NP}(\Tsc{b},Q;b_{\rm max}) \label{eq:solngb} \, ,
\end{equation}
where $\tilde{W}_{\rm NP}(\Tsc{b},Q;b_{\rm max})$ is the combination
of all non-perturbative exponential functions in Eq.~\eqref{eq:gjH.def},
\begin{align}
\tilde{W}_{\rm NP}(\Tsc{b},Q;b_{\rm max})= e^{-g_{A}(x_A,\Tsc{b};b_{\rm max})-g_{B}(z_B,\Tsc{b};b_{\rm max})}\,
e^{-2 g_K(\Tsc{b};b_{\rm max}) \ln(Q/Q_0)}\, . \label{eq:npparts}
\end{align}
$\tilde{W}_{\rm NP}(\Tsc{b},Q;b_{\rm max})$ is a function to be
parameterized and fit to data, or to be determined by
appealing to non-perturbative methods.\footnote{To call $g_A$,
$g_B$, and $g_K$ functions ``non-perturbative'' is somewhat
of a misnomer. The definition of $\tilde{W}^{\rm NP}$ is indeed
such that it does include all the strongly non-perturbative
contributions. But if $b_{\rm max}$ is conservatively small,
$\tilde{W}^{\rm NP}$ also includes contributions, at moderate
$\Tsc{b}$, that could be estimated perturbatively.}
$\tilde{W}^{\rm OPE}(\bstarsc(\Tsc{b}),Q)$ is calculable in collinear
factorization in terms of collinear pdfs and ffs and allows the use of low order perturbation
theory for perturbatively calculable parts. It is exactly the original
definition of $\tilde{W}$, but evaluated at $\bstarsc(\Tsc{b})$ instead of $\Tsc{b}$.
The exponential factors in Eq.~\eqref{eq:npparts} account for the non-perturbative
transverse coordinate dependence. Notice that by construction
\begin{equation}
\frac{\diff{} }{\diff{b_{\rm max}}}
\left[
\tilde{W}^{\rm OPE}(\bstarsc(\Tsc{b}),Q)
\tilde{W}_{\rm NP}(\Tsc{b},Q;b_{\rm max})
\right]
= 0
\, .
\end{equation}
Substituting Eqs.~(\ref{eq:Tcoll}) and (\ref{eq:npparts}) into Eq.~\eqref{eq:solngb} produces
the most familiar representation of the evolved $\tilde{W}(\Tsc{b},Q)$:
\begin{eqnarray}
\tilde{W}(\Tsc{b},Q)
& = & H(\mu_{Q},Q) \sum_{j' i'} \int_{x_A}^1
\frac{d \hat{x}}{\hat{x}} \tilde{C}^{\rm pdf}_{j/{j'}}(x_A/\hat{x},\bstarsc(\Tsc{b});\mubstar^2,\mubstar,\alpha_s(\mubstar)) f_{j'/A}(\hat{x};\mubstar) \times \nonumber \\
& & \times \int_{z_B}^1 \frac{d \hat{z}}{\hat{z}^3} \tilde{C}^{\rm ff}_{i'/{j}}(z_B/\hat{z},\bstarsc(\Tsc{b});\mubstar^2,\mubstar,\alpha_s(\mubstar)) d_{B/i'}(\hat{z};\mubstar) \times \nonumber \\
& \times & \exp \left\{ \ln \frac{Q^2}{\mubstar^2} \tilde{K}(\bstarsc(\Tsc{b});\mubstar) +
\int_{\mubstar}^{\mu_{Q}} \frac{d \mu^\prime}{\mu^\prime} \left[ 2 \gamma(\alpha_s(\mu^\prime);1)
- \ln \frac{Q^2 }{{\mu^\prime}^2} \gamma_K(\alpha_s(\mu^\prime)) \right]\right\} \nonumber \\
& \times & \exp\left\{ -g_{A}(x_A,\Tsc{b};b_{\rm max})-g_{B}(z_B,\Tsc{b};b_{\rm max})
-2 g_K(\Tsc{b};b_{\rm max}) \ln \left( \frac{Q}{Q_0} \right) \right\} \, .
\label{eq:finalevolved}
\end{eqnarray}
This now includes all the necessary non-perturbative functions and corresponds to Eq.~(22) of Ref.~\cite{Collins:2014jpa}. In the case that
non-perturbative functions are dropped, the $W$-term matches Eq.~(1.1) of Ref.~\cite{Collins:1984kg}.
With the method of Eqs.~\eqref{eq:bstardeff}--\eqref{eq:finalevolved}, the relationship between $\tilde{W}^{\rm OPE}(\bstarsc(\Tsc{b}),Q)$ and $ \tilde{W}_{\rm NP}(\Tsc{b},Q;b_{\rm max})$ and
the exact definition of $\tilde{W}$ from the factorization derivation is kept explicit.
Equation~\eqref{eq:gjH.def} is exact because the evolution is the same for the numerator and denominator. Therefore, all $O\mathopen{}\left( ( \Tsc{b} m )^p \right)$ corrections in
Eq.~\eqref{eq:smallblimit} are accounted for automatically in the definition of the non-perturbative parts in Eq.~\eqref{eq:npparts}. The only errors in the relationship
between the $W$-term and the physical cross section are the overall $m/Q, \, \Tsc{q}/Q$-suppressed errors from the factorization derivation.
This section has been a compressed review of steps already reviewed recently in Sec.~2.B.III of Ref.~\cite{Collins:2014jpa}. We refer the
reader to this and references therein for more details.
\end{widetext}
\section{Modified $\bstarsc$-prescription and $W$-Term}
\label{sec:bstarmod}
\label{sec:wterm}
Next, we modify the definition of $W$. This is to provide a
convenient solution to the problem that with the definitions given
so far, the integral over all $\T{q}$ of $W(\Tsc{q})$ is zero, because
$\tilde{W}(\Tsc{b})$ is zero at $\Tsc{b}=0$ (see App.\ \ref{sec:Wzero}).
It would be preferable for the integral to have a normal collinear
expansion in terms of pdfs and ffs at scale $\muQ$; the lowest order
term then reproduces the lowest order collinear factorization result
for the integrated cross section. At the same time, we wish to
preserve the results for the $\T{b}$-space quantity
$\tilde{W}(\Tsc{b})$, since these embody the derived factorization
and evolution properties.
Most importantly, the modified $W$ term must still approximate the
cross section at low $\Tsc{q}$ to the same accuracy as in Eq.\
(\ref{eq:TMDapdef}).
We achieve the modified $W$ in two stages.
The first is to modify the Fourier transform in Eq.\
(\ref{eq:FTdef}) to read
\begin{equation}
\label{eq:FTdef1}
\TTa{}{}
=
\int \frac{\diff[2]{\T{b}}}{(2 \pi)^2}
e^{i\T{q}\cdot \T{b} } \, \tilde{W}(\bone(\Tsc{b}),Q) \, .
\end{equation}
where
\begin{equation}
\bone(\Tsc{b}) = \sqrt{ \Tsc{b}^2 + b_0^2/(C_5Q)^2 } \, .
\label{eq:bcut}
\end{equation}
That is, $\tilde{W}(\Tsc{b},Q)$ is replaced by
$\tilde{W}(\bone(\Tsc{b}),Q)$.
The function $\bone(\Tsc{b})$ is arranged to agree with $\Tsc{b}$
when $\Tsc{b} \gg 1/Q$, but to be of order $1/Q$ when $\Tsc{b}=0$,
thereby providing a cutoff at small $\Tsc{b}$.
Then, when (\ref{eq:FTdef1}) is integrated over $\T{q}$, we
get $\tilde{W}(b_0/(C_5Q),Q)$, instead of the previous value
$\tilde{W}(0,Q)=0$. We have included an explicit numerical factor of $b_0 \equiv 2
\exp(-\gamma_E)$ since this tends to lead to simpler formulas later on.
We have chosen the value of $\bone(0)$ to be
proportional to $1/Q$, so that, from Eq.\ (\ref{eq:finalevolved}),
$\tilde{W}(b_0/(C_5Q),Q)$ has a normal collinear factorization
property. The numerical constant $C_5$ fixes
the exact proportionality between $\bone(0)$ and $1/Q$.
But at the same time (\ref{eq:FTdef1}) still gives an approximation to the cross section of
the appropriate accuracy. This is because, when
$\Tsc{q} \ll Q$, the dominant range of $\Tsc{b}$ is much larger than
$1/Q$, and so the modification in (\ref{eq:FTdef1}) only gives a
power-suppressed contribution. Of course, at large $\Tsc{q}$, there
are more substantial changes. But then we approach the domain of
validity of collinear factorization, and so the accuracy of the $W+Y$ form
is preserved provided that, in the definition \eqref{eq:yterm} of $Y$,
we replace $\TT{}{}$ by $\TTa{}{}$.
Note that the integrand in (\ref{eq:FTdef1}) is non-singular at
$\Tsc{b}=0$, unlike (\ref{eq:FTdef}). Thus the large $\Tsc{q}$
behavior is exponentially damped. Even so, the function still extends
to arbitrarily large $\Tsc{q}$.
So the second and final stage of modification for $W$ is to make an
explicit cutoff at large $\Tsc{q}$, to give:
\begin{multline}
\label{eq:Wnew}
\TTnew{}{}
\\
\equiv
\Xi\mathopen{}\left(\frac{\Tsc{q}}{Q},\eta\right)
\int \frac{\diff[2]{\T{b}}}{(2 \pi)^2}
e^{i\T{q}\cdot \T{b} } \tilde{W}(\bone(\Tsc{b}),Q) \, .
\end{multline}
Here $\Xi\mathopen{}\left(\Tsc{q}/Q,\eta\right)$ is a cutoff function that we
introduce to ensure that $\TTnew{}{}$ vanishes for $\Tsc{q} \gtrsim
Q$, and $\eta$ is a parameter that controls exactly where the suppression
of large $\Tsc{q}$ begins. $\Xi\mathopen{}\left(\Tsc{q}/Q,\eta\right)$ should
approach unity when $\Tsc{q}\ll Q$ and should vanish for $\Tsc{q}
\gtrsim Q$. This preserves the required approximation property of
$\TTnew{}{}$ at small $\Tsc{q}$. At the same time, since the changes
are dominantly at large $\Tsc{q}$, the integral over all $\T{q}$ still
has a normal collinear expansion, as we will make more explicit below.
A simple $\Theta(Q - \Tsc{q})$ step function is acceptable for
$\Xi$. When we combine $\TTnew{}{}$ with a $Y$-term in
Secs.~\ref{sec:Yterm}--\ref{sec:together} we will introduce methods to
minimize sensitivity to the exact form of
$\Xi\mathopen{}\left(\Tsc{q}/Q,\eta\right)$. However, a smoother function is
preferred since the domain of validity of the $W$-term approximation
does not end at a sharp point in $\Tsc{q}$, and thus a smooth function
characterizes general physical expectations. A reasonable choice is
\begin{equation}
\Xi\mathopen{}\left(\frac{\Tsc{q}}{Q},\eta\right) = \exp \left[ -\left( \frac{\Tsc{q}}{ \eta Q} \right)^{a_\Xi} \right] \, , \label{eq:Xiparam}
\end{equation}
with $a_\Xi > 2$.
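The required limits of Eq.~\eqref{eq:Xiparam} can be verified directly. A brief sketch (the values of $\eta$ and $a_\Xi$ below are illustrative choices, not recommendations from the text):

```python
# Sketch of the large-qT cutoff Xi(qT/Q, eta) of Eq. (eq:Xiparam):
# Xi = exp(-(qT/(eta*Q))^a_Xi).
import math

def Xi(qT, Q, eta=0.3, a_Xi=4):
    return math.exp(-((qT / (eta * Q)) ** a_Xi))

# Xi ~ 1 for qT << eta*Q, falls through exp(-1) at qT = eta*Q,
# and vanishes rapidly for qT >> eta*Q.
```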
The only differences between the old and new $W$-term are: i) the use
of $\bone(\Tsc{b})$ rather than $\Tsc{b}$ in $\tilde{W}$, and ii) the
multiplication by $\Xi\mathopen{}\left(\Tsc{q}/Q,\eta\right)$. (The second
modification was proposed by Collins in
Ref.~\cite[Eq.~(13.75)]{Collins:2011qcdbook}. There $\Xi$ is called
$F(\Tsc{q}/Q)$.) Equation~\eqref{eq:Wnew} matches the standard
definition in the limit that $C_5$ and $\eta$ approach infinity.
Finally, we will present a fully optimized formula for $\TTnew{}{}$ corresponding
to the one for the original $\TT{}{}$ in Eq.\ (\ref{eq:finalevolved}).
But first it will be convenient to construct some auxiliary results.
Naturally, $\bstarsc$ is to be replaced by
\begin{equation}
\bstarsc(\bone(\Tsc{b}))
= \sqrt{ \frac{ \Tsc{b}^2 + b_0^2/(C_5^2Q^2) }
{ 1 + \Tsc{b}^2/b_{\rm max}^2 + b_0^2/(C_5^2Q^2b_{\rm max}^2) }
} \, .
\end{equation}
Also we define
\begin{equation}
b_{\rm min} \equiv \bstarsc(\bone(0))
= \frac{b_0}{C_5Q} \sqrt{\frac{1}{1 + b_0^2/(C_5^2Q^2b_{\rm max}^2)}} \, .
\end{equation}
Then, for large enough $Q$ and $b_{\rm max}$
\begin{equation}
b_{\rm min} \approx \frac{b_0}{C_5Q} \, .
\end{equation}
Thus, $b_{\rm min}$ decreases like $1/Q$, in contrast to $b_{\rm max}$ which remains fixed.
Note also that
\begin{equation}
\bstarsc(\bone(\Tsc{b})) \longrightarrow
\begin{dcases}
b_{\rm min} & \Tsc{b} \ll b_{\rm min} \\
\Tsc{b} & b_{\rm min} \ll \Tsc{b} \ll b_{\rm max} \\
b_{\rm max} & \Tsc{b} \gg b_{\rm max} \, . \label{eq:bdef2}
\end{dcases}
\end{equation}
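The three regimes in Eq.~\eqref{eq:bdef2}, and the approximation $b_{\rm min} \approx b_0/(C_5 Q)$, can be checked numerically. In the sketch below, only $b_0 = 2e^{-\gamma_E}$ is fixed by the text; the values of $Q$, $C_5$, and $b_{\rm max}$ are illustrative:

```python
# Sketch of b1(bT), Eq. (eq:bcut), composed with bstar, to exhibit the
# three regimes of Eq. (eq:bdef2) and the small-bT freeze-out at bmin.
import math

B0 = 2.0 * math.exp(-0.5772156649015329)  # b0 = 2 exp(-gamma_E) ~ 1.1229

def b1(bT, Q, C5=1.0):
    return math.sqrt(bT**2 + (B0 / (C5 * Q)) ** 2)

def bstar(bT, bmax=0.5):
    return math.sqrt(bT**2 / (1.0 + bT**2 / bmax**2))

def bmin(Q, C5=1.0, bmax=0.5):
    return bstar(b1(0.0, Q, C5), bmax)

# bstar(b1(bT)) freezes at bmin ~ b0/(C5*Q) as bT -> 0, tracks bT in the
# intermediate region bmin << bT << bmax, and saturates at bmax at large bT.
```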
For $\Tsc{b} \ll 1/Q$,
$\bstarsc(\bone(\Tsc{b})) \approx \bstarsc(\Tsc{b})$.
Instead of $\mubstar$, we will ultimately use the scale
\begin{equation}
\mu_{b_{1*}} \equiv \frac{C_1}{\bstarsc(\bone(\Tsc{b}))} \,
\end{equation}
to implement renormalization group improvement in TMD correlation functions.
There is a maximum cutoff on the renormalization scale equal to
\begin{equation}
\mu_c
\equiv \lim_{\Tsc{b} \to 0} \mu_{b_{1*}}
= \frac{C_1C_5 Q}{b_0} \sqrt{1 + \frac{b_0^2}{C_5^2b_{\rm max}^2 Q^2}}
\approx \frac{C_1C_5Q}{b_0}
\, . \label{eq:mucut}
\end{equation}
The approximation sign corresponds to the limit of large $Qb_{\rm max}$.
Note that
\begin{equation}
b_{\rm min} \mu_c = C_1 \, .
\end{equation}
\begin{widetext}
The steps for finding a useful formula for the evolved
$\TTnew{}{}$ are as follows.
Equation~\eqref{eq:solngb} becomes
\begin{equation}
\TTnew{}{} = \Xi\mathopen{}\left(\frac{\Tsc{q}}{Q},\eta\right) \int \frac{\diff[2]{\T{b}}}{(2 \pi)^2}
e^{i\T{q}\cdot \T{b} } \tilde{W}_{\rm NP}(\bone(\Tsc{b}),Q;b_{\rm max}) \tilde{W}^{\rm OPE}(\bstarsc(\bone(\Tsc{b})),Q) \, . \label{eq:evolWL}
\end{equation}
Now the definition of $\tilde{W}(\Tsc{b},Q)$ is unchanged,
and only the $\Tsc{b} \to \bone(\Tsc{b})$ replacement is new.
Therefore instead of
Eq.~\eqref{eq:finalevolved} we simply need
\begin{eqnarray}
\tilde{W}(\bone(\Tsc{b}),Q)
& = & H(\mu_{Q},Q) \sum_{j' i'} \int_{x_A}^1
\frac{d \hat{x}}{\hat{x}} \tilde{C}^{\rm pdf}_{j/{j'}}(x_A/\hat{x},\bstarsc(\bone(\Tsc{b}));\mu_{b_{1*}}^2,\mu_{b_{1*}},\alpha_s(\mu_{b_{1*}})) f_{j'/A}(\hat{x};\mu_{b_{1*}}) \times \nonumber \\
& & \times \int_{z_B}^1 \frac{d \hat{z}}{\hat{z}^3} \tilde{C}^{\rm ff}_{i'/{j}}(z_B/\hat{z},\bstarsc(\bone(\Tsc{b}));\mu_{b_{1*}}^2,\mu_{b_{1*}},\alpha_s(\mu_{b_{1*}})) d_{B/i'}(\hat{z};\mu_{b_{1*}}) \times \nonumber \\
& \times & \exp \left\{ \ln \frac{Q^2}{\mu_{b_{1*}}^2} \tilde{K}(\bstarsc(\bone(\Tsc{b}));\mu_{b_{1*}}) +
\int_{\mu_{b_{1*}}}^{\mu_{Q}} \frac{d \mu^\prime}{\mu^\prime} \left[ 2 \gamma(\alpha_s(\mu^\prime);1)
- \ln \frac{Q^2 }{{\mu^\prime}^2} \gamma_K(\alpha_s(\mu^\prime)) \right]\right\} \nonumber \\
& \times & \exp\left\{ -g_{A}(x_A,\bone(\Tsc{b});b_{\rm max})-g_{B}(z_B,\bone(\Tsc{b});b_{\rm max})
-2 g_K(\bone(\Tsc{b});b_{\rm max}) \ln \left( \frac{Q}{Q_0} \right) \right\} \, . \label{eq:finalevolvedb}
\end{eqnarray}
This is the same as Eq.~\eqref{eq:finalevolved} except that $\bstarsc(\bone(\Tsc{b}))$ and $\mu_{b_{1*}} = C_1 / \bstarsc(\bone(\Tsc{b}))$ are
used instead of $\bstarsc(\Tsc{b})$ and $\mubstar = C_1 / \bstarsc(\Tsc{b})$.
Note that $g_K(\bone(\Tsc{b});b_{\rm max})$ depends
on $Q$ through $\bone$, albeit only for $\Tsc{b} \lesssim 1/Q$.
For $\Tsc{b} \gg 1/Q$, $g_K(\bone(\Tsc{b});b_{\rm max}) \to g_K(\Tsc{b};b_{\rm max})$.
Also, $g_K(\bone(\Tsc{b});b_{\rm max})$ does not
vanish exactly as $\Tsc{b} \to 0$ but instead approaches a power of $1/Q$.
Up to this point, we have introduced two new parameters, $\eta$ and
$C_5$, in the treatment of the $W$-term.
\section{Modified $Y$-Term}
\label{sec:Yterm}
Now we can construct a $Y$-term from nearly identical steps to those of Sec.~\ref{sec:largesmall}.
Recall that the TMD approximator, $\appor{TMD}$,
replaces the cross section by an approximation that is good in the
$\Tsc{q} / Q \ll 1$ limit -- see Eq.~\eqref{eq:wterm}.
The $\appor{TMD}$ from Ref.~\cite{Collins:2011qcdbook} replaces $\cs{}{}$ by the definition of
$W(\Tsc{q},Q)$ that follows most directly from the derivation of TMD
factorization. However, any approximator that is good when
$\Tsc{q} \ll Q$ is equally valid here.
Therefore, we write
\begin{equation}
\label{eq:wtermmod2}
\TTnew{}{} \equiv \appor{TMD}^{\rm New} \cs{}{} \,.
\end{equation}
Here $\appor{TMD}^{\rm New}$ applies the same approximations as
$\appor{TMD}$, but with the use of
$\Xi\mathopen{}\left(\Tsc{q}/Q,\eta\right) $ and $\bstarsc(\bone(\Tsc{b}))$ as
in Eq.\ (\ref{eq:Wnew}).
Since the changes only affect the region
$\Tsc{q} \gtrsim Q$,
power counting for small $\Tsc{q}$ proceeds in exactly the same way as in
Sec.~\ref{sec:largesmall}:
\begin{equation}
\appor{TMD}^{\rm New} \cs{}{} = \cs{}{}
+ O\mathopen{}\left( \frac{\Tsc{q}}{Q}\right)^a \cs{}{}
+ O\mathopen{}\left( \frac{m}{Q} \right)^{a'} \cs{}{}
\,.
\label{eq:TMDapdef2}
\end{equation}
The large $\Tsc{q} \sim Q$ region is dealt with using the same $\appor{coll}$ approximator as in Sec.~\ref{sec:largesmall}:
\begin{equation}
\appor{coll} \cs{}{} = \cs{}{}
+ O\mathopen{}\left( \frac{m}{\Tsc{q}} \right)^b \cs{}{}
\, .
\label{eq:collapdef2}
\end{equation}
Continuing the usual steps, a $Y$-term is constructed by adding and subtracting $\TTnew{}{}$:
\begin{equation}
\cs{}{} \, = \appor{TMD}^{\rm New} \cs{}{} \
+ \Bigg[ \cs{}{} - \appor{TMD}^{\rm New} \cs{}{} \Bigg]
\, .
\label{eq:nextapp2}
\end{equation}
The term in brackets is
only unsuppressed for large $\Tsc{q}$, so we apply to it the
large $\Tsc{q}$ approximator, $\appor{coll}$,
and use collinear factorization:
\begin{align}
\Gamma(m \lesssim \Tsc{q},Q) ={}&
\appor{TMD}^{\rm New} \cs{}{} \
+ \appor{coll}
\left[ \cs{}{} - \appor{TMD}^{\rm New} \cs{}{} \right]
\nonumber \\ &
+ O\mathopen{}\left( \left( \frac{m}{\Tsc{q}} \right)^b \left( \frac{\Tsc{q}}{Q} \right)^a \right) \cs{}{}
+ O\mathopen{}\left( \left( \frac{m}{\Tsc{q}} \right)^b \left( \frac{m}{Q} \right)^{a'} \right) \cs{}{}
\nonumber \\
={}& \TTnew{}{}
+ \appor{coll}\cs{}{} - \appor{coll}\appor{TMD}^{\rm New}
\cs{}{}
+ O\mathopen{}\left( \frac{m}{Q}\right)^{\rm c} \cs{}{}
\, ,
\label{eq:powercounting2}
\end{align}
where $c = \min(a,a',b)$.
Finally, we insert a factor of the $X(\Tsc{q}/\lambda)$ function from Eq.~\eqref{eq:yterm} to remove any $Y$-term
contribution in the $\Tsc{q} < m$ region. The final $Y$-term is
\begin{align}
\label{eq:yterm2}
\YYnew{}{}
&{}\equiv \left\{ \appor{coll} \left[ \cs{}{} - \TTnew{}{} \right]
\right\} X(\Tsc{q}/\lambda)
\nonumber \\
&{}= \left\{ \appor{coll} \cs{}{} - \appor{coll} \appor{TMD}^{\rm New} \cs{}{} \right\} X(\Tsc{q}/\lambda) \, .
\end{align}
Then,
\begin{align}
\fixo{}{} & \equiv \appor{coll} \cs{}{} \label{eq:fixocal} \\
\asnew{}{} &\equiv \appor{coll} \appor{TMD}^{\rm New} \cs{}{} \label{eq:asydef2} \, .
\end{align}
So,
\begin{equation}
\YYnew{}{} = \left\{ \fixo{}{} - \asnew{}{} \right\} X(\Tsc{q}/\lambda) \, . \label{eq:finalydef2}
\end{equation}
As usual, $\appor{coll}$ is an instruction to set all renormalization scales to $\mu = \mu_Q$ and drop
powers of $m/\Tsc{q}$ or $m/Q$.
In $\appor{coll} \appor{TMD}^{\rm New} \cs{}{}$, the $\appor{TMD}^{\rm New}$ inserts a multiplication by
a factor of $\Xi(\Tsc{q}/Q,\eta)$, effectively setting $\appor{TMD}^{\rm New} \cs{}{}$ to zero for
large $\Tsc{q} \gtrsim Q$ (see, e.g., Eq.~\eqref{eq:Xiparam}). Thus, if $\Xi(\Tsc{q}/Q,\eta)$ gets dropped when
$\appor{coll}$ is applied, there is a potential to introduce large errors. Therefore, $\appor{coll}$ should \emph{not} drop the
factor of $\Xi(\Tsc{q}/Q,\eta)$ because $\appor{coll}$, by definition, must leave the $\Tsc{q} \gg m$ region unmodified. Similarly,
the use of $\bone(\Tsc{b})$ affects the small $\Tsc{b}$ limit of $\bstarsc(\bone(\Tsc{b}))$, and therefore can also have a large effect
on $\appor{TMD}^{\rm New} \cs{}{}$ at large $\Tsc{q}$. Thus,
$\appor{coll}$ should preserve the use of $\bone(\Tsc{b})$.
In contrast, $b_{\rm max} \sim 1/m$ mainly affects the small $\Tsc{q}$
region. Therefore,
we define $\appor{coll}$ to apply the
$b_{\rm max} \to \infty$ limit in Eq.~\eqref{eq:asydef2}. Examples of implementations of Eqs.~\eqref{eq:fixocal}--\eqref{eq:finalydef2} will be
given in Secs.~\ref{sec:bcfg} and~\ref{sec:demo}.
Now observe that $\Xi(\Tsc{q}/Q,\eta)$ approaches zero as $\Tsc{q}$ gets
much larger than $Q$. Then $\YYnew{}{}$ approaches the usual collinear
factorization result for $\cs{}{}$ at large $\Tsc{q}$.
Therefore, we may at last remove the $\Tsc{q} \lesssim Q$ restriction on the
left side of Eq.~\eqref{eq:basic2} and write a $W+Y$ representation of the cross section that extends over the whole range of $\Tsc{q}$:
\begin{equation}
\label{eq:basic22}
\Gamma(\Tsc{q},Q) = \TTnew{}{} + \YYnew{}{} + O\mathopen{}\left( \frac{m}{Q}\right)^{\rm c} \cs{}{} \, .
\end{equation}
We have reached our goal of constructing a $W+Y$ expression that does not require that we
specify limitations on the range of $\Tsc{q}$. What remains is to
determine the most appropriate values for $\eta$ and $C_5$.
\section{Connection with $\T{q}$-integrated cross sections and collinear factorization}
\label{sec:together}
In this section, we analyze the integral over all $\T{q}$ of the
right-hand side of Eq.\ (\ref{eq:basic22}), and show how it matches
standard collinear factorization for the integrated cross section.
We integrate Eq.~\eqref{eq:basic22} over all transverse momentum,
and then reorganize the result as follows:
\begin{align}
\int \diff[2]{\T{q}} \cs{}{} ={}& \int \diff[2]{\T{q}} \left[ \TTnew{}{} + \YYnew{}{} \right] \, \nonumber \\
={}& \int \diff[2]{\T{q}} \left[ \Xi\mathopen{}\left(\frac{\Tsc{q}}{Q},\eta\right)
\int \frac{\diff[2]{\T{b}}}{(2 \pi)^2}
e^{i\T{q}\cdot \T{b} } \tilde{W}(\bone(\Tsc{b}),Q) + \YYnew{}{} \right] \, \nonumber \\
={}& \int \diff[2]{\T{q}} \int \frac{\diff[2]{\T{b}}}{(2 \pi)^2}
e^{i\T{q}\cdot \T{b} } \tilde{W}(\bone(\Tsc{b}),Q) & \qquad \qquad {\rm Term \, 1} \nonumber \\
{}&- \int \diff[2]{\T{q}} \left( 1 - \Xi\mathopen{}\left(\frac{\Tsc{q}}{Q},\eta\right) \right) \int \frac{\diff[2]{\T{b}}}{(2 \pi)^2}
e^{i\T{q}\cdot \T{b} } \tilde{W}(\bone(\Tsc{b}),Q) & \qquad \qquad {\rm Term \, 2} \nonumber \\
{}&+ \int \diff[2]{\T{q}} \YYnew{}{} \, . & \qquad \qquad {\rm Term \, 3} \nonumber
\end{align}
Term 1 is $\TTnew{}{}$ integrated over $\T{q}$, but without the $\Xi$ factor, so it can easily
be simplified. Term 2 corrects for the omission of $\Xi$, while
term 3 is the integral of the $Y$ term.
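To see how term 1 collapses to a single point in $\Tsc{b}$, note that the $\T{q}$ integral produces a two-dimensional delta function:
\begin{equation*}
\int \diff[2]{\T{q}} \int \frac{\diff[2]{\T{b}}}{(2 \pi)^2}
e^{i\T{q}\cdot \T{b} } \tilde{W}(\bone(\Tsc{b}),Q)
= \int \diff[2]{\T{b}} \, \delta^{(2)}(\T{b}) \, \tilde{W}(\bone(\Tsc{b}),Q)
= \tilde{W}(\bone(0),Q) \, .
\end{equation*}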
Now term 1 equals $\tilde{W}(\bone(0),Q) = \tilde{W}(b_{\rm min},Q)$. Since
$b_{\rm min}=O(1/Q)$, we can replace it by the OPE form $\tilde{W}^{\rm
OPE}(b_{\rm min},Q)$, to leading power in $m/Q$, Eq.\
(\ref{eq:smallblimit}), to obtain
\begin{equation}
\mbox{Term 1}
= \tilde{W}^{\rm OPE}(b_{\rm min},Q)
+ O\mathopen{}\left( ( m/Q )^p \right) .
\end{equation}
Then, we can use Eq.\ (\ref{eq:Tcoll}) to give a factorization in
terms of collinear pdfs and ffs at a scale of order $Q$. Since in
that formula $\bstarsc(\Tsc{b})$ is replaced by $b_{\rm min}=O(1/Q)$, while
$\mubstar$ is of order $Q$, both the $\tilde{C}$ factors and the
quantities in the exponential can be expanded in powers of
$\alpha_s(Q)$ without large logarithms. We therefore have a normal
collinear expansion. The lowest-order term gives
\begin{equation}
\mbox{Term 1}
= H_{{\rm LO}, \, j j} f_{j/A}(x;\mu_c) \, d_{B/j}(z;\mu_c)
+ O(\alpha_s(Q)),
\end{equation}
with our choice of scale given in Eq.\ (\ref{eq:mucut}). This agrees
with the lowest-order term for the integrated cross section itself,
i.e., for $\int \diff[2]{\T{q}} \cs{}{}$.
Both terms 2 and 3 are dominated in their integrals by $\Tsc{q}$ of
order $Q$. They therefore have normal collinear expansions, starting
at order $\alpha_s(Q)$.
Overall, we therefore have well-behaved perturbative expansions of
collinear factorization for each term, unlike the case for the
$\Tsc{q}$ integrals of the original CSS forms for $W$ and $Y$.
We now show more explicitly that terms 2 and 3 are dominated by
$\Tsc{q}$ of order $Q$. For term 2, the factor $1-\Xi$ gives a power
suppression for $\Tsc{q} \ll Q$, while the use of $\bone(\Tsc{b})$
instead of $\Tsc{b}$ gives an exponential suppression for $\Tsc{q} \gg
Q$, as we have already seen. For term 3, the construction of
$\YYnew{}{}$ gives power suppression when $\Tsc{q} \ll Q$, with the
factor $X(\Tsc{q}/\lambda)$ in (\ref{eq:finalydef2}) ensuring that no
pathologies arise when $\Tsc{q}$ is very small (below $m$). At large
$\Tsc{q}$, beyond $Q$, the $\fixo{}{}$ term obeys the kinematic limit, while
the $\asnew{}{}$ term is exponentially suppressed, for the same reason
as for $\TTnew{}{}$.
\end{widetext}
\section{Calculating the asymptotic term in the BCFG method}
\label{sec:bcfg}
Perturbative calculations for the hard coefficient for the $Y$
term in the original CSS version can be performed by starting from
the normal collinear coefficient for the cross section as a function
of $\Tsc{q}$. Then the asymptote at small $\Tsc{q}$ is subtracted.
This asymptote is simply the leading-power expansion in $\Tsc{q}/Q$ when $\Tsc{q}$ is
much smaller than $Q$, and involves a factor of $1/\Tsc{q}^2$
times logarithms of $Q/\Tsc{q}$ at each order of perturbation theory.
The coordinate-space version of the subtraction in each order is
correspondingly a polynomial in $\ln (Q\Tsc{b})$.
In the new scheme, the coordinate space formula is unchanged, but it
is not so simple to perform a practical analytic calculation of its
Fourier transform to give $\asnew{}{}$. This is because of the
substitution of $\bone(\Tsc{b})$ for $\Tsc{b}$. We now explain how
to do this, following Ref.~\cite{Bozzi:2005wk}.
Calculations of $\asnew{}{}$ need Fourier-Bessel transforms of
terms of the form
\begin{multline}
\label{eq:logs2}
\alpha_s(\muQ)^m \ln^n \mathopen{}\left( \frac{\muQ^2 \bone(\Tsc{b})^2}{b_0^2} \right) =
\\
\alpha_s(\muQ)^m \ln^n \mathopen{}\left( \frac{\muQ^2 \Tsc{b}^2}{b_0^2} + \frac{C_2^2}{C_5^2} \right) \, ,
\end{multline}
with $m \geq 1$ and $0 \leq n \leq 2m$ and $b_0 \equiv 2
\exp(-\gamma_E)$. (The use of $b_0$ in the argument of the
logarithm is a convention that typically results in simpler
formulas.)
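The equality of the two logarithms in Eq.~\eqref{eq:logs2} implicitly uses $\bone(\Tsc{b})^2 = \Tsc{b}^2 + b_{\rm min}^2$ with $b_{\rm min} = C_2 b_0 / (C_5 \muQ)$. As a minimal numerical sanity check (the parameter values below are purely illustrative, not the paper's fit values):

```python
import math

b0 = 2.0 * math.exp(-0.5772156649015329)  # b_0 = 2 exp(-gamma_E)
C2, C5, muQ = 1.0, 1.0, 20.0              # illustrative values (muQ in GeV)
bmin = C2 * b0 / (C5 * muQ)               # b_min = C2 b0 / (C5 muQ)

def b1(bT):
    # b1(bT) regulates the small-bT region: b1 -> b_min as bT -> 0
    return math.sqrt(bT**2 + bmin**2)

for bT in (0.01, 0.1, 1.0, 5.0):
    lhs = math.log(muQ**2 * b1(bT)**2 / b0**2)
    rhs = math.log(muQ**2 * bT**2 / b0**2 + C2**2 / C5**2)
    print(bT, lhs, rhs)   # the two forms of the logarithm agree exactly
```

The agreement is exact, since $\muQ^2 b_{\rm min}^2 / b_0^2 = C_2^2/C_5^2$ term by term.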
These terms arise from the perturbative expansion $\appor{coll}
\appor{TMD}^{\rm New} \cs{}{}$. This can be considered as arising
from the collinear factorization of Eq.\ (\ref{eq:Tcoll}) with
$\mubstar$ replaced by $\muQ$, with all couplings expressed in terms
of $\alpha_s(\muQ)$, and then with a fixed-order perturbative
expansion applied to the product of the $\tilde{C}$ factors and the
exponential in Eq.\ (\ref{eq:Tcoll}).
If $Q^2 \Tsc{b}^2 \gg C_2^2/ C_5^2$, then we neglect the second term in the logarithms, and Eq.~\eqref{eq:logs2} becomes
the much more familiar form from standard CSS-like treatments
\begin{equation}
\label{eq:logsreduce}
\alpha_s(\mu_Q)^m \ln^n \mathopen{}\left( \frac{\mu_Q^2 \Tsc{b}^2}{b_0^2} \right) \, .
\end{equation}
\subsection{Standard Logarithms}
In the CSS and related treatments, with the standard $W+Y$ construction, the logarithms are of the form of Eq.~\eqref{eq:logsreduce}.
In that case, the momentum space expressions are well-known (see, e.g., Eq.~(36) of Ref.~\cite{Nadolsky:1999kb}).
After Fourier transformation, coordinate space logarithmic terms
like Eq.~\eqref{eq:logsreduce} give $\Tsc{q}$-dependence like
\begin{equation}
\frac{1}{\Tsc{q}^2} \, , \, \frac{1}{\Tsc{q}^2} \ln \mathopen{}\left( \frac{Q^2}{\Tsc{q}^2} \right) \, , \, \dots \label{eq:momlogs}
\end{equation}
where the ``$\dots$'' refers to higher power logarithms.
\subsection{Modified logarithms}
A primary motivation for our modified $W+Y$ construction is to
accommodate a non-zero $b_{\rm min}$ in
Eqs.~\eqref{eq:evolWL} and \eqref{eq:finalevolvedb},
and thus a non-zero $C_2/C_5$ in Eq.~\eqref{eq:logs2}.
Fortunately, for the case of non-zero $C_2/C_5$, analytic expressions for
the finite parts of the Fourier-Bessel transforms have been worked out in Appendix B of BCFG, Ref.~\cite{Bozzi:2005wk}.
Indeed, the case of $C_5 = C_2$ corresponds exactly to the
$\ln^m (Q^2 \Tsc{b}^2/b_0^2) \to \ln^m (Q^2 \Tsc{b}^2/b_0^2 + 1)$
prescription of PP~\cite{Parisi:1979se}, used in implementations
like~\cite{Bozzi:2005wk}.
Now, the discussion so far has been based on the expression for
$\TTnew{}{}$ in terms of TMD densities. However, to get the hard
coefficient for $\asnew{}{}$, as needed in $\YYnew{}{}$, it is also
possible to start from the hard coefficients for ordinary collinear
factorization for the cross section. Then one does the expansion at
small $\Tsc{q}$ to give $1/\Tsc{q}^2$ times logarithms. Finally, to
obtain the effect of the use of $\bone(\Tsc{b})$ instead of
$\Tsc{b}$, one makes the substitutions given below.
One can read the substitutions off from
results like Ref.~\cite[Eqs.~(B.10)-(B.13)]{Bozzi:2005wk}. For example,
\begin{align}
\frac{1}{\Tsc{q}^2}
\to{}& \frac{C_2 b_0}{\Tsc{q} \mu_Q C_5} K_1 \mathopen{}\left( \frac{C_2 \Tsc{q} b_0}{C_5 \mu_Q} \right) \,
\label{eq:logrep1} \\
\frac{1}{\Tsc{q}^2} \ln \mathopen{}\left( \frac{\mu_Q^2}{\Tsc{q}^2} \right)
\to{}&
\frac{C_2 b_0}{\Tsc{q} \mu_Q C_5}
\left[ K_1 \mathopen{}\left( \frac{C_2 \Tsc{q} b_0}{C_5 \mu_Q} \right) \ln \mathopen{}\left( \frac{C_2 \mu_Q}{C_5
\Tsc{q}} \right) \right.
\nonumber \\
& \qquad \qquad
+ \left. K_1^{(1)}\mathopen{}\left( \frac{C_2 \Tsc{q} b_0}{C_5 \mu_Q} \right) \right] \, .
\label{eq:logrep2}
\end{align}
Here, $K_\nu(x)$ is the modified Bessel function of the second kind and
\begin{equation}
K_1^{(1)}(x) \equiv \left. \frac{\partial}{\partial \nu} K_\nu(x) \right|_{\nu = 1} \, .
\end{equation}
The left and right sides of Eqs.~\eqref{eq:logrep1}--\eqref{eq:logrep2} are approximately equal for fixed $C_5$ and $\Tsc{q} \ll \mu_Q$.
See also the discussion around Ref.~\cite[Eq.~(B.25)]{Bozzi:2005wk}.
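The limiting behavior of the substitution in Eq.~\eqref{eq:logrep1} can be checked numerically. The sketch below is a pure-Python check with illustrative parameter values (not the paper's fits); it evaluates $K_1$ from its integral representation $K_1(x) = \int_0^\infty e^{-x \cosh t} \cosh t \, \diff{t}$ and confirms that the right-hand side reduces to $1/\Tsc{q}^2$ for $\Tsc{q} \ll \mu_Q$ while being exponentially suppressed for large $\Tsc{q}$:

```python
import math

def K1(x, tmax=25.0, n=20000):
    # Modified Bessel function of the second kind, K_1(x), via the
    # integral representation K_1(x) = int_0^inf exp(-x cosh t) cosh t dt,
    # evaluated with a simple trapezoid rule.
    dt = tmax / n
    s = 0.0
    for i in range(n + 1):
        t = i * dt
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.exp(-x * math.cosh(t)) * math.cosh(t)
    return s * dt

b0 = 2.0 * math.exp(-0.5772156649015329)  # b_0 = 2 exp(-gamma_E)
C2, C5, muQ = 1.0, 1.0, 20.0              # illustrative choices (muQ in GeV)
a = C2 * b0 / (C5 * muQ)

def rhs(qT):
    # Right-hand side of the 1/qT^2 replacement, Eq. (logrep1)
    return (a / qT) * K1(a * qT)

print(rhs(0.05) * 0.05**2)     # close to 1: matches 1/qT^2 at small qT
print(rhs(200.0) * 200.0**2)   # far below 1: exponential suppression at large qT
```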
Now one may apply substitutions like Eqs.~\eqref{eq:logrep1}--\eqref{eq:logrep2} to
known results for the asymptotic term, like Eq.~(36) of Ref.~\cite{Nadolsky:1999kb}, to obtain the generalized, non-zero $C_2/C_5$, asymptotic term.
Reference~\cite[Appendix B]{Bozzi:2005wk} contains results for any $n$, so the modified asymptotic term, and
thus the new $Y$-term, can be obtained to any order from previously existing expressions.
For completeness, low order expressions for the asymptotic terms are given in Appendix~\ref{sec:asy}.
\section{Demonstration}
\label{sec:demo}
To illustrate the steps above, we have performed sample calculations of the $Y$-term using
analytic approximations for the collinear pdfs and collinear ffs. For simplicity, we consider only the target up-quark $\gamma^\ast q \to q g$ channel,
and for the running $\alpha_s(\mu)$ we use the two-loop $\beta$-function solution and keep the number of flavors at $n_f = 3$ since
we are mainly interested in the transition to low $Q$. Thus we use $\Lambda_{\rm QCD} = 0.339$~GeV~\cite{Bethke:2012jm}.
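Since the exact implementation of the running coupling is not spelled out above, the sketch below assumes the standard approximate two-loop solution with the quoted inputs ($n_f = 3$, $\Lambda_{\rm QCD} = 0.339$~GeV); it should be read as illustrative rather than as the paper's code:

```python
import math

LAMBDA_QCD = 0.339  # GeV, for n_f = 3 (value quoted in the text)
NF = 3
BETA0 = 11.0 - 2.0 * NF / 3.0   # = 9
BETA1 = 102.0 - 38.0 * NF / 3.0 # = 64

def alpha_s(mu):
    # Standard approximate two-loop running coupling:
    # alpha_s = (4 pi / (beta0 L)) [1 - (beta1/beta0^2) ln(L)/L],
    # with L = ln(mu^2 / Lambda^2).
    L = math.log(mu**2 / LAMBDA_QCD**2)
    return (4.0 * math.pi / (BETA0 * L)) * (1.0 - (BETA1 / BETA0**2) * math.log(L) / L)

print(alpha_s(2.0))    # roughly 0.28 with these inputs
print(alpha_s(20.0))   # roughly 0.14
```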
To further simplify our calculations, we use analytic expressions for the collinear correlation functions, taken from Appendix A1 of Ref.~\cite{Gluck:1991ng} for the
up-quark pdf and from Eq.~(A4) of Ref.~\cite{Kniehl:2000hk} for the up-quark-to-pion fragmentation function.
Due to these simplifying assumptions, the following should be regarded as a
toy model calculation, meant to
illustrate the basic steps of a $Y$-term calculation and to demonstrate plausibility for use in more complete and detailed calculations.
\begin{figure}
\centering
\begin{tabular}{c@{\hspace*{10mm}}c}
\includegraphics[scale=0.3]{cutsA.eps} \\
(a) \\
\vspace{8mm} \\
\includegraphics[scale=0.3]{cutsB.eps}
\\
(b)
\\[5mm]
\end{tabular}
\caption{The cutoff functions in Eq.~\eqref{eq:Xparam} for low $\Tsc{q} / \lambda$ (blue dashed line) and
in Eq.~\eqref{eq:Xiparam} for large $\Tsc{q}/Q$ (brown solid line) for $Q = 20.0$~GeV (plot (a)) and $Q = 2.0$~GeV (plot (b)).
In both, $\lambda = 2/3$~GeV and $\eta = 0.34$. The region of $\Tsc{q} \gtrsim Q/4$ is determined by the $\fixo{}{}$ calculation.
For all $Q$, $\Tsc{q} \lesssim \lambda$ is considered non-perturbative. (Color online.)
}
\label{fig:cutoffs}
\end{figure}
First, one must establish parameters for our large and small $\Tsc{q}$ cutoff
functions. For $X(\Tsc{q}/\lambda)$ we use Eq.~\eqref{eq:Xparam}, and try $a_X = 4$ since this gives a rapid but reasonably gentle suppression of small $\Tsc{q}$. The choice
of $\lambda$ should be such that $X(\Tsc{q}/\lambda)$ has reached unity at values of $\Tsc{q}$ near the
perturbative-nonperturbative transition, say, $\Tsc{q} \approx 1.0$~GeV. Thus, we choose $\lambda = 2/3$~GeV.
The result is shown as the blue dashed curves in Figs.~\ref{fig:cutoffs}. To understand the plots, recall that $X(\Tsc{q}/\lambda)$ is used to
restrict to large $\Tsc{q}$ the region where $\Tsc{q}$-dependence is calculated with collinear factorization at fixed order in perturbation theory.
For $\Xi\mathopen{}\left(\Tsc{q}/Q,\eta\right)$ we use Eq.~\eqref{eq:Xiparam}. The value of $a_\Xi$ controls how rapidly
the $\Tsc{q} \sim Q$ contribution from the $W$-term gets cut off. For large $Q$, the transition can be rather
smooth since there is a broad region where $\asnew{}{}$ and $\fixo{}{}$ overlap. In our example calculation, we find that $a_\Xi = 8$ works well.
The value of $\eta$ should be chosen such that $\Xi\mathopen{}\left(\Tsc{q}/Q,\eta\right) \to 0$ when $\Tsc{q}$ is large enough that
approximations that use $\Tsc{q} \ll Q$ might be considered suspect. For small $\Tsc{q}$,
$\Xi\mathopen{}\left(\Tsc{q}/Q,\eta\right) \to 1$. We find that the transition between $\Xi\mathopen{}\left(\Tsc{q}/Q,\eta\right) \approx 0$
and $\Xi\mathopen{}\left(\Tsc{q}/Q,\eta\right) \approx 1$ occurs between about $\Tsc{q} \approx Q/4$ and $\Tsc{q} \approx Q/2$ if $\eta = 0.34$.
These results for $\Xi\mathopen{}\left(\Tsc{q}/Q,\eta\right)$ are shown as the tan curves in Figs.~\ref{fig:cutoffs}.
To understand the plots, recall that the purpose of $\Xi\mathopen{}\left(\Tsc{q}/Q,\eta\right)$ is to
suppress the $\Tsc{q} = O(Q)$ region of the $W$-term where it fails to provide even a rough approximation.
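Since Eqs.~\eqref{eq:Xparam} and \eqref{eq:Xiparam} are not reproduced in this section, the sketch below \emph{assumes} exponential-type cutoff forms, $X(\Tsc{q}/\lambda) = 1 - e^{-(\Tsc{q}/\lambda)^{a_X}}$ and $\Xi(\Tsc{q}/Q,\eta) = e^{-(\Tsc{q}/(\eta Q))^{a_\Xi}}$; with the parameter values quoted in the text they reproduce the transition behavior described above:

```python
import math

a_X, lam = 4, 2.0 / 3.0   # a_X = 4, lambda = 2/3 GeV (values from the text)
a_Xi, eta = 8, 0.34       # a_Xi = 8, eta = 0.34

def X(qT):
    # Assumed small-qT cutoff: -> 0 for qT << lambda, -> 1 for qT >> lambda
    return 1.0 - math.exp(-((qT / lam) ** a_X))

def Xi(qT, Q):
    # Assumed large-qT cutoff: -> 1 for qT << eta*Q, -> 0 for qT >> eta*Q
    return math.exp(-((qT / (eta * Q)) ** a_Xi))

Q = 2.0
print(X(1.0))         # X has essentially reached unity by qT ~ 1 GeV
print(Xi(Q / 4, Q))   # still close to 1 at qT = Q/4
print(Xi(Q / 2, Q))   # essentially 0 by qT = Q/2
```

With these assumed forms, the $\Xi$ transition indeed falls between $\Tsc{q} \approx Q/4$ and $\Tsc{q} \approx Q/2$.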
\begin{figure}
\centering
\includegraphics[scale=0.3]{asymptotplots.eps}
\caption{The absolute value of the asymptotic term calculation with $\Xi$ replaced by 1, and with the substitutions in Eqs.~\eqref{eq:logrep1}--\eqref{eq:logrep2} and various choices for $C_5$. The brown dashed curve
is the limit of the standard CSS $Y$-term approach. In all cases, $C_2 = 1$. The blue dotted and magenta dash-dotted curves correspond to $C_5 = 0.5$ and $C_5 = 2.0$ respectively. All curves
are normalized to the value of ${\rm FO}(q_{\rm T},Q)$ at $\Tsc{q} = 1$~GeV. The variation
between the curves can be viewed as a measure of the sensitivity of the $\as{}{}$ calculation to different choices of $C_5$. (Color online.) In all cases, we take $x = 0.1$ and $z = 0.5$.
}
\label{fig:asymplots}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{c@{\hspace*{10mm}}c}
\includegraphics[scale=0.3]{YplotLargeQ.eps} \\
(a) \\
\vspace{8mm} \\
\includegraphics[scale=0.31]{YplotLowQ.eps}
\\
(b)
\\[5mm]
\end{tabular}
\caption{The Y-term (blue solid curves) calculated using the method of Eq.~\eqref{eq:finalydef2} and
Sec.~\ref{sec:bcfg}. One calculation (a) is for a large scale, $Q=20.0$~GeV and one calculation (b) is for a small
scale, $Q = 2.0$~GeV. For comparison, the $\fixo{}{}$ (green dashed) and $\asnew{}{}$ (magenta dot-dashed) calculations are also shown. In all cases, $C_5 = 1.0$. The curves
are normalized to the value of ${\rm FO}(q_{\rm T},Q)$ at $\Tsc{q} = 1.0$~GeV. (Color online.) In all cases, we take $x = 0.1$ and $z = 0.5$.
}
\label{fig:yterms}
\end{figure}
Next, we examine the effect of varying $C_5$ on the calculation of the asymptotic term.
Standard expressions for the asymptotic term can be found in, for example,
Eq.~(36) of Ref.~\cite{Nadolsky:1999kb}. We use these results, along with the substitutions in
Eqs.~\eqref{eq:logrep1}--\eqref{eq:logrep2}, to plot the new asymptotic term of Eq.~\eqref{eq:asydef2} for a range of $C_5$ values.
The result is shown in Fig.~\ref{fig:asymplots}, where
we have temporarily set $\Xi\mathopen{}\left(\Tsc{q}/Q,\eta\right)$ to $1$
in order to highlight the effect of varying $C_5$. The results for $C_5 = 0.5$ and $C_5 = 2.0$ are shown.
The standard CSS result, corresponding to $C_2 / C_5 \to 0$, is also shown for comparison.
In all of our calculations, $C_2 = 1.0$.
One can observe the approach to the CSS result as $C_5$ increases.
Finally, we restore the explicit $\Xi\mathopen{}\left(\Tsc{q}/Q,\eta\right)$ in the asymptotic term and
calculate the $Y$-term according to Eq.~\eqref{eq:finalydef2} for two values of $Q$, one large and one small.
The results are shown in Figs.~\ref{fig:yterms}(a,b).
Here we use $C_5 = 1.0$ as a compromise between the various choices in Fig.~\ref{fig:asymplots} and to match with a
common choice used in calculations like those of Ref.~\cite{Bozzi:2005wk}.
For $Q = 20$~GeV (Fig.~\ref{fig:yterms}(a)), there is
a region $1.0 \, {\rm GeV} \lesssim \Tsc{q} \lesssim 6.0 \, {\rm GeV}$ where the $Y$-term is a useful non-trivial
correction. Beyond about $\Tsc{q} \approx 6.0$~GeV, the $Y$-term simply approaches the $\fixo{}{}$ calculation (where the $W$-term vanishes).
Within our $W+Y$ method, the $Y$-term remains a reasonable correction for large $\Tsc{q}/Q$ even down to $Q = 2.0$~GeV, as shown in
Fig.~\ref{fig:yterms}(b). There it forces a matching with the $\fixo{}{}$ calculation at $\Tsc{q} = O(Q)$, while it vanishes for small $\Tsc{q}$.
Note that, if the entire range of $\Tsc{q}$ up to order $Q$ is considered, then the treatment of the $Y$-term
plays an important role in describing the general shape of the $\Tsc{q}$-spectrum, particularly for the smaller $Q$ values. Indeed, for smaller $Q$, the $Y$-term
appears to dominate the tail region.
These observations highlight the importance of achieving well-constrained \emph{collinear} treatments of the large $\Tsc{q}$ region.
Most likely, calculations of the fixed order term to rather high order should be included in implementations to adequately describe the large $\Tsc{q}$ behavior. For instance, Ref.~\cite{Daleo:2004pn} finds that
order $\alpha_s^2$ fixed order calculations are needed to get
acceptable phenomenological success (see the comparison of curves in
Fig.~4 of Ref.~\cite{Daleo:2004pn}). Reference~\cite{deFlorian:2013taa} finds that threshold resummation corrections are also needed.
\section{Breakdown of Factorization in the Photoproduction Limit}
\label{sec:phot}
\begin{figure}
\centering
\begin{tabular}{c@{\hspace*{10mm}}c}
\includegraphics[scale=0.3]{varylam2.eps} \\
(a) \\
\vspace{8mm} \\
\includegraphics[scale=0.3]{varylam1.eps}
\\
(b)
\vspace{8mm} \\
\includegraphics[scale=0.3]{varylam0.eps}
\\
(c)
\\[5mm]
\end{tabular}
\caption{The $Y$-term calculated with $C_5=1.0$ and with the three values: $\lambda = 1/3$~GeV (blue dotted curves), $\lambda=1/2$~GeV (blue solid curves)
and $\lambda=2/3$~GeV (blue dot-dashed curves). The green dashed curves show the fixed order calculations. Graph (a) is for $Q=20$~GeV, graph (b) is for $Q=2$~GeV, and
graph (c) is for $Q=0.9$~GeV. (Color online. See text for further explanation.) In all cases, we take $x = 0.1$ and $z = 0.5$.
}
\label{fig:ytermslam}
\end{figure}
Of course, both TMD and collinear factorization theorems apply to the limit of a large hard scale $Q$;
part of the statement is that corrections to the factorized formulas are suppressed by powers of $m/Q$. Therefore, one expects factorization to
work well in practice for very large $Q$ and to fail completely for $Q \to 0$, with the in-between region being less clear. In the SIDIS case, the $Q \to 0$ limit corresponds
to photoproduction: $\gamma + P \to H + X$. If $Q$ is gradually decreased from some
initially very large values, one expects uncertainties related to the general onset of non-perturbative physics beyond factorization
to gradually increase.
This is, of course, a standard and well-known aspect of QCD.
The most obvious signal of the breakdown of perturbative QCD factorization is that $\alpha_s(Q)$ begins
to blow up when $Q \to O(m)$. However, it is instructive to examine the transition from the solidly large $Q$ region to the $Q \to 0$ region in more detail.
An analysis of the transition could guide applications of TMD factorization over a wide range of scales, aid in error analysis in applications, and provide intuition for how to match to truly non-perturbative physics.
For example, in the true photoproduction limit, it may be useful to switch to a physical picture more closely resembling a Regge exchange model~\cite{Kramer:1978xc}.
TMD factorization is most useful if there are distinct regions where
i) $\Tsc{q} \lesssim O(m)$, where TMD correlation functions can be used,
and ii) $\Tsc{q} \sim O(Q)$, where collinear factorization applies. One way to test whether that is the case is to vary the $\lambda$
parameter of Eq.~\eqref{eq:yterm}. This controls the suppression of the $Y$-term for $\Tsc{q} < O(m)$, so varying it
should have small or negligible effects on how one treats the perturbative $\Tsc{q}$-dependence at $\Tsc{q} \gg m$.
Figure~\ref{fig:ytermslam}(a) confirms that this is true in our sample calculation for a very large value of $Q = 20$~GeV. The graph shows the $Y$-term for several values of $\lambda$
along with the $\fixo{}{}$ calculation (dashed) for comparison. The region where $1~{\rm GeV} \lesssim \Tsc{q} \lesssim 6~{\rm GeV}$ corresponds
to a region where, roughly, $m \ll \Tsc{q} \ll Q$. Therefore, one could probably rely mostly on the $W$-term with its TMD pdfs and ffs to
give a reasonable general description for $1~{\rm GeV} \lesssim \Tsc{q} \lesssim 6~{\rm GeV}$.
However, $\Tsc{q}$ is still large enough in this region that $\Tsc{q}/Q$ power corrections to the $W$-term might not be totally
negligible, so the $Y$-term is a useful and important correction to a $W$-term calculation in the moderate $\Tsc{q}$ region. Including it can
enhance precision over a wide range of $\Tsc{q}$. In the $1~{\rm GeV} \lesssim \Tsc{q} \lesssim 6~{\rm GeV}$ region
the $Y$-term has negligible sensitivity to the exact value of $\lambda$ (so long as $\lambda = O(m)$). The $Y$-term is, therefore, unambiguous in
its region of relevance, up to choices in $C_5$ and $\eta$. Moreover, variations in $C_5$ and $\eta$ can be understood in terms of higher order corrections.
There is some residual sensitivity to $\lambda$ for $\Tsc{q} < 1.0$~GeV, but for $Q=20$~GeV, $\Tsc{q} < 1.0$~GeV definitely corresponds to
a region where $\Tsc{q}/Q \ll 1$. So, we are justified in simply ignoring the $Y$-term in the $\Tsc{q} \lesssim 1.0$~GeV region.
In Fig.~\ref{fig:ytermslam}(b), we consider the lower $Q$ value of $2.0$~GeV, where $Q$ is relatively small, but still
large enough to hope that TMD factorization is still useful. The range of $\Tsc{q}$ as a fraction of $Q$ is the same as in Fig.~\ref{fig:ytermslam}(a).
For this smaller $Q$ region, one might reasonably expect values of
$\Tsc{q} \approx 0.2$~GeV to about $\Tsc{q} \approx 0.9$~GeV to qualify as $\Tsc{q} \ll Q$. However, $\Tsc{q}$ is still large
enough here that concerns about $\Tsc{q}/Q$ power corrections from a $Y$-term are definitely warranted. As can be seen from
the graph, the $Y$-term has significant uncertainties at intermediate $\Tsc{q}$ when $Q \sim 2.0$~GeV coming from the exact choice of $\lambda$.
Nonetheless, the region of $\Tsc{q} \lesssim 0.2$~GeV corresponds to $\Tsc{q}/Q \lesssim 0.1$, so in the smallest $\Tsc{q}$ region
one may be confident in the applicability of factorization. Likewise, for $\Tsc{q} \gtrsim 0.9$~GeV, one may begin relying on
the $\fixo{}{}$ calculation. Our $W+Y$ construction interpolates between these two descriptions. Therefore, it is reasonable to expect a fit to $Q=2.0$~GeV data
to be qualitatively consistent with TMD factorization, even though
uncertainties associated with $m/Q$-suppressed violations of factorization may begin to be more discernible. Said differently,
at $Q \sim 2.0$~GeV, there may be a window of intermediate $m < \Tsc{q} < Q$ where $m/\Tsc{q}$ and $\Tsc{q}/Q$ are not both simultaneously small, yet we
obtain a reasonable overall description by calculating the $\Tsc{q} \lesssim m$ behavior and the $\Tsc{q} \sim Q$ behavior and interpolating between the two. The only uncertainty then is
in the exact nature of the interpolation. Notice, furthermore, that once a fit has been performed at $Q \sim 2.0$~GeV, any sensitivity to $\lambda$
automatically vanishes after evolution to large $Q$, as illustrated by Fig.~\ref{fig:ytermslam}(a). In other words, there is no disadvantage to
optimizing fits at rather low $Q$, since the limiting behavior at large $Q$ is unaffected.
To see the total breakdown of TMD factorization explicitly, we may push to even lower $Q$; in Fig.~\ref{fig:ytermslam}(c) we repeat
the calculation of the $Y$-term for $Q = 0.9$~GeV, again over the same range of $\Tsc{q}$ as a fraction of $Q$. Here, $\Tsc{q}/Q$ corrections
may be important already at $\Tsc{q} \sim 0.2$~GeV where transverse momentum dependence is still non-perturbative and the $Y$-term is totally
controlled by the value of $\lambda$ and the functional form of $X(\Tsc{q}/\lambda)$. A fit done in this region will likely be totally dominated by the (arbitrary, as far
as TMD factorization is concerned) choice of $X(\Tsc{q}/\lambda)$ and the value of $\lambda$. Thus, as $Q$ drops below about $1$~GeV,
TMD factorization begins to lose its predictive power and its usefulness.
It is important to emphasize that the calculations in Figs.~\ref{fig:asymplots}--\ref{fig:ytermslam} are meant to demonstrate
general features only. To gain a true understanding of the moderate $Q$ region and the transition to the $Q \to 0$ limit,
up-to-date collinear pdfs and ffs are needed for all partonic channels, higher order perturbative calculations including
flavor thresholds should be included, and a $W$-term with a specific parametrization for non-perturbative $\Tsc{q}$-dependence is
needed.
Treatment of the region of $Q$ of order a few GeV can likely be improved by extensions of factorization to higher twist~\cite{Arleo:2010yg} and/or by using new
non-perturbative correlation functions that treat kinematics exactly like fully unintegrated pdfs~\cite{Collins:2005uv,Collins:2007ph,Rogers:2008jk}.
\section{Summary}
\label{sec:con}
We conclude by summarizing the logic of our modified
$W + Y$ construction.
TMD factorization applies for small $\Tsc{q}\ll Q$ and degrades in
accuracy as $\Tsc{q}$ increases. In contrast, collinear factorization
applies when $\Tsc{q} \sim Q$ and also to the cross section integrated
over all $\T{q}$; its accuracy on the differential cross section
degrades as $\Tsc{q}$ decreases. The standard $W + Y$ prescription
was arranged to apply also for intermediate $\Tsc{q}$; in particular
it keeps full accuracy when $m \ll \Tsc{q} \ll Q$, a situation in
which both pure TMD and pure collinear factorization have degraded
accuracy\footnote{What we have
in mind here is that the errors in TMD factorization include a
power of $\Tsc{q}/Q$ as well as a power of $m/Q$, while the error
in collinear factorization is a power of $m/\Tsc{q}$. Now an
optimal blend of TMD and collinear factorization can have an error
of a particular power of $m/Q$ uniformly in $\Tsc{q}$. But,
because of the $\Tsc{q}/Q$ and $m/\Tsc{q}$ errors in each
individual kind of factorization, either one of these by itself
has much worse accuracy than the blend, when $m \ll \Tsc{q} \ll
Q$.}. However, it did not specifically address the issue of
matching to collinear factorization for the cross section integrated
over $\T{q}$.
Furthermore, for the $\Tsc{q} \gtrsim Q$ and $\Tsc{q} \lesssim m$
regions, the CSS $W+Y$ formalism as it stands does not robustly revert
to the $\fixo{}{}$ or $\TT{}{}$ terms alone. A variety of methods for
dealing with these and related issues exists in the literature (see
Sec.~\ref{sec:principles}), but they usually appear at the level of
implementations rather than in the formalism itself. We have
synthesized components from these previous approaches into a
relatively compact prescription.
With our method, the redefined $W$ term allowed us, in
Sec.~\ref{sec:together},
to construct a relationship between integrated-TMD-factorization formulas and standard collinear factorization formulas, with
errors relating the two being suppressed by powers of $b_{\rm min}/b_{\rm max} \sim 1 /Q$. Importantly, the exact definitions of the TMD pdfs and ffs are unmodified from the usual ones of factorization derivations
(e.g., Eqs.~(13.42,13.106) of Ref.~\cite{Collins:2011qcdbook}).
We preserve the transverse-coordinate-space version of the $W$ term,
and only modify the way in which it is used. Thus the derivation
of TMD factorization is preserved, and we have only changed the way
in which the ingredients are assembled into a formula for the cross
section.
Finally, the standard CSS formalism, with its
more standard $W +Y$ construction, is automatically
recovered in the limit of very large $Q$. Having organized a systematic treatment of the matching between $W$ and $Y$ terms, we may begin
to incorporate physically motivated considerations (e.g., similar to the momentum rescaling of Ref.~\cite{Guzzi:2013aja}) into the construction of
specific functional forms for $\Xi$ and the choice of $C_5$.
This paper has dealt only with unpolarized cross sections. However, we expect analogous reasoning to apply
when polarization is taken into account. In such cases, the connection between large and small $\Tsc{q}$
is more subtle because power counting can differ at large and small $\Tsc{q}$ depending on the specific polarization
observable under consideration. This is discussed extensively in Ref.~\cite{Bacchetta:2008xw}. To implement steps analogous to those we have presented here, one most likely needs to
consider $\Tsc{q}$-weighted integrals of cross sections or weighting by Bessel functions~\cite{Boer:2011xd}. Such studies may
prove to be especially interesting in how they relate correlation functions of different twist.
Many planned applications of TMD factorization depend crucially on the ability to control matching between
perturbatively large and non-perturbatively small $\Tsc{q}/Q$. This is especially the case for phenomenological studies where the
shape of the distribution and the possible presence of a non-perturbative $\Tsc{q}$-tail is a central question, such as in
studies of flavor-dependence in TMDs~\cite{Signori:2013mda}, or a potential difference between sea
and valence quark intrinsic transverse distributions~\cite{Schweitzer:2012hh}. (See also Fig.~1 of Ref.~\cite{Aidala:2014hva}.)
With the method of this paper, it is possible in principle to interface
the full $W+Y$ TMD construction with generalized parton model approaches to phenomenology like Refs.~\cite{Signori:2013mda,Anselmino:2013lza,Melis:2014pna} -- a step that
we leave for future work.
We plan to next apply our enhanced $W + Y$ construction in phenomenological studies.
In particular, given the rather low $Q$-values typical of SIDIS experiments, we expect analyses of unpolarized SIDIS
to benefit from the greater control over the transition from small $\Tsc{q}$ to $\Tsc{q} \sim Q$.
\begin{acknowledgments}
D.~B.~Clark provided numerical
help on calculations performed in an earlier version of this paper. We thank D.~Boer and M.~Diehl for many useful
comments and discussions regarding the text. We also thank C.~Aidala, C.~Courtoy, O.~Garcia and P.~Nadolsky for general conversations regarding factorization.
This work was supported by DOE contracts No.\ DE-AC05-06OR23177
(A.P., T.R., N.S., B.W.),
under which Jefferson Science Associates, LLC operates Jefferson
Lab,
No.\ DE-FG02-07ER41460 (L.G.), and No.\ DE-SC0008745 (J.C.),
and by the National Science Foundation under Contract No. PHY-1623454 (A.P.).
\end{acknowledgments}
\section{Introduction}
Providing an empirical basis for gas giant planet formation models and theories requires the detection of young objects in their natal environment, i.e., when they are still embedded in the gas and dust-rich circumstellar disk surrounding their host star. The primary scientific goals of studying planet formation are as follows: to understand where gas giant planet formation takes place, for example, at what separations from the host star and under which physical and chemical conditions in the disk; how formation occurs, i.e., via the classical core accretion process \citep{Pollack1996} or a modified version of that process \citep[e.g., pebble accretion,][]{Lambrechts2012} or direct gravitational collapse \citep{Boss1997}; and the properties of the suspected circumplanetary disks (CPDs).\\
While in recent years high-contrast, high spatial resolution imaging observations of circumstellar disks have revealed an impressive diversity in circumstellar disk structure and morphology, the number of directly detected planet candidates embedded in those disks is still small \citep[LkCa15\,b, HD100546\,b, HD169142\,b, MWC\,758\,b, PDS\,70\,b;][]{KrausIreland2012,Quanz2013_discovery,reggiani2014,biller2014,Reggiani2017, Keppler2018}. To identify these objects, high-contrast exoplanet imaging can be used. These observations are typically performed at near- to mid-infrared wavelengths using an adaptive optics-assisted high-resolution camera. In addition to the intrinsic luminosity of the still contracting young gas giant planet, the surrounding CPD, if treated as a classical accretion disk, contributes significantly to fluxes beyond 3$\,\mu$m wavelength \citep{zhu2015, Eisner2015}, potentially easing the detection of young forming gas giants at these wavelengths. While the majority of the forming planet candidates mentioned above were detected in this way, it has also been realized that the signature from a circumstellar disk itself can sometimes mimic that of a point source after PSF subtraction and image post-processing \citep[e.g.,][]{Follette2017, ligi2017}. As a consequence, it is possible that some of the aforementioned candidates are false positives.
Another approach is to look for direct signatures of the suspected CPDs, such as their dust continuum emission or their kinematic imprint in high-resolution molecular line data \citep{Perez2015,Szulagyi2018}. In one case, spectro-astrometry using CO line emission was used to constrain the existence and orbit of a young planet candidate \citep{Brittain2013, Brittain2014}. Moreover, \cite{Pinte2018} and \cite{Teague2018} suggested the presence of embedded planets orbiting HD163296 from local deviations from Keplerian rotation in the protoplanetary disk.
A further indirect way to infer the existence of a young, forming planet is to search for localized differences in the gas chemistry of the circumstellar disk, as the planet provides extra energy to the chemical network in its vicinity \citep{Cleeves2015}.
Finally, it is possible to look for accretion signatures from gas falling onto the planet and its CPD. Accretion shocks are able to excite or ionize the hydrogen atoms, which then radiate recombination emission lines, such as H$\alpha$, when returning to lower energy states \citep[e.g.,][]{calvetgullbring1998, Szulagyi2017, Marleau2017}.
High-contrast imaging using H$\alpha$ filters was already successfully applied in three cases.
Using angular spectral differential imaging (ASDI) with the Magellan Adaptive Optics System (MagAO), \cite{close2014} detected H$\alpha$ excess emission from the M-star companion orbiting the Herbig Ae/Be star HD142527, and \cite{sallum2015} also used MagAO to identify at least one accreting companion candidate located in the gap of the transition disk around LkCa15. The accretion signature was found at a position very similar to the predicted orbital position of one of the faint point sources detected by \cite{KrausIreland2012}, attributed to a forming planetary system. Most recently, \cite{Wagner2018} have claimed the detection of H$\alpha$ emission from the young planet PDS70\,b using MagAO, albeit with comparatively low statistical significance (3.9$\sigma$).
In this paper we present a set of H$\alpha$ high-contrast imaging data for six young stars, aiming at the detection of potential accretion signatures from the (suspected) young planets embedded in the circumstellar disks of the stars.
The paper is structured as follows: In Section \ref{sec:sample} we discuss the observations and target stars. We explain the data reduction in Section \ref{sec:data_reduction} and present our analyses in Section \ref{sec:analysis}. In Section \ref{sec:discussion} we discuss our results in a broader context and conclude in Section \ref{sec:conclusions}.
\section{Observations and target sample}
\label{sec:sample}
\subsection{Observations}
\begin{table*}[h!]
\caption{\label{tab_obs} Summary of observations.}
\centering
\begin{tabular}{llllllllll}
\hline\hline\noalign{\smallskip}
Object & H$\alpha$ & Obs. date & Prog. ID & DIT\tablefootmark{b} & \# of & Field & Mean & $\tau_0$\tablefootmark{c} & Mean \\
& Filter\tablefootmark{a} &[dd.mm.yyyy] & & [s] & DITs & rotation [$^\circ$] & airmass & [ms] & seeing\tablefootmark{d} [as] \\\hline
\noalign{\smallskip}
\multirow{2}{*}{HD142527} & B\_Ha & 31.03.2016 & 096.C-0248(B) & 30 & 70 & 47.8 & 1.06 &$2.7\pm0.2$ & $0.71\pm0.06$ \\ \cline{2-10}
\noalign{\smallskip}
& N\_Ha & 31.03.2016 & 096.C-0248(B) & 30 & 70 & 48.6 & 1.05 &$2.7\pm0.3$ & $0.69\pm0.07$ \\\hline
\noalign{\smallskip}
HD135344 B &N\_Ha & 31.03.2016 & 096.C-0248(B) & 50 & 107 & 71.7 & 1.04 &$4.4\pm1.2$ & $0.47\pm0.17$ \\ \hline
\noalign{\smallskip}
TW Hya &B\_Ha & 23.03.2016 & 096.C-0267(B) & 80 & 131 & 134.1 & 1.16 &$1.4\pm0.4$ & $1.33\pm0.53$ \\ \hline
\noalign{\smallskip}
HD100546 &B\_Ha & 23.04.2015 & 095.C-0273(A) & 10 & 1104\tablefootmark{e} & 68.3\tablefootmark{e} & 1.46 &$1.7\pm0.2$ & $0.98\pm0.28$ \\ \hline
\noalign{\smallskip}
HD169142 & B\_Ha & 09.05.2015 & 095.C-0298(A) & 50 & 90 & 123.2 & 1.01 &$1.4\pm0.1$ & $1.24\pm0.04$ \\ \hline
\noalign{\smallskip}
MWC\,758 &B\_Ha & 30.12.2015 & 096.C-0267(A) & 60 & 194 & 54.8 & 1.63 &$3.2\pm0.8$ & $1.39\pm0.24$ \\
\noalign{\smallskip}\hline\hline
\end{tabular}
\tablefoot{\tablefoottext{a}{Each dataset consists of data obtained in one of the two H$\alpha$ filters and simultaneous data taken with the continuum filter inserted in the other ZIMPOL camera.}\tablefoottext{b}{DIT = Detector integration time, i.e., exposure time per image frame.}\tablefoottext{c}{Coherence time.}\tablefoottext{d}{Mean DIMM seeing measured during the observation.}\tablefoottext{e}{As we explain in Section~\ref{sec:Analysis_HD100546} and Appendix~\ref{App_3}, for this dataset a frame selection was applied, which reduced the number of frames to 366 and the field rotation to $20.7^\circ$.}
}
\end{table*}
The data were all obtained with the ZIMPOL sub-instrument of the adaptive optics (AO) assisted high-contrast imager SPHERE \citep{Beuzit2008, Petit2008, Fusco2016}, which is installed at the Very Large Telescope (VLT) of the European Southern Observatory (ESO) on Paranal in Chile. A detailed description of ZIMPOL can be found in \cite{Schmid2018}. Some of the data were collected within the context of the Guaranteed Time Observations (GTO) program of the SPHERE consortium; others were obtained in other programs and downloaded from the ESO data archive (program IDs are listed in Table \ref{tab_obs}). We focused on objects that are known from other observations to host forming planet candidates that still need to be confirmed (HD100546, HD169142, and MWC\,758)\footnote{In the discussion (Section \ref{sec:discussion}) we also include the analysis of a dataset of LkCa15 (PI: Huelamo) to set our results in context, but the data were poor in quality and hence not included in the main part of the paper.}, objects known to host accreting stellar companions (HD142527), and objects that have well-studied circumstellar disks with spatially resolved substructures (gaps, cavities, or spiral arms), possibly suggesting planet formation activities (HD135344 B and TW Hya).
All data were taken in the noncoronagraphic imaging mode of ZIMPOL
using an H$\alpha$ filter in one camera arm and a nearby continuum filter simultaneously in the other
arm (Cont\_Ha; $\lambda_c=644.9$ nm, $\Delta\lambda=3.83$ nm). As the data were obtained in different programs, we sometimes used the narrow H$\alpha$ filter (N\_Ha; $\lambda_c=656.53$ nm, $\Delta\lambda=0.75$ nm) and sometimes the broad H$\alpha$ filter (B\_Ha; $\lambda_c=655.6$ nm, $\Delta\lambda=5.35$ nm). A more complete description of these filters can be found in \cite{schmid2017}. To establish which filter allows for the highest contrast performance, we used HD142527 and its accreting companion \citep{close2014} as a test target and switched between the N\_Ha and the B\_Ha filter every ten frames within the same observing sequence. All datasets were observed in pupil-stabilized mode to enable angular differential imaging \citep[ADI;][]{marois2006}. The fundamental properties of the target stars are given in Table~\ref{tab_stars}, while a summary of the datasets is given in Table~\ref{tab_obs}. \\
We note that because of the intrinsic properties of the polarization beam splitter used by ZIMPOL, polarized light might preferentially end up in one of the two arms, causing a systematic uncertainty in the relative photometry between the continuum and H$\alpha$ frames. The inclined mirrors in the telescope and the instrument introduce diattenuation (e.g., higher reflectivity for $I_\perp$ than for $I_\parallel$) and polarization crosstalk, so that the transmissions in imaging mode to the $I_\perp$ and $I_\parallel$ arm depend on the telescope pointing direction. This effect is at the level of a few percent (about $\pm 5~\%$), but unfortunately the dependence on the instrument configuration has not been determined yet.
We discuss its potential impact on our analyses in Appendix \ref{App:Beamsplitter}, even though we did not take this effect into account since it is small and could not be precisely quantified.
\subsection{Target sample}
\textit{HD142527}\\\\
HD142527 is known to have a prominent circumstellar disk \citep[e.g.,][]{fukagawa2006, Canovas2013, Avenhaus2014_HD142527} and a close-in M star companion \citep[HD142527 B;][]{biller2012,rodigas2014,lacour2016,Christiaens2018, Claudi2018} that shows signatures of ongoing accretion in H$\alpha$ emission \citep{close2014}. This companion orbits in a large, optically thin cavity within the circumstellar disk stretching from $\sim0\farcs07$ to $\sim1\farcs0$ \citep[e.g.,][]{Fukagawa2013, Avenhaus2014_HD142527}, and it is likely that this companion is at least partially responsible for clearing the gap by accretion of disk material \citep{biller2012, Price2018}. \cite{Avenhaus2017} obtained polarimetric differential imaging data with SPHERE/ZIMPOL in the very broad band \cite[VBB, as defined in ][]{Schmid2018} optical filter, revealing new substructures, and resolving the innermost regions of the disk (down to $0\farcs025$). In addition, extended polarized emission was detected at the position of HD142527 B, possibly due to dust in a circumsecondary disk. \cite{Christiaens2018} extracted a medium-resolution spectrum of the companion and suggested a mass of $0.34\pm0.06\,M_\odot$. This value is a factor of $\sim3$ larger than that estimated by spectral energy distribution (SED) fitting \citep[][$M=0.13\pm0.03\,M_\odot$]{lacour2016}. Thanks to the accreting close-in companion, this system is the ideal target to optimize the H$\alpha$ observing strategy with SPHERE/ZIMPOL and also the data reduction. \\\\
\textit{HD135344 B}\\\\
HD135344 B (SAO206462) is surrounded by a transition disk that was spatially resolved at various wavelengths. Continuum (sub-)millimeter images presented by \cite{Andrews2011} and \cite{vanderMarel2016} revealed a disk cavity with an outer radius of $0\farcs32$. In polarimetric differential imaging (PDI) observations in the near-infrared (NIR), the outer radius of the cavity appears to be at $0\farcs18$, and the difference in apparent size was interpreted as a potential indication for a companion orbiting in the cavity \citep{garufi2013}. Data obtained in PDI mode also revealed two prominent, symmetric spiral arms \citep{muto2012,garufi2013, stolker2016}.
\cite{Vicente2011} and \cite{maire2017} searched for planets in the system using NIR NACO and SPHERE high-contrast imaging data, but did not find any. Using hot start evolutionary models these authors derived upper limits for the mass of potential giant planets around HD135344 B (3 M$_J$ beyond $0\farcs7$).\\
\\
\textit{TW Hya}\\\\
TW Hya is the nearest T Tauri star to Earth. Its almost face-on transitional disk \citep[$i\sim7\pm1^\circ$;][]{qi2004} shows multiple rings and gaps in both dust continuum and scattered light data. Hubble Space Telescope (HST) scattered light images from \cite{Debes2013} first allowed the identification of a gap at $\sim1\farcs48$. Later, \cite{Akiyama2015} identified a gap at a separation of $\sim0\farcs41$ in $H$-band polarized images. Using the Atacama Large Millimeter Array (ALMA), \cite{Andrews2016} identified gaps at $0\farcs41$, $0\farcs68$, and $0\farcs80$ from the radial profile of the 870 $\mu$m continuum emission.
Finally, \cite{vanboekel2017} obtained SPHERE images in PDI and ADI modes at optical and NIR wavelengths,
and identified three gaps at $0\farcs11$, $0\farcs39,$ and $1\farcs57$ from the central star. A clear gap was also identified by \cite{Rapson2015} at a separation of $0\farcs43$ in Gemini/GPI polarimetric images and the largest gap at $r\simeq1\farcs52$ has also been observed in CO emission with ALMA \citep{Huang2018}.
\\
\begin{table*}[t!]
\caption{\label{tab_stars} Target sample.}
\centering
\begin{tabular}{lllllll}
\hline\hline\noalign{\smallskip}
Object & RA & DEC & Spec. type & $m_R$ [mag] & Distance [pc] & Age [Myr]\\\hline
\noalign{\smallskip}
HD142527 & 15$^h$56$^m$41.89$^s$ & -42$^\circ$19$'$23\farcs27 & F6III & 7.91 &$157.3\pm1.2$ & $8.1^{+1.9}_{-1.6}$ \\
\noalign{\smallskip}
HD135344 B & 15$^h$15$^m$48.44$^s$ & -37$^\circ$09$'$16\farcs03 & F8V & 8.45 & $135.9\pm1.4$& $9\pm2$ \\
\noalign{\smallskip}
TW Hya & 11$^h$01$^m$51.90$^s$ & -34$^\circ$42$'$17\farcs03 & K6Ve & $10.43\pm0.1$ & $60.1\pm0.1$ & $\sim10$ \\
\noalign{\smallskip}
HD100546 & 11$^h$33$^m$25.44$^s$ & -70$^\circ$11$'$41\farcs24& B9Vne & 8.78 & $110.0\pm0.6$ & $7\pm1.5$ \\
\noalign{\smallskip}
HD169142 & 18$^h$24$^m$29.78$^s$ & -29$^\circ$46$'$49\farcs32 & B9V & 8.0 & $114.0\pm0.8$ & $\sim6$ \\
\noalign{\smallskip}
MWC\,758 & 05$^h$30$^m$27.53$^s$ & -25$^\circ$19$'$57\farcs08 & A8Ve & $9.20\pm0.01$ & $160.3\pm1.7$ & $3.5\pm2$ \\
\noalign{\smallskip}
\noalign{\smallskip}\hline\hline
\end{tabular}
\tablefoot{Coordinates and spectral types are taken from SIMBAD, R-magnitudes are taken from the NOMAD catalog \citep{Zacharias2004} for HD142527 and HD169142, from the APASS catalog \citep{Henden2016} for HD135344\,B, and from the UCAC4 catalog \citep{Zacharias2012} for the other targets. Distances are from GAIA data release 2 \citep{Gaia2018}. The ages -- from top to bottom -- are taken from \cite{Fairlamb2015}, \cite{mueller2011}, \cite{Weinberger2013}, \cite{Fairlamb2015}, \cite{Grady2007}, and \cite{Meeus2012}.}\\
\end{table*}
\textit{HD100546}\\\\
The disk around HD100546 was also spatially resolved in scattered light and dust continuum emission in different bands \citep[e.g.,][]{Augereau2001,Quanz2011,Avenhaus2014,Walsh2014, Pineda2014}. The disk appears to be almost, but not completely, devoid of dusty material at radii between a few and 13 AU. This gap could be due to the interaction with a young forming planet, and \cite{Brittain2013,Brittain2014} suggested the presence of a companion orbiting the star at $0\farcs13$, based on high-resolution NIR spectro-astrometry of CO emission lines. Another protoplanet candidate was claimed by \cite{Quanz2013_discovery} using $L'$ band high-contrast imaging data. The object was found at $0\farcs48\pm0\farcs04$ from the central star, at a position angle (PA) of $(8.9\,\pm0.9)^\circ$, with an apparent magnitude of $L'$=$13.2\pm0.4$ mag. \cite{quanz2015} reobserved HD100546 in different bands ($L'$, $M'$, $K_s$) and detected the object again in the first two filters.
Based on the colors and observed morphology these authors suggested that the data are best explained by a forming planet surrounded by a circumplanetary disk. Later, \cite{Currie2015} recovered HD100546 b from $H$-band integral field spectroscopy (IFS) with the Gemini Planet Imager \citep[GPI;][]{Macintosh2006} and identified a second putative point source c closer to the star ($r_\mathrm{proj}\sim0\farcs14$) potentially related to the candidate identified by \citet{Brittain2013,Brittain2014}. More recently, \cite{Rameau2017} demonstrated that the emission related to HD100546 b appears to be stationary and that its spectrum is inconsistent with any type of low-temperature object. Furthermore, they obtained H$\alpha$ images with the MagAO instrument to search for accretion signatures, but no point source was detected at either the b or c position, and they placed upper limits on the accretion luminosity ($L_\mathrm{acc}<1.7\times10^{-4}\;L_\odot$). The same data were analyzed by \cite{Follette2017}, together with other H$\alpha$ images (MagAO), $H$-band spectra (GPI), and $Y$-band polarimetric images (GPI). Their data exclude H$\alpha$ emission from HD100546\,c with $L_{H\alpha}>1.57\times10^{-4} L_\odot$.\\
\\
\textit{HD169142}\\\\
HD169142 is surrounded by a nearly face-on pre-transitional disk.
Using PDI images, \cite{quanz2013} found an unresolved disk rim at $0\farcs17$ and an annular gap between $0\farcs28$ and $0\farcs49$. These results were confirmed by \cite{osorio2014}, who investigated the thermal emission ($\lambda=7$ mm) of large dust grains in the HD169142 disk, identifying two annular cavities ($\sim0\farcs16-0\farcs21$ and $\sim0\farcs28-0\farcs48$). The latter authors also identified a point source candidate in the middle of the outer cavity at a distance of $0\farcs34$ and PA $\sim175^\circ$.
\cite{biller2014} and \cite{reggiani2014} observed a point-like feature in NaCo $L'$ data at the outer edge of the inner cavity (separation = $0\farcs11-0\farcs16$ and PA=$0^\circ-7.4^\circ$).
Observations in other bands ($H$, $K_S$, $z_p$) with the Magellan Clay Telescope (MagAO/MCT) and with GPI in the $J$ band failed to confirm the detection \citep{biller2014,reggiani2014}, but revealed another candidate point source albeit with low signal-to-noise ratio \citep[S/N;][]{biller2014}.
In a recent paper, \cite{ligi2017} attributed the latter \cite{biller2014} detection to a bright spot in the ring of scattered light from the disk rim, potentially following Keplerian motion.
\cite{Pohl2017} and \cite{Bertrang2018} compared different disk and dust evolutionary models to SPHERE $J$-band and VBB PDI observations. Both works tried to reproduce and explain the complex morphological structures observed in the disk and concluded that planet-disk interaction is occurring in the system, even though no protoplanet has been clearly confirmed to date.\\
\\
\textit{MWC\,758}\\\\
MWC\,758 is surrounded by a pre-transitional disk \citep[e.g.,][]{Grady2013}. \cite{Andrews2011} found an inner cavity of $\sim$55 AU based on dust continuum observations, which was, however, not observed in scattered light \citep{Grady2013, Benisty2015}. Nevertheless, PDI and direct imaging from the latter studies revealed two large spiral arms. A third spiral arm has been suggested based on VLT/NaCo $L'$ data by \cite{Reggiani2017}, together with the claim of the detection of a point-like source embedded in the disk at $(111\pm4)$ mas. This object was observed in two separate datasets from 2015 and 2016 at comparable separations from the star, but different PAs, which was possibly due to orbital motion. The contrast of this object relative to the central star in the $L'$ band is $\sim7$ mag, which, according to the BT-Settl atmospheric models \citep{Allard2012}, corresponds to the photospheric emission of a 41-64 $M_J$ object for the age of the star.
More recently, ALMA observations from \cite{Boehler2018} traced the large dust continuum emission from the disk. Two rings at $0\farcs37$ and $0\farcs53$, probably related to two clumps with a large surface density of millimeter dust, were discovered, along with a large cavity of $\sim0\farcs26$ in radius. Finally, \cite{Huelamo2018} observed MWC 758 in H$\alpha$ with SPHERE/ZIMPOL, reaching an upper limit for the line luminosity of $L_\mathrm{H_\alpha}\lesssim5\times10^{-5}L_\odot$ (corresponding to a contrast of 7.6 mag) at the separation of the protoplanet candidate. No other point-like features were detected.\\
\section{Data reduction}
\label{sec:data_reduction}
The basic data reduction steps were carried out with the ZIMPOL pipeline developed and maintained at ETH Z\"urich. The pipeline remapped the original 7.2 mas $\times$ 3.6 mas pixels onto a square grid with an effective pixel scale of 3.6 mas $\times$ 3.6 mas (1024 $\times$ 1024 pixels). Afterward, the bias was subtracted and a flat-field correction was applied. We then aligned the individual images by fitting a Moffat profile to the stellar point spread functions (PSFs) and shifting the images using bilinear interpolation. The pipeline also calculated the parallactic angle for each individual frame and added the information to the image header. Finally, we split up the image stacks into individual frames and grouped them together according to their filter, resulting in two image stacks for each object: one for an H$\alpha$ filter and one for the continuum filter\footnote{For HD142527 we have four image stacks as we used both the N\_Ha and the B\_Ha filter during the observing sequence.}. In general, all images were included in the analysis unless specifically mentioned otherwise in the individual subsections. The images in these stacks were cropped to a size of $1\farcs08\times1\farcs08$ centered on the star. This allowed us to focus our PSF subtraction efforts on the contrast-dominated regime of the images. The removal of the stellar PSF was performed in three different ways: ADI, spectral differential imaging (SDI), and ASDI (a two-step combination of SDI and ADI).
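The centering step can be sketched in a few lines of Python. The following is only an illustration, not the ETH Z\"urich pipeline code: a circularly symmetric Moffat profile and {\tt scipy} are assumed, and the function names are ours. A 2D Moffat is fit to the stellar PSF and the frame is shifted with bilinear interpolation so that the fitted peak lands on the image center.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.ndimage import shift

def moffat2d(coords, amp, x0, y0, alpha, beta):
    """Circularly symmetric 2D Moffat profile, flattened for curve_fit."""
    x, y = coords
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return (amp * (1.0 + r2 / alpha ** 2) ** (-beta)).ravel()

def center_frame(frame):
    """Fit a Moffat profile to the stellar PSF and shift the frame so the
    fitted peak lands on the image center (bilinear interpolation, order=1)."""
    ny, nx = frame.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    p0 = (frame.max(), nx / 2.0, ny / 2.0, 3.0, 2.5)  # rough starting guess
    popt, _ = curve_fit(moffat2d, (x, y), frame.ravel(), p0=p0)
    _, xc, yc, _, _ = popt
    dy, dx = (ny - 1) / 2.0 - yc, (nx - 1) / 2.0 - xc
    return shift(frame, (dy, dx), order=1), (xc, yc)
```

In practice the fit would be restricted to a window around the star and repeated for each frame of the sequence before stacking.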
To perform ADI, we fed the stacks into our {\tt PynPoint} pipeline \citep{amaraquanz2012,amara2015, Stolker2018}. The {\tt PynPoint} package uses principal component analysis (PCA) to model and subtract the stellar PSF in all individual images before they are derotated to a common field orientation and mean-combined. To investigate the impact on the final contrast performance for all
objects, we varied the number of principal components (PCs) used to fit the stellar PSF and the size of the inner mask that is used to cover the central core of the stellar PSF prior to the PCA. No frame selection based on the field rotation was applied, meaning that all the images were considered for the analysis, regardless of the difference in parallactic angle.
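The core of such a PCA-based ADI reduction can be illustrated with a short sketch. This is a simplified stand-in for what {\tt PynPoint} does, not its actual implementation; a 3D image cube and per-frame parallactic angles are assumed, and the derotation sign convention is instrument dependent.

```python
import numpy as np
from scipy.ndimage import rotate

def pca_adi(cube, parangs, n_pc):
    """Minimal PCA-based ADI: model the stellar PSF of every frame as a
    projection onto the first n_pc principal components of the mean-subtracted
    stack, subtract the model, derotate each residual frame to a common field
    orientation, and mean-combine."""
    nframes, ny, nx = cube.shape
    data = cube.reshape(nframes, -1)
    centered = data - data.mean(axis=0)
    # rows of vt are the principal components of the stack
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_pc]
    model = centered @ basis.T @ basis
    residuals = (centered - model).reshape(nframes, ny, nx)
    # derotation sign depends on the instrument convention
    derot = [rotate(r, -a, reshape=False, order=1)
             for r, a in zip(residuals, parangs)]
    return np.mean(derot, axis=0)
```

In this sketch the same frames are used to build the principal component basis and to be fitted; the full pipeline additionally handles the inner mask, the varying number of PCs, and more careful interpolation.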
The SDI approach aims at reducing the stellar PSF by exploiting the fact that all features arising from the parent star (such as the Airy pattern and speckles) scale spatially with wavelength $\lambda$, while the position of a physical object on the detector is independent of $\lambda$. The underlying assumption is that, given that $\lambda_c$ is similar in all filters, the continuum flux density is the same at all wavelengths. To this end, modified versions of the continuum images were created. First, they were multiplied with the ratio of the effective filter widths to normalize the throughput of the continuum filter relative to the H$\alpha$ filter\footnote{This approach ignores any potential color effects between the filters, which, given their narrow bandwidths, should, however, not cause any significant systematic offsets.}. Then, they were spatially stretched in the radial direction, going out from the image center, using spline interpolation, by the ratio of the central wavelengths of the filters to align the speckle patterns. Because the SED shapes of our objects may differ from that of the standard calibration star used in \cite{schmid2017} to determine the central wavelengths $\lambda_c$ of the filters, it is possible that $\lambda_c$ is slightly shifted for each object. This effect, however, is expected to alter the upscaling factor by at most 0.4\% for B\_Ha (assuming the unrealistic case in which $\lambda_c$ is at the edge of the filter), which is the broadest filter we used. This is negligible at very small separations from the star, where speckles dominate the noise. Values for the filter central wavelengths and equivalent widths can be found in Table 5 of \cite{schmid2017}. The modified continuum images were then subtracted from the images taken simultaneously with the H$\alpha$ filter, leaving only the H$\alpha$ line flux emitted from the primary star and potential companions. 
As a final step, the images resulting from the subtraction are derotated to a common field orientation and mean-combined. It is worth noting that if, as a result of the stretching, a potential point-source emitting a significant amount of continuum flux moves by more than $\lambda/D$, the signal strength in the H$\alpha$ image is only marginally changed in the SDI subtraction step, and only the speckle noise is reduced. If this is not the case, this subtraction step yields a significant reduction of the source signal in addition to the reduction of the
speckle noise. For SPHERE/ZIMPOL H$\alpha$ imaging, a conservative SDI subtraction without substantial signal removal is achieved for angular separations $\gtrsim0\farcs90$ ($\sim250$ pixels). Nevertheless, this technique is expected to enhance the S/N of accreting planetary companions even at smaller separations, since young planets are not expected to emit a considerable amount of optical radiation in the continuum. In this case, the absence of a continuum signal guarantees that the image subtraction leaves the H$\alpha$ signal of the companion unchanged and only reduces the speckle residuals. Therefore, for this science case, there is no penalty for using SDI.\\
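The SDI subtraction step can be sketched as follows. This is a minimal illustration with hypothetical function and parameter names: the continuum frame is normalized by the ratio of the effective filter widths and radially stretched about the image center by the ratio of the central wavelengths, as described in the text, before being subtracted from the simultaneous H$\alpha$ frame.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sdi_subtract(halpha, cont, lam_ha, lam_cont, dlam_ha, dlam_cont):
    """Minimal SDI step: normalize the continuum frame to the H-alpha filter
    throughput, radially stretch it by the central-wavelength ratio so the
    speckle patterns align, and subtract it from the H-alpha frame."""
    scale = lam_ha / lam_cont              # radial stretch factor (>1 here)
    cont = cont * (dlam_ha / dlam_cont)    # effective filter-width normalization
    ny, nx = cont.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    # sample the continuum at radially shrunk coordinates -> stretched image
    coords = np.array([cy + (y - cy) / scale, cx + (x - cx) / scale])
    stretched = map_coordinates(cont, coords, order=3, mode="nearest")
    return halpha - stretched
```

A speckle at radius $r$ in the continuum frame is moved to $r\,\lambda_{\mathrm{H}\alpha}/\lambda_{\mathrm{cont}}$ by the stretch, matching its position in the H$\alpha$ frame, so the subtraction removes it while leaving any line-only point source untouched.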
To perform ASDI, the SDI (H$\alpha$-Cnt\_H$\alpha$) subtracted images are fed into the PCA pipeline to subtract any remaining residuals. During the analysis we varied the same parameters as described for simple ADI.
The HD142527 dataset was used to compare the different sensitivities achieved when applying ADI, SDI, and ASDI. The results are discussed in Section \ref{sec:setup_performance} and Appendix \ref{App_1}.\\
With ZIMPOL in imaging mode, there is a constant offset of $(135.99\pm0.11)^\circ$ between the parallactic angle and the PA of the camera in sky coordinates \citep{Maire2016}. A preliminary astrometric calibration showed, however, that this reference frame has to be rotated by $(-2.0\pm 0.5)^\circ$ to align images with north pointing to the top (Ginski et al., in preparation). This means that overall, for every PSF subtraction technique, the final images have to be rotated by $(134\pm0.5)^\circ$ in the counterclockwise direction.
\begin{figure*}[t!]
\centering
\includegraphics[width=\hsize]{image_5_filters.pdf}
\caption{Final ADI and ASDI reduced images of HD142527. \textit{Top row:} B\_Ha, Cnt\_Ha, and N\_Ha filter images resulting in the lowest FPFs ($1.5\times10^{-11}$, $2.2\times10^{-9}$, and $<10^{-17}$, corresponding to S/Ns of 13.1, 9.8, and 26.6, respectively). \textit{Bottom row:} final images after ASDI reduction for B\_Ha-Cnt\_Ha and N\_Ha-Cnt\_Ha frames ($4.4\times10^{-16}$ and $<10^{-17}$, corresponding to S/Ns of 22.7 and 27.6). We give the number of subtracted PCs and the radius of the central mask in milliarcseconds in the top left corner of each image. The color scales are different for the two rows. Because all images of the top row have the same color stretch, the detection appears weaker in the continuum band.}
\label{hd142527b}
\end{figure*}
\begin{figure}[h!]
\centering
\includegraphics[width=\hsize]{Det_Lim_HD142527.png}
\caption{Contrast curves for HD142527. The colored shaded regions around each curve represent the standard deviation of the achieved contrast at the 6 azimuthal positions considered at each separation. The markers (red diamond, orange circle, and violet star) represent the contrast of HD142527 B.}
\label{contrast_hd142527}
\end{figure}
\section{Analysis and results}
\label{sec:analysis}
\subsection{HD142527 B: The accreting M-star companion}
\subsubsection{Comparing the performance of multiple observational setups}
\label{sec:setup_performance}
In this section, we quantitatively compare the detection performance for multiple filter combinations and PSF subtraction techniques and establish the best strategy for future high-contrast H$\alpha$ observations with SPHERE/ZIMPOL. For the analysis, the HD142527 dataset was used; during the data reduction, no further frame selection was applied. The final images of HD142527 clearly show the presence of the M-star companion east of the central star. The signal is detected in all filters with ADI (B\_Ha, N\_Ha, and Cnt\_Ha) and ASDI (in both continuum-subtracted B\_Ha and N\_Ha images) over a broad range of PCs and also for different image and inner mask sizes (see Figure~\ref{hd142527b}). \\
We used the prescription from \cite{mawet2014} to compute the false positive fraction (FPF) as a metric to quantify the confidence in the detection. The flux is measured in apertures of diameter $\lambda /D$ (16.5 mas) at the position of the signal and in equally spaced reference apertures placed at the same separation but at different PAs, such that the apertures do not overlap and fill the remaining azimuthal space. These apertures sample the noise at the separation of the companion. Since the apertures closest to the signal are dominated by negative wings from the PSF subtraction process, they were ignored. Then, we used Equation 9 and Equation 10 from \cite{mawet2014} to calculate the S/N and FPF from these apertures. This calculation takes into account the small number of apertures that sample the noise and uses the Student t-distribution to calculate the confidence of a detection. The wider wings of the t-distribution provide a better match to non-Gaussian residual speckle noise than the normal distribution.
However, the true FPF values could be higher if the wings of the true noise distribution are higher than those of the t-distribution\footnote{As an example, Figure 7 of \cite{mawet2014} shows how the t-distribution produces lower FPF values than the case where speckle noise follows more closely a modified Rician distribution. Nevertheless, it has been shown that applying ADI removes the correlated component of the noise leaving quasi-Gaussian residuals \citep{Marois2008}.}. \\
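For reference, the small-sample S/N and FPF computation of Equations 9 and 10 in \cite{mawet2014} can be written compactly as follows (a sketch assuming the aperture fluxes have already been measured; the function name is ours):

```python
import numpy as np
from scipy import stats

def snr_fpf(signal_flux, noise_fluxes):
    """Small-sample S/N and false positive fraction following the Student-t
    formalism of Mawet et al. (2014), Eqs. 9 and 10.
    signal_flux: aperture flux at the candidate position;
    noise_fluxes: fluxes in the n non-overlapping reference apertures at the
    same separation."""
    noise = np.asarray(noise_fluxes, dtype=float)
    n = noise.size
    # sqrt(1 + 1/n) penalizes the small number of noise samples
    snr = (signal_flux - noise.mean()) / (noise.std(ddof=1) * np.sqrt(1.0 + 1.0 / n))
    fpf = stats.t.sf(snr, df=n - 1)  # survival function of t with n-1 dof
    return snr, fpf
```

With only $n$ reference apertures available at small separations, the $\sqrt{1+1/n}$ penalty and the $n-1$ degrees of freedom make the FPF substantially larger than a naive Gaussian estimate would suggest.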
The narrow N\_Ha filter delivers a significantly lower FPF than the broader B\_Ha filter over a wide range of PCs (see Figure~\ref{fpf_HD142527} in Appendix \ref{App_1}). Figure~\ref{fpf_HD142527} also shows that the combination of SDI and ADI yields lower FPF values than only ADI for both filters. Applying ASDI on N\_Ha images is hence the preferred choice for future high-contrast imaging programs with SPHERE/ZIMPOL in the speckle-limited regime close to the star. Furthermore, as shown in Figure \ref{fig:obs_param} and explained in Appendix~\ref{App_2}, it is crucial to plan observations maximizing the field rotation to best modulate and subtract the stellar PSF and to achieve higher sensitivities.
In Figure~\ref{contrast_hd142527} we show the resulting contrast curves for the three filters for a confidence level (CL) of 99.99995\%. For each dataset (B\_Ha, N\_Ha, and Cnt\_Ha) and technique (ADI and ASDI), we calculated the contrast curves for different numbers of PCs (between 10 and 30 in steps of 5) after removing the companion (see Section \ref{sec:characterization}). From each set of curves, we only considered the best achievable contrast at each separation from the central star. The presence of H$\alpha$ line emission from the central star made SDI an inefficient technique to search for faint objects at small angular separations.
To derive the contrast curves, artificial companions with varying contrast were inserted at six different PAs (separated by 60$^\circ$) and in steps of $0\farcs03$ in the radial direction. As the stellar PSF was unsaturated in all individual frames, the artificial companions were obtained by shifting and flux-scaling the stellar PSFs and then adding these companions to the original frames. Also for the calculation of the ASDI contrast curves, the original H$\alpha$ filter images, containing underlying continuum and H$\alpha$ line emission, were used to create artificial secondary signals. For each reduction run only one artificial companion was inserted at a time to keep the PCs as similar as possible to the original reduction. The brightness of the artificial signals was reduced/increased until their FPF corresponded to a detection with a CL of 99.99995\% (i.e., an FPF of 2.5$\times10^{-7}$), corresponding to $\approx$5$\sigma$ if Gaussian noise were assumed. An inner mask with a radius of $0\farcs02$ was used to exclude the central parts dominated by the stellar signal. The colored shaded regions around each curve represent the standard deviation of the contrast achieved at that specific separation within the six PAs.
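The injection of an artificial companion into a pupil-stabilized sequence can be sketched as follows (a simplified illustration; the sign conventions relating PA, parallactic angle, and detector axes depend on the instrument orientation and are assumed here):

```python
import numpy as np
from scipy.ndimage import shift

def inject_companion(cube, psf, parangs, sep_pix, pa_deg, contrast):
    """Add a flux-scaled copy of the centered, unsaturated stellar PSF to every
    frame at a fixed sky position (sep_pix, pa_deg). In pupil-stabilized data
    the detector position of a fixed sky location rotates with the parallactic
    angle of each frame."""
    out = cube.copy()
    for i, ang in enumerate(parangs):
        theta = np.deg2rad(pa_deg - ang)   # sign convention assumed
        dy = sep_pix * np.cos(theta)       # "north" along +y in this sketch
        dx = sep_pix * np.sin(theta)
        out[i] += shift(psf * contrast, (dy, dx), order=1)
    return out
```

The injected cube is then run through the same PSF-subtraction pipeline, and the companion flux is iterated until the measured FPF crosses the chosen detection threshold.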
It is important to note that, while in Figure~\ref{fpf_HD142527} the N\_Ha filter provides the lowest FPF for the companion, Figure~\ref{contrast_hd142527} seems to suggest that the B\_Ha filter provides a better contrast performance. However, this is an effect of the way the contrast analysis is performed. As described above, the stellar PSF was used as a template for the artificial planets, as is usually done in high-contrast imaging data analysis. The flux distribution within a given filter can vary significantly depending on the object. In this specific case, HD142527 B is known to have H$\alpha$ excess emission, hence the flux within either H$\alpha$ filter is strongly dominated by line emission ($\sim$50\% in the B\_Ha and $\sim$83\% in the N\_Ha filter) and the contribution from the optical continuum can be neglected. The primary, however, shows strong and non-negligible optical continuum emission that contributes to the flux observed in the H$\alpha$ filters. Indeed, for the primary, only 10\% and 56\% of the flux in the B\_Ha and N\_Ha filters are attributable to line emission. Hence, when using the stellar PSF as a template for artificial planets, we obtain a better contrast performance for the B\_Ha filter as it contains overall more flux. In reality, however, if the goal is to detect H$\alpha$ line emission from low-mass accreting companions, the N\_Ha filter is to be preferred. Finally, as found by \cite{sallum2015} for the planet candidate LkCa15 b, the fact that the ASDI curves reach a deeper contrast confirms that this technique, in particular close to the star, is more effective and should be preferred to search for H$\alpha$ accretion signals.
\subsubsection{Quantifying the H$\alpha$ detection}
\label{sec:characterization}
The clear detection of the M-star companion in our images allows us to determine its contrast in all the filters and its position relative to the primary at the epoch of observation. For this purpose, we applied the Hessian matrix approach \citep{quanz2015} and calculated the sum of the absolute values of the determinants of Hessian matrices in the vicinity of the companion's signal. The Hessian matrix represents the second derivative of an n-dimensional function and its determinant is a measure for the curvature of the surface described by the function. This method allows for a simultaneous determination of the position and the flux contrast of the companion and we applied a Nelder-Mead \citep{NelderMead1965} simplex algorithm to minimize the curvature, i.e., the determinants of the Hessian matrices. We inserted negative, flux-rescaled stellar PSFs at different locations and with varying brightness in the input images and computed the resulting curvature within a region of interest (ROI) around the companion after PSF subtraction\footnote{For this analysis we used an image size of $0\farcs36\times0\farcs36$ to speed up the computation and an inner mask of 10.8 mas (radius).}. To reduce pixel-to-pixel variations after the PSF-subtraction step and allow for a more robust determination of the curvature, we convolved the images with a Gaussian kernel with a full width at half maximum (FWHM) of 8.3 mas ($\approx0.35$ of the FWHM of the stellar PSF, which was calculated to be 23.7 mas on average). To fully include the companion's signal, the ROI was chosen to be $(43.2\times43.2)$ mas around the peak flux detected in the original set of PSF subtracted images. Within the ROI, the determinants of the Hessian matrices in 10,000 evenly spaced positions on a fixed grid (every 0.43 mas) were calculated and summed up.
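The curvature-minimization idea can be sketched with a toy example (illustrative only: we build the Hessian from finite differences and replace the Nelder-Mead simplex used in the actual analysis with a plain grid search over a single injection amplitude):

```python
import numpy as np

def hessian_curvature(img):
    # Sum of |det H| over the image, with H built from second-order finite differences.
    fxx = np.gradient(np.gradient(img, axis=0), axis=0)
    fyy = np.gradient(np.gradient(img, axis=1), axis=1)
    fxy = np.gradient(np.gradient(img, axis=0), axis=1)
    return np.sum(np.abs(fxx * fyy - fxy ** 2))

# Toy scene: a Gaussian "companion" of known amplitude 0.7 on an empty background.
y, x = np.mgrid[0:41, 0:41]
psf = np.exp(-((x - 20.0) ** 2 + (y - 20.0) ** 2) / (2 * 3.0 ** 2))
scene = 0.7 * psf

# Grid search over the amplitude of the negative injection: the residual image is
# flattest (minimum summed curvature) when the injection matches the true signal.
amps = np.linspace(0.0, 1.0, 101)
best = min(amps, key=lambda a: hessian_curvature(scene - a * psf))
```

In this idealized case the curvature vanishes exactly at the true amplitude; in real data the minimization also runs over the two position coordinates.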
For the optimization algorithm to converge, we need to provide a threshold criterion: if the change in the parameters (position and contrast) between two consecutive iterations is less than a given tolerance, the algorithm has converged and the optimization returns those values for contrast and position. The absolute tolerance for the convergence was set to 0.1\footnote{This is an absolute value, meaning that if the sum of the determinants can be lowered only using steps in pixels and contrast lower than 0.1, then the algorithm stops.}, as this value is the precision to which artificial signals can be inserted into the image grid. This value applies to all the investigated parameters (position and contrast). Errors in the separation and PA measurements take into account the tolerance used for convergence and the finite grid. Errors in the contrast magnitude only consider the uncertainty due to the tolerance of the optimization.
To account for systematic uncertainties in the companion's location and contrast resulting from varying self-subtraction effects in reductions with different numbers of PCs, we ran the Hessian matrix algorithm for reductions with PCs in the range between 13 and 29 and considered the average of each parameter as final result. This range of PCs corresponds to FPF values below $2.5\times10^{-7}$ (see Figure~\ref{fpf_HD142527}).
To quantify the overall uncertainties in separation, PA, and contrast in a conservative way, we considered the maximum/minimum value (including measurement errors) among the set of results for the specific parameter and computed its difference from the mean.
In Figure \ref{fig:Hesse_result}, we present the results from this approach for the N\_Ha dataset and show the comparison between the original residual image and the image with the companion successfully removed.
\begin{figure}[t!]
\centering
\includegraphics[width=7cm]{subtraction.pdf}
\caption{Image of HD142527 before (top panel) and after (bottom panel) the insertion of the negative companion resulting from the Hessian matrix algorithm. The image flux scale is the same in both images. In this case 14 PCs were subtracted and a mask of 10.8 mas (radius) was applied on the $101\times101$ pixel images of the N\_Ha stack.}
\label{fig:Hesse_result}
\end{figure}
\begin{table*}[h!]
\caption{\label{tab:Ha_star} Summary of the stellar fluxes measured in the different filters in our ZIMPOL data and the derived H$\alpha$ line fluxes for our targets (last column). The extinction values $A_{H\alpha}$ were estimated as described in Section \ref{sec:photometry_HD142527} from $A_V$.}
\centering
\begin{tabular}{llllll}
\hline\hline\noalign{\smallskip}
Object & $A_V$ [mag] &$A_{H\alpha}$ [mag] & $F^*_{\text{F\_H}\alpha}$ [erg/s/cm$^2$] & $F^*_{\text{Cnt\_H}\alpha}$ [erg/s/cm$^2$] & $F^*_{H\alpha}$ [erg/s/cm$^2$]\\\hline
\noalign{\smallskip}
HD142527 (N\_Ha) & <0.05\tablefootmark{a} & 0.04 & $3.0\pm0.8\times10^{-11}$ & $6.1\pm0.2\times10^{-11}$ & $1.7\pm0.8\times10^{-11}$ \\ \hline
\noalign{\smallskip}
HD142527 (B\_Ha) & <0.05\tablefootmark{a} &0.04 & $9.7\pm0.8\times10^{-11}$ & $6.1\pm0.2\times10^{-11}$ & $1.0\pm0.5\times10^{-11}$ \\ \hline
\noalign{\smallskip}
HD142527 B (N\_Ha) & <0.05\tablefootmark{a} & 0.04 & $9.1^{+3.5}_{-2.9}\times10^{-14}$ & $7.4^{+1.4}_{-2.1}\times10^{-14}$ & $7.6^{+3.5}_{-2.9}\times10^{-14}$ \\ \noalign{\smallskip}\hline
\noalign{\smallskip}
HD142527 B (B\_Ha) & <0.05\tablefootmark{a} &0.04 & $2.0\pm0.4\times10^{-13}$ & $7.4^{+1.4}_{-2.1}\times10^{-14}$ & $1.0^{+0.5}_{-0.4}\times10^{-13}$ \\ \noalign{\smallskip}\hline
\noalign{\smallskip}
HD135344 B & 0.23 \tablefootmark{a} &0.19 & $3.1\pm1.0\times10^{-11}$ & $4.9\pm0.6\times10^{-11}$ & $1.8\pm0.8\times10^{-11}$ \\ \hline
\noalign{\smallskip}
TW Hya & 0.0\tablefootmark{b} & 0.0 & $9.9\pm0.4\times10^{-11}$ & $1.5\pm0.05\times10^{-11}$ &$7.8\pm0.3\times10^{-11}$ \\ \hline
\noalign{\smallskip}
HD100546 &<0.05\tablefootmark{a} & 0.04 & $4.2\pm0.2\times10^{-10}$ & $1.6\pm0.1\times10^{-10}$ & $1.7\pm0.2\times10^{-10}$ \\ \hline
\noalign{\smallskip}
HD169142 & 0.43\tablefootmark{c} & 0.35 & $1.1\pm0.1\times10^{-10}$& $7.4\pm0.2\times10^{-11}$ & $3.2\pm4.4\times10^{-12}$ \\ \hline
\noalign{\smallskip}
MWC758 & 0.22\tablefootmark{d} & 0.18 & $8.1\pm0.7\times10^{-11}$ & $5.3\pm0.2\times10^{-11}$ &$6.3\pm3.7\times10^{-12}$ \\
\noalign{\smallskip}\hline\hline
\end{tabular}
\tablebib{\tablefoottext{a}{\cite{Fairlamb2015}.}\tablefoottext{b}{\cite{uyama2017}.}\tablefoottext{c}{\cite{Fedele2017}.}\tablefoottext{d}{\cite{vandenAncker1998}.}}
\end{table*}
\subsubsection{Astrometry}
\label{sec:HD142527_astrometry}
The previously described algorithm was used to determine the best combination of separation, PA, and magnitude contrast for HD142527 B. In the N\_Ha data the companion is located at $63.3^{+1.3}_{-1.0}$ mas from the primary star, in the B\_Ha dataset at $62.3^{+1.7}_{-2.2}$ mas, and in the Cnt\_Ha data at $62.8^{+2.1}_{-1.9}$ mas. The corresponding PAs are $(97.8\pm0.9)^\circ$, $(99.4^{+1.1}_{-1.5})^\circ$ and $(99.0^{+1.5}_{-1.6})^\circ$, respectively. Errors in the PA measurements also take into account the above mentioned uncertainty in the astrometric calibration of the instrument, which was added in quadrature to the PA error bars.
As within the error bars all filters gave the same results, we combined them and found that HD142527 B is located at a projected separation of $62.8^{+2.1}_{-2.7}$ mas from the primary star ($9.9^{+0.3}_{-0.4}$ AU at $157.3\pm1.2$ pc) and has a PA of $(98.7\pm1.8)^\circ$. The final values result from calculating the arithmetic mean of all the values obtained from the three different datasets, while their errors are calculated identically to those for each single dataset.
In Figure \ref{fig:orbit} we compare the positions previously estimated \citep{close2014, rodigas2014, lacour2016,Christiaens2018} and that resulting from our analysis. \cite{lacour2016} used a Markov chain Monte Carlo analysis to infer the orbital parameters of HD142527 B. Because the past detections were distributed over a relatively small orbital arc ($\sim 15^\circ$), it was difficult to constrain the parameters precisely. The high precision measurement added by our SPHERE/ZIMPOL data extends the arc to a range of $\sim30^\circ$.
An updated orbital analysis is provided in \cite{Claudi2018}. Figure \ref{fig:orbit} shows that HD142527 B is clearly approaching the primary in the plane of the sky.
\begin{figure}[b!]
\centering
\includegraphics[width=\hsize]{orbit_HD142527B.pdf}
\caption{Position of HD142527 B based on NaCo sparse aperture masking (red pentagons), MagAO (cyan triangles), GPI non-redundant masking (dark green diamonds) and VLT/SINFONI (blue circle) data from \cite{rodigas2014}, \cite {close2014}, \cite{lacour2016}, and \cite{Christiaens2018}, together with the SPHERE/ZIMPOL observation presented in this work (light green square). The position of HD142527 A is shown with the yellow star at coordinates (0,0).}
\label{fig:orbit}
\end{figure}
\begin{table*}[h!]
\caption{\label{tab:limits} Summary of our detection limits for each target. While for HD100546, HD169142, and MWC\,758 we consider the specific locations (separation and PA) of previously claimed companion candidates, we focused our analyses for HD135344B and TW Hya on separations related to disk gaps (hence no specific PA). Columns 5 and 6 give the mass and radius assumed for the accretion rate calculations, column 7 gives the contrast magnitude at the specific location and columns 8--11 report the values for the H$\alpha$ line flux, H$\alpha$ line luminosity, accretion luminosity, and mass accretion rate ignoring any possible dust around the companion.}
\centering
\small
\begin{tabular}{lllllllllll}
\hline\hline\noalign{\smallskip}
Target & Sep. & PA & Ref. & Mass & Radius & $\Delta$H$_\alpha$ & $F^p_{H\alpha}$ & $L_{H\alpha}$ [$L_\odot$] & $L_\mathrm{acc}$ [$L_\odot$] & $\dot{M}$ [$M_\odot\text{ yr}^{-1}$] \\
& [mas] & [$^\circ$] & & [$M_J$] & [$R_J$] & [mag] & [erg/s/cm$^2$] & \\\hline
\noalign{\smallskip}
HD135344B & $180$ & & (a) & 10.2\tablefootmark{(h)} & 1.6\tablefootmark{(j)} & $>9.8$ & $<3.8\times10^{-15}$ & $<2.0\times10^{-6}$ & $<3.7\times10^{-6}$ & $<2.4\times10^{-12}$ \\ \hline
\noalign{\smallskip}
TW Hya & $390$ & & (b) & 2\tablefootmark{(k)} & 1.3\tablefootmark{(j)} & $>9.3$ & $<1.9\times10^{-14}$ & $<2.2\times10^{-6}$ & $<3.5\times10^{-6}$ & $<1.0\times10^{-11}$ \\ \hline
\noalign{\smallskip}
\multirow{2}{*}{HD100546} & $480\pm4$ & $8.9\pm0.9$ & (c) & 15\tablefootmark{(c)} & 2\tablefootmark{(j)} & $>11.4$ & $<1.1\times10^{-14}$ & $<4.7\times10^{-6}$ & $<1.1\times10^{-5}$ & $<6.4\times10^{-12}$ \\ \cline{2-11}
\noalign{\smallskip} & $\sim140$ & $\sim133$ & (d) & 15\tablefootmark{(l)} & 2\tablefootmark{(j)} & $>9.3$ & $<7.9\times10^{-14}$ & $<3.3\times10^{-5}$ & $<2.0\times10^{-4}$ & $<1.1\times10^{-10}$ \\ \hline
\noalign{\smallskip}
\multirow{2}{*}{HD169142} & $\sim340$ & $\sim175$ & (e) & 0.6\tablefootmark{(e)} & 1.4\tablefootmark{(j)} & $>10.7$ & $<5.7\times10^{-15}$ & $<2.5\times10^{-6}$ & $<4.3\times10^{-6}$ & $<4.4\times10^{-11}$ \\ \cline{2-11}
\noalign{\smallskip} & $156\pm32$ & $7.4\pm11.3$ & (f) &10\tablefootmark{(f)} & 1.7\tablefootmark{(j)} & $>9.9$ & $<1.2\times10^{-14}$ & $<5.2\times10^{-6}$ & $<1.3\times10^{-5}$ & $<7.6\times10^{-11}$ \\ \hline
\noalign{\smallskip}
MWC 758 & $111\pm4$ & $162\pm5$ & (g) & 5.5\tablefootmark{(m)} & 1.7\tablefootmark{(n)} &$>9.4$ & $<1.4\times10^{-14}$ & $<1.2\times10^{-5}$ & $<4.3\times10^{-5}$ & $<5.5\times10^{-11}$ \\
\noalign{\smallskip}\hline\hline
\end{tabular}
\tablebib{\tablefoottext{a}{ \cite{Andrews2011}}; \tablefoottext{b}{\cite{garufi2013}}; \tablefoottext{c}{\cite{quanz2015}}; \tablefoottext{d}{\cite{Brittain2014}}; \tablefoottext{e}{\cite{osorio2014}}; \tablefoottext{f}{\cite{reggiani2014}}; \tablefoottext{g}{\cite{Reggiani2017}}; \tablefoottext{h}{\cite{maire2017}}, \tablefoottext{j}{AMES-Cond \citep{Allard2001, baraffe2003}}, \tablefoottext{k}{\cite{ruane2017}}, \tablefoottext{l}{\cite{Mendigutia2017}}, \tablefoottext{m}{\cite{Pinilla2015}}, \tablefoottext{n}{BT-Settl \citep{Allard2012}}.}
\end{table*}
\subsubsection{Photometry}
\label{sec:photometry_HD142527}
The Hessian matrix approach yields the contrasts between HD142527 A and B in every filter: $\Delta$N\_Ha $= 6.3^{+0.2}_{-0.3} \text{ mag}$ in the narrow band, $\Delta$B\_Ha $= 6.7\pm0.2 \text{ mag}$ in the broad band, and $\Delta$Cnt\_Ha $= 7.3^{+0.3}_{-0.2} \text{ mag}$ in the continuum filter. To quantify the brightness of the companion and not only its contrast with respect to the central star, we determined the flux of the primary in the multiple filters. We measured the count rate ($cts$) in the central circular region with radius $\sim1\farcs5$ in all frames of each stack and computed the mean and its uncertainty $\sigma/\sqrt{n}$, where $\sigma$ is the standard deviation of the count rate within the dataset and $n$ is the number of frames. No aperture correction was required because the same aperture size was used by \cite{schmid2017} to determine the zero points for the flux density for the three filters from photometric standard star calibrations. To estimate the continuum flux density we used their Equation 4
\begin{equation}
F^*_\lambda(Cnt\_Ha)=cts\cdot10^{0.4\,(am\cdot k_1+m_{mode})}\cdot c_{zp}^{cont}(Cnt\_Ha),
\end{equation}
where $c_{zp}^{cont}(Cnt\_Ha)$ is the zero point of the Cnt\_Ha filter, $\textit{cts}=1.105\,(\pm0.001)\times10^5$ ct/s is the count rate measured from our data, $am=1.06$ is the average airmass, $k_1$ is the atmospheric extinction at Paranal \citep[$k_1(\lambda)=0.085$ mag/airmass for Cnt\_Ha, $k_1(\lambda)=0.082$ mag/airmass for B\_Ha and N\_Ha; cf.][]{Patat2011}, and $m_{mode}=-0.23$ mag is the mode dependent transmission offset, which takes into account the enhanced throughput of the R-band dichroic with respect to the standard gray beam splitter. The flux density of the primary star in the continuum filter $F^*_\lambda(Cnt\_Ha)$ was then used to estimate the fraction of counts in the line filters due to continuum emission via
\begin{equation}
cts(F\_Ha)=\frac{F^*_\lambda(Cnt\_Ha)}{c_{zp}^{cont}(F\_Ha)}\times10^{-0.4(am\cdot k_1+m_{mode})},
\end{equation}
where $c_{zp}^{cont}(F\_Ha)$ is the continuum zero point of the H$\alpha$ filter used in the observations \citep[cf.][]{schmid2017}. During this step, we assumed that the continuum flux density was the same in the three filters. The continuum count rate was subtracted from the total count rate in B\_Ha and N\_Ha, $\text{cts}(B\_Ha)=1.631\,(\pm0.001)\times10^5$ ct/s and $\text{cts}(N\_Ha)=3.903\,(\pm0.003)\times10^4$ ct/s, leaving only the flux due to pure H$\alpha$ emission. These were used, together with Equation (1) with line zero points, to determine the pure H$\alpha$ line fluxes (see fifth column in Table \ref{tab:Ha_star}). For each filter, the continuum flux density was multiplied by the filter equivalent width, and the flux contribution from line emission was added for the line filters. As in \cite{sallum2015}, we assumed the B object to have the same extinction as A, ignoring additional absorption from the disk. Indeed, we considered an extinction of $A_V=0.05$ mag \citep{Fairlamb2015} and, interpolating the standard reddening law of \cite{Mathis1990} for $R_V=3.1$, we estimated the extinction at $\sim650$ nm to be A$_{H\alpha}=0.04$ mag. The stellar flux was found to be $6.1\,\pm0.2\times 10^{-11}$ erg/s/cm$^2$ in the Cnt\_Ha filter, $9.7\,\pm0.8\times10^{-11}$ erg/s/cm$^2$ in the B\_Ha filter and $3.0\,\pm0.8\times10^{-11}$ erg/s/cm$^2$ in the N\_Ha filter (see Table \ref{tab:Ha_star}).\\
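The conversion between count rates and flux densities in Equations (1) and (2) can be sketched numerically as follows (the zero point below is a placeholder chosen for illustration only; the calibrated values are those published by \cite{schmid2017}):

```python
def flux_from_counts(cts, czp, am=1.06, k1=0.085, m_mode=-0.23):
    # Equation (1): flux density from count rate, airmass extinction,
    # mode-dependent transmission offset, and filter zero point.
    return cts * 10 ** (0.4 * (am * k1 + m_mode)) * czp

def counts_from_flux(flux, czp, am=1.06, k1=0.085, m_mode=-0.23):
    # Equation (2): the inverse relation, used to predict the continuum
    # count rate expected in a line filter.
    return flux / czp * 10 ** (-0.4 * (am * k1 + m_mode))

czp_cnt = 6.3e-16  # PLACEHOLDER zero point in erg/s/cm^2 per ct/s, not the published value
f_cnt = flux_from_counts(1.105e5, czp_cnt)   # stellar Cnt_Ha count rate quoted in the text
cts_back = counts_from_flux(f_cnt, czp_cnt)  # round trip recovers the input count rate
```

Note that in the actual analysis the two relations use different zero points (the Cnt\_Ha zero point in the first step, the continuum zero point of the line filter in the second), and $k_1$ differs slightly between the filters; the round trip above only illustrates the functional form.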
With the empirically estimated contrasts, we calculated the companion flux, i.e., line plus continuum emission or continuum only emission, in the three filters as follows:
$$F^p_{Cnt\_Ha}=7.4^{+1.4}_{-2.1}\times10^{-14}\text{ erg/s/cm}^2,$$ $$F^p_{B\_Ha}=2.0\,\pm0.4\times10^{-13}\text{ erg/s/cm}^2,$$ $$F^p_{N\_Ha}=9.1^{+3.5}_{-2.9}\times10^{-14}\text{ erg/s/cm}^2.$$\\
We note that the contrast we calculated in the continuum filter is very similar to that obtained by \cite{close2014} of $\Delta \text{mag} = 7.5\pm0.25$ mag. The direct estimation of the brightness of the primary in each individual ZIMPOL filter led to a larger difference when comparing the companion's apparent magnitude in our work ($m^B_{Cnt\_Ha}=15.4\pm0.2$ mag) with that from \cite{close2014} ($m^B_{\text{Close}}=15.8\pm0.3$ mag). Such values are possibly consistent within the typical variability of accretion of the primary and secondary at these ages. However, given the different photometry sources and filters used for the estimation of the stellar flux densities in the two works, the results cannot be easily compared.
\subsubsection{Accretion rate estimates}
\label{sec:accretion_HD142527}
The difference between the flux in the line filters and the continuum filter (normalized to the H$\alpha$ filter widths) represents the pure H$\alpha$ line emission, which for HD142527 B amounts to $f^{line}_{B\_Ha}=1.0^{+0.5}_{-0.4}\times10^{-13}$ erg/s/cm$^2$ and $f^{line}_{N\_Ha}=7.6^{+3.5}_{-2.9}\times10^{-14}$ erg/s/cm$^2$, respectively. The line flux is then converted into a line luminosity by multiplying it by $4\pi d^2$, with $d$ the Gaia distance (see Table \ref{tab_stars}), yielding $L_{B\_Ha}=7.7^{+4.0}_{-3.6}\times10^{-5}\,L_\odot$ and $L_{N\_Ha}=6.0^{+2.8}_{-2.4}\times10^{-5}\,L_\odot$. We then estimated the accretion luminosity with the classical T Tauri stars (CTTS) relationship from \cite{rigliaco2012}, in which the logarithmic accretion luminosity grows linearly with the logarithmic H$\alpha$ luminosity
\begin{equation}
\log(L_\mathrm{acc}) = b+a\log(L_{H\alpha}),
\end{equation}
where $a = 1.49\pm0.05$ and $b=2.99 \pm 0.16$ are empirically determined coefficients. We calculated the accretion luminosity for both datasets, yielding $L^\mathrm{acc}_{B\_Ha}=7.3^{+6.8}_{-6.4}\times10^{-4} L_\odot$ and $L^\mathrm{acc}_{N\_Ha}=5.0^{+4.4}_{-4.0}\times10^{-4} L_\odot$.\\
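This conversion from measured line flux to accretion luminosity can be sketched numerically (a cross-check using the N\_Ha line flux of HD142527 B and its Gaia distance; the physical constants are rounded):

```python
from math import log10, pi

L_SUN = 3.828e33   # erg/s
PC_CM = 3.0857e18  # cm per parsec

def halpha_to_lacc(f_line, d_pc, a=1.49, b=2.99):
    # Line flux (erg/s/cm^2) -> L_Halpha (L_sun) via L = 4*pi*d^2*F,
    # then L_acc (L_sun) via the CTTS relation log(L_acc) = b + a*log(L_Halpha).
    l_line = 4 * pi * (d_pc * PC_CM) ** 2 * f_line / L_SUN
    return l_line, 10 ** (b + a * log10(l_line))

l_ha, l_acc = halpha_to_lacc(7.6e-14, 157.3)
# l_ha is ~6e-5 L_sun and l_acc ~5e-4 L_sun, consistent with the values quoted above
```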
Following \citet{gullbring1998} we finally used
\begin{equation}
\dot{M}_\mathrm{acc}=\left(1-\frac{R_c}{R_{in}}\right)^{-1} \frac{L_\mathrm{acc}R_c}{GM_c}\sim1.25\,\frac{L_\mathrm{acc}R_c}{GM_c}
\label{eq:accretion_rate}
\end{equation}
to constrain the mass accretion rate. The quantity $G$ is the universal gravitational constant, and $R_c$ and $M_c$ are the radius and mass of the companion, respectively. Assuming that the truncation radius of the accretion disk $R_{in}$ is $\sim5R_c$, we obtain $\left(1-\frac{R_c}{R_{in}}\right)^{-1}\sim1.25 $. For the companion mass and radius, two different sets of values were considered: \cite{lacour2016} fitted the SED of HD142527 B with evolutionary models \citep{baraffe2003} and calculated $M_c=0.13\pm0.03\;M_\odot$ and $R_c=0.9\pm0.15\;R_\odot$, while \cite{Christiaens2018} estimated from H+K band VLT/SINFONI spectra $M_c=0.34\pm0.06\;M_\odot$ and $R_c=1.37\pm0.05\;R_\odot$, in the presence of a hot circumstellar environment\footnote{They considered two different cases in which the companion may or may not be surrounded by a hot environment contributing in $H$+$K$. Because of the presence of accreting material shown in this work, we decided to consider the first case.}. The accretion rates obtained from the H$\alpha$ emission line are $\dot{M}_{B\_Ha}=2.0^{+2.0}_{-1.9}\times 10^{-10}M_\odot/\text{yr}$ and $\dot{M}_{N\_Ha}=1.4^{+1.3}_{-1.2}\times 10^{-10}M_\odot/\text{yr}$ in the first case and $\dot{M}_{B\_Ha}=1.2\pm1.1\times 10^{-10}M_\odot/\text{yr}$ and $\dot{M}_{N\_Ha}=0.8\pm0.7\times 10^{-10}M_\odot/\text{yr}$ in the second case.
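As a sketch, Equation \ref{eq:accretion_rate} can be evaluated numerically (here with the mass and radius from \cite{lacour2016} and the N\_Ha accretion luminosity; CGS constants rounded):

```python
G = 6.674e-8      # cm^3 g^-1 s^-2
M_SUN = 1.989e33  # g
R_SUN = 6.957e10  # cm
L_SUN = 3.828e33  # erg/s
YEAR = 3.156e7    # s

def accretion_rate(l_acc_lsun, m_c_msun, r_c_rsun, rin_over_rc=5.0):
    # Mdot = (1 - Rc/Rin)^-1 * L_acc * Rc / (G * Mc), returned in M_sun/yr.
    factor = 1.0 / (1.0 - 1.0 / rin_over_rc)  # ~1.25 for R_in = 5 R_c
    mdot_cgs = factor * l_acc_lsun * L_SUN * r_c_rsun * R_SUN / (G * m_c_msun * M_SUN)
    return mdot_cgs * YEAR / M_SUN

rate = accretion_rate(5.0e-4, 0.13, 0.9)
# rate is ~1.4e-10 M_sun/yr, consistent with the N_Ha value quoted above
```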
Some H$\alpha$ flux loss from the instrument when the N\_Ha filter is used might explain the lower value of $\dot{M}_{N\_Ha}$ compared to $\dot{M}_{B\_Ha}$. Indeed, according to Figure 2 and Table 5 from \cite{schmid2017}, the N\_Ha filter is not perfectly centered on the H$\alpha$ rest wavelength, implying that a fraction of the flux could be lost, in particular if the line profile is asymmetric. Moreover, the high temperature and high velocities of infalling material cause the H$\alpha$ emission profiles of CTTS to be broad \citep{Hartmann1994,White_Basri2003}. Line broadening due to rotation and a line shift due to possible radial motion of the object might also play a role, although these effects are not expected to account for the $\sim$40\% H$\alpha$ flux difference of HD142527 B. We argue, therefore, that with the available data it is very difficult to estimate the amount of line flux lost in the N\_Ha filter, and that the value given by the B\_Ha filter is expected to be more reliable, since it includes all line emission from the accreting companion.\\
As shown in the PDI images from \cite{Avenhaus2017}, dust is present at the separation of the secondary, possibly fully embedding the companion or in the form of a circumsecondary disk. In our calculations, we neglected any local extinction effects due to disk material. It is therefore possible that, on the one hand, some of the intrinsic H$\alpha$ flux gets absorbed/scattered and the actual mass accretion rate is higher than that estimated in this work; on the other hand, the material may also scatter some H$\alpha$ (or continuum) emission from the central star, possibly contributing in very small amounts to the total detected flux.\\
Although the results obtained in this work are on the same order of magnitude as those obtained by \cite{close2014}, who derived a rate of $6\times10^{-10}\,M_\odot \text{ yr}^{-1}$, it is important to point out some differences in the applied methods. Specifically, \cite{close2014} used the flux estimated in the H$\alpha$ filter to calculate $L_{H\alpha}$, while we subtracted the continuum flux and considered only the H$\alpha$ line emission. Moreover, we combined the derived contrast with the stellar flux in the H$\alpha$ filters obtained from our data, while \cite{close2014} used the $R$-band magnitude of the star. As HD142527 A is also accreting and therefore emitting H$\alpha$ line emission, this leads to a systematic offset.
Finally, \cite{close2014} used the relationship found by \cite{Fang2009} and not that from \cite{rigliaco2012}, leading to a difference in the $L_\mathrm{H\alpha}-L_\mathrm{acc}$ conversion.
\subsection{HD135344 B}
Visual inspection of the final PSF-subtracted ADI images of HD135344B showed a potential signal north of the star. Given the weakness of the signal and its low statistical significance, we analyze and discuss it further in Appendix \ref{app:HD135344B_companion}. \\
In Figure \ref{fig:HD135344B_contrast_curves} we plot the contrast curves obtained as explained in section \ref{sec:setup_performance} using the N\_Ha and the Cnt\_Ha datasets and applying ASDI. In addition to the $1\farcs08\times1\farcs08$ images we examined $2\farcs88\times2\farcs88$ images to search for accreting companions beyond the contrast limited region and beyond the spiral arms detected on the surface layer of the HD135344 B circumstellar disk. However, no signal was detected. We paid special attention to the separations related to the reported disk cavities \citep{Andrews2011, garufi2013}. We chose to investigate specifically the cavity seen in scattered light at $0\farcs18$. The outer radius of the cavity seen in millimeter continuum is larger, but small dust grains are expected to be located inside of this radius, increasing the opacity and making any companion detection more difficult. Neglecting the small inclination \citep[$i\sim11^\circ$,][]{Lyo2011}, the disk is assumed to be face-on and the contrast value given by the curve of Figure \ref{fig:HD135344B_contrast_curves} at $0\farcs18$ is considered ($\Delta$N\_Ha = 9.8 mag). We derived the H$\alpha$ flux from the star in the N\_Ha filter as presented in section \ref{sec:photometry_HD142527} using the stellar flux values for the different filters given in Table \ref{tab:Ha_star}, and calculated the upper limits for the companion flux, accretion luminosity, and mass accretion rate following Section \ref{sec:photometry_HD142527} and Section \ref{sec:accretion_HD142527}. The accretion rate is given by Equation \ref{eq:accretion_rate}, assuming a planet mass of $M_c=10.2\,M_J$, the maximum mass that would have remained undetected at those separations according to the analysis of \cite{maire2017}. For consistency with their approach, we then used AMES-Cond\footnote{The AMES-Cond and BT-Settl models used throughout this paper were downloaded on Feb. 06, 2018, from https://phoenix.ens-lyon.fr/Grids/AMES-Cond/ISOCHRONES/ and https://phoenix.ens-lyon.fr/Grids/BT-Settl/CIFIST2011\_2015/ISOCHRONES/, respectively.} evolutionary models \citep{Allard2001, baraffe2003} to estimate the radius of the object, $R_c=1.6\,R_J$, based on the age of the system. All values, sources, and models used are summarized in Table \ref{tab:Ha_star} and in Table \ref{tab:limits} together with all the information for the other objects. The final accretion rate upper limit has been calculated to be $<2.4\times10^{-12}\,M_\odot\text{ yr}^{-1}$ at an angular separation of $0\farcs18$, i.e., the outer radius of the cavity seen in scattered light.
\begin{figure}[b!]
\centering
\includegraphics[width=\hsize]{Det_Lim_HD135344B.pdf}
\caption{Contrast curves for HD135344 B. The vertical lines indicate the outer radii of the cavities in small and large dust grains presented in \cite{garufi2013} and \cite{Andrews2011}, respectively.}
\label{fig:HD135344B_contrast_curves}
\end{figure}
\begin{figure*}[t!]
\centering
\includegraphics[width=\hsize]{Other_objects.pdf}
\caption{Final PSF subtracted ADI images of TW Hya, HD100546, HD169142, and MWC\,758. We applied a central mask with radius 32.4 mas and 18 PCs were removed. No companion candidates were detected. All images have a linear, but slightly different, color scale.}
\label{fig:others}
\end{figure*}
\subsection{TW Hya}
The TW Hya dataset does not show any point source either in the $1\farcs08\times1\farcs08$ images (see Figure \ref{fig:others}) or in the $2\farcs88\times2\farcs88$ images, which are large enough to probe all the previously reported disk gaps. The final contrast curves are shown in Figure~\ref{fig:TWHya_contrast_curves}. We also looked specifically at detection limits within the gaps observed by \cite{vanboekel2017} and focused in particular on the dark annulus at 20 AU ($0\farcs39$) from the central star, which has a counterpart approximately at the same position in 870 $\mu$m dust continuum observations \citep{Andrews2016}.
Since the circumstellar disk has a very small inclination, we considered the disk to be face-on and assumed the gaps to be circular. At $0\farcs39$, planets with contrast lower than 9.3 mag with respect to TW Hya would have been detected with the ASDI technique (cf. Figure~\ref{fig:TWHya_contrast_curves}). This value was then combined with the stellar flux calculated as described in section \ref{sec:photometry_HD142527}, to obtain the upper limit of the companion flux in the B\_Ha filter. This yielded $\dot{M}<1.0\times10^{-11}\,M_\odot\text{ yr}^{-1}$ (see Table~\ref{tab:limits}) as the upper limit for the mass accretion rate based on our SPHERE/ZIMPOL dataset.
\subsection{HD100546}
\label{sec:Analysis_HD100546}
The HD100546 dataset suffered from rather unstable and varying observing conditions, which resulted in a large dispersion in the recorded flux (see Figure \ref{fig:HD100546_frame_sel} in Appendix~\ref{App_3}).
We hence selected only the last 33\% of the observing sequence, which had relatively stable conditions, for our analysis (see Appendix~\ref{App_3}).
The H$\alpha$ data did not confirm either of the two protoplanet candidates around HD100546 (see Figure \ref{fig:others}) and we show the resulting detection limits in Figure \ref{fig:HD100546_contrast_curves}.
In order to investigate the detection limits at the positions of the protoplanet candidates, we injected artificial planets with increasing contrast starting from $\Delta$B\_Ha = 8.0 mag until the signal was no longer detected with a CL of at least 99.99995\%, and we repeated the process subtracting different numbers of PCs (from 10 to 30). At the position where \cite{quanz2015} claimed the presence of a protoplanetary companion, we would have been able to detect objects with a contrast lower than 11.4 mag (using PC=14 and the ADI reduction). Consequently, if it exists, a 15 $M_J$ companion \citep{quanz2015} located at the position of HD100546 b must be accreting at a rate $<6.4\times10^{-12}\,M_\odot\text{ yr}^{-1}$ in the framework of our analysis and assuming no dust is surrounding the object. We note that, in comparison to the accretion luminosity $L_\mathrm{acc}$ estimated by \cite{Rameau2017}, our upper limit is one order of magnitude lower (cf. Table \ref{tab:limits}).
\begin{figure}[t!]
\centering
\includegraphics[width=\hsize]{Det_Lim_TWHya.pdf}
\caption{Contrast curves for TW Hya. The vertical line indicates the gap at $0\farcs39$ detected in both scattered light \citep{Akiyama2015,vanboekel2017} and submillimeter continuum \citep{Andrews2016}.}
\label{fig:TWHya_contrast_curves}
\end{figure}
For the position of HD100546\,c, we used the orbit given in \cite{Brittain2014} to infer the separation and PA of the candidate companion at the epoch of our observations, i.e., $\rho\simeq0\farcs14$ and $\text{PA}\simeq 133^\circ$. At this position our data reach a contrast of 9.3 mag (using PC=14 on the continuum-subtracted dataset), implying an upper limit for the companion flux in the H$\alpha$ filter of $7.9\times10^{-14}$ erg/s/cm$^2$ and a mass accretion rate $<1.1\times10^{-10}\,M_\odot\text{ yr}^{-1}$. This puts $\sim2$ orders of magnitude stronger constraints on the accretion rate of HD100546\,c than the limits obtained from the polarimetric H$\alpha$ images presented in \cite{Mendigutia2017} for a $15\,M_J$ planet. We note that owing to its orbit, HD100546\,c is expected to have just disappeared or to disappear quickly behind the inner edge of the disk \citep{Brittain2014}. Therefore, extinction could play a major role in future attempts to detect this source.
\subsection{HD169142}
We analyzed the data with ADI and ASDI reductions (see Figure \ref{fig:others} for the ADI image). The latter was particularly interesting in this case because the stellar flux density in the continuum and H$\alpha$ filter is very similar and the continuum subtraction almost completely removed the flux of the central PSF, indicating that the central star has limited to no H$\alpha$ line emission (cf. Table \ref{tab:Ha_star}; see also \cite{Grady2007}). We calculated the detection limits as explained in section \ref{sec:setup_performance} for both filters for a confidence level of 99.99995\%, as shown in Figure~\ref{fig:HD169142_contrast_curves}.\\
\begin{figure}[t!]
\centering
\includegraphics[width=\hsize]{Det_Lim_HD100546.pdf}
\caption{Contrast curves for HD100546. The gray dashed vertical line shows the separation of the outer gap edge cavity presented in \cite{Avenhaus2014}, while the solid blue lines indicate the separations of the forming planet candidates around HD100546 \citep{Quanz2013_discovery,Brittain2014}.}
\label{fig:HD100546_contrast_curves}
\end{figure}
We investigated with particular interest the positions of the candidates mentioned in Section \ref{sec:sample} and derived specific detection limits at their locations, independently of the azimuthally averaged contrast curve. At the position of the compact source found by \cite{osorio2014} (we call this potential source HD169142\,c), our data are sensitive to objects 10.7 mag fainter than the central star (obtained by subtracting 16 PCs with ASDI reduction). At the position of HD169142\,b \citep{reggiani2014,biller2014} an object with a contrast as large as 9.9 mag could have been detected (PC=19; ASDI). For the compact source from \cite{osorio2014} we found $\dot{M}<4.4\times10^{-11}\,M_\odot\text{ yr}^{-1}$.
Similarly, for the object detected by \cite{biller2014} and \cite{reggiani2014}\footnote{Within the uncertainties in the derived positions, these objects are indistinguishable and hence we assume it is the same candidate.} we found an upper limit for the mass accretion rate of $\dot{M}<7.6\times10^{-11}\,M_\odot\text{ yr}^{-1}$.
\subsection{MWC\,758}
Our analysis of the SPHERE/ZIMPOL images did not show an H$\alpha$ counterpart to the MWC\,758 companion candidate detected by \citet{Reggiani2017}, as shown in Figure \ref{fig:others}. This is consistent with the recently published results of \cite{Huelamo2018}. Nonetheless, we provide a detailed analysis and discussion of the same MWC\,758 data to allow a comparison with the other datasets.
In Figure \ref{fig:MWC758_contrast_curves} we show the detection limits obtained with ADI for the B\_Ha and Cnt\_Ha dataset, and the results of the ASDI approach. At separations larger than $0\farcs25$, companions with a contrast smaller than 10 mag could have been detected. At the specific position of the candidate companion\footnote{For our analysis we considered the position obtained from the first dataset in \cite{Reggiani2017} because the observing date was close to the epoch of the H$\alpha$ observations.} we can exclude objects with contrasts lower than 9.4 mag (obtained subtracting 15 PCs using ASDI).
To explain the presence of a gap in dust-continuum emission without a counterpart in scattered light, a steady replenishment of $\mu$m-sized particles is required, which implies that a companion in the inner disk should not exceed a mass of $M_c=5.5\,M_J$ \citep{Pinilla2015, Reggiani2017}. In line with the analysis of \cite{Reggiani2017}, we used the BT-Settl model to estimate the radius of the companion and derived an upper limit for the mass accretion rate of $\dot{M}<5.5\times10^{-11}\,M_\odot\text{ yr}^{-1}$ (see Table~\ref{tab:limits}). Our analysis puts slightly stronger constraints on the mass accretion rate than those reported in \cite{Huelamo2018}.
\begin{figure}[t!]
\centering
\includegraphics[width=\hsize]{Det_Lim_HD169142.png}
\caption{Contrast curves for HD169142. The shaded region represents the annular gap observed in scattered light \citep{quanz2013} and in millimeter continuum \citep{osorio2014}. The blue vertical lines represent the separation of the companion candidates \citep{reggiani2014,biller2014,osorio2014}.}
\label{fig:HD169142_contrast_curves}
\end{figure}
\section{Discussion}
\label{sec:discussion}
\subsection{SPHERE/ZIMPOL as a hunter for accreting planets}
The SPHERE/ZIMPOL H$\alpha$ filters allow for higher angular resolution compared to filters in the infrared regime and can, in principle, search for companions closer to the star. For comparison, a resolution element is 5.8 times smaller in the H$\alpha$ filter than in the $L'$ filter, meaning that the inner working angle (IWA) is smaller by the same amount so that closer-in objects could be observed, if bright enough\footnote{We note that SPHERE does not operate at similarly high Strehl ratios in the optical regime as it is able to do in the infrared.}. An instrument with similar capabilities is MagAO \citep{MagAO2014, MagAO2016}, but as the Magellan telescope has a primary mirror of 6.5 m diameter, it has a slightly larger IWA than SPHERE at the 8.2 m VLT/UT3 telescope. A direct comparison of the HD142527\,B detection shows that ZIMPOL reaches a factor $\sim2.5$ higher S/N in one-third of total integration time and field rotation of MagAO under similar seeing conditions, even if the companion is located $\gtrsim20$ mas closer to the star.
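The factor of 5.8 quoted above follows directly from the $\lambda/D$ diffraction scaling; the short sketch below verifies it. The wavelengths used are nominal assumptions (656 nm for H$\alpha$, $\sim$3.8 $\mu$m for $L'$), not values taken from this work.

```python
# Diffraction-limited resolution scales as lambda/D: for a fixed aperture D,
# the ratio of resolution elements equals the ratio of wavelengths.
lam_halpha = 656e-9   # m, H-alpha line (assumed nominal value)
lam_lprime = 3.8e-6   # m, L'-band central wavelength (assumed nominal value)

ratio = lam_lprime / lam_halpha
print(f"L'/H-alpha resolution-element ratio: {ratio:.1f}")  # ~5.8
```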
The VAMPIRES instrument combined with Subaru/SCExAO will soon be a third facility able to perform H$\alpha$ imaging in SDI mode \citep{Norris2012}.
In terms of detection performance using different filters and reduction techniques, we re-emphasize that the N\_Ha filter is more efficient in detecting H$\alpha$ signals in the contrast limited regime. The smaller filter width reduces the contribution of the continuum flux, which often dominates the signal in the B\_Ha filter, particularly for the central star. Hence, assuming the planetary companion emits only line radiation, the N\_Ha filter reduces the contamination by the stellar signal in the remaining speckles. Moreover, the subtraction of the stellar continuum from H$\alpha$ images reduces the speckles in both the B\_Ha and N\_Ha filters. Thus, ASDI enhances the signal of potential faint companions, in particular at separations $<0\farcs3$ (cf. Figures \ref{fig:TWHya_contrast_curves}, \ref{fig:HD169142_contrast_curves}, and \ref{fig:MWC758_contrast_curves}), where companions 0.7 mag fainter appear accessible in comparison to simple ADI. ASDI should therefore always be applied during the analysis of SPHERE/ZIMPOL H$\alpha$ data.
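To illustrate the continuum-subtraction step that underlies ASDI, the toy sketch below (our simplification; a real reduction also measures the stellar flux ratio from the data, aligns the frames, and runs PCA/ADI on the subtracted cubes) shows how simultaneous frames sharing a common speckle pattern cancel while a line-emitting point source survives. The flux ratio and frame contents are made-up values.

```python
import numpy as np

def sdi_subtract(b_ha, cnt_ha, star_flux_ratio):
    """Continuum subtraction step of ASDI (illustrative sketch only).

    b_ha, cnt_ha    : 2D frames taken simultaneously in the two filters
    star_flux_ratio : assumed stellar flux ratio B_Ha/Cnt_Ha used to scale
                      the continuum frame before subtraction
    """
    return b_ha - star_flux_ratio * cnt_ha

# toy frames: identical speckle pattern plus a line-emitting point source
rng = np.random.default_rng(0)
speckles = rng.normal(0.0, 1.0, (64, 64))
b_ha = 1.3 * speckles          # speckles scale with the stellar flux
cnt_ha = speckles.copy()
b_ha[40, 40] += 5.0            # companion emits only in the line

residual = sdi_subtract(b_ha, cnt_ha, star_flux_ratio=1.3)
# the speckle pattern cancels, the line source survives at (40, 40)
```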
\begin{figure}[t!]
\centering
\includegraphics[width=\hsize]{Det_Lim_MWC758.png}
\caption{Contrast curves for MWC\,758. The gray dashed line shows the outer edge of the dust cavity observed by \cite{Andrews2011}. The blue solid line indicates the separation at which \cite{Reggiani2017} found a candidate companion.}
\label{fig:MWC758_contrast_curves}
\end{figure}
What remains to be quantified is how longer detector integration times (DITs) or the broad band filter could improve the detection limits in the background limited regime (i.e., $>0\farcs3$, where the contrast curves typically flatten out) or for fainter natural guide stars. At these separations narrow band data can be limited by detector read noise, and the B\_Ha filter might be more suitable because of its higher throughput. However, as we show in Figure \ref{fig:Broad_or_Narrow}, at least for our HD142527 dataset this does not seem to be the case. Future studies conducted in both filters and on several objects are required to derive a more comprehensive understanding. Finding the sweet spot between longer integration times and the smearing of the PSF because of field rotation is also warranted. At least for the object considered in Figure \ref{fig:Broad_or_Narrow}, at large separations (usually $>0\farcs3$, in the background limited region) it is even possible to forgo ADI entirely and simply use field-stabilized observations.
\subsection{Constraining planet accretion}
For our mass accretion rate estimates of HD142527 B we assumed that 100\% of the H$\alpha$ flux originates from accretion processes involving circumstellar material. We note, however, that the values may be overestimated if chromospheric activity of the M star \citep{White_Basri2003, Fang2009} also contributes to the measured line flux. Furthermore, as mentioned in Section \ref{sec:accretion_HD142527}, we caution that the N\_Ha filter might be too narrow to fully encompass all H$\alpha$ line emission from fast-moving, accreting material, and therefore the results may be underestimated. Finally, given the presence of dusty material at the projected position of HD142527
B \citep{Avenhaus2017}, H$\alpha$ flux might have been partially absorbed. It is beyond the scope of this paper to properly estimate a value for intrinsic extinction due to disk material and consider this value in the $\dot{M}$ estimation. Nevertheless, in Figure \ref{fig:extinction} we show the fraction of H$\alpha$ flux that is potentially lost because of extinction as a function of $A_V$, converted into $A_{H\alpha}$ as explained in Section \ref{sec:photometry_HD142527}. Only 2\% of the H$\alpha$ signal remains if the disk material causes an extinction of $A_V=5$ mag. This plot quantifies the impact of dust on the measured flux and the detectability of H$\alpha$ emission from embedded objects.
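The few-percent transmission quoted above can be reproduced with a short back-of-the-envelope calculation. The $A_{H\alpha}/A_V$ ratio below ($\sim$0.8) is an assumed round value for a Mathis (1990)-type extinction law, not the exact coefficient used in this work.

```python
# Fraction of H-alpha flux surviving disk extinction: convert A_V to
# A_Halpha with an assumed extinction-law ratio, then to a flux fraction.
A_HALPHA_OVER_AV = 0.81  # assumed value for a Mathis (1990)-type law

def halpha_transmission(A_V):
    """Fraction of H-alpha flux transmitted through extinction A_V (mag)."""
    A_ha = A_HALPHA_OVER_AV * A_V
    return 10 ** (-A_ha / 2.5)

# for A_V = 5 mag only a few percent of the line flux survives,
# consistent with the ~2% figure quoted in the text
print(f"{halpha_transmission(5.0):.1%}")
```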
For the other five objects studied in this work we were not able to detect any clear accretion signature in the disks. Our data therefore cannot confirm the scenario in which protoplanets are forming in those disks, and we instead put upper limits on the accretion luminosity and mass accretion rate. Two caveats apply: (1) the fundamental quantities \emph{directly derived from the data} are $F_\mathrm{H\alpha}$ and $L_\mathrm{H\alpha}$; they should be used for future comparisons with other datasets or objects; (2) the presented upper limits on $\dot{M}$ are only valid for an object with the mass and radius given in Table \ref{tab:limits}, while the $L_\mathrm{acc}$ upper limits refer to objects of any mass. In particular, assuming lower mass objects implies larger $\dot{M}$, as shown in Figure~\ref{fig:mass_accretion_rates}: the mass accretion rate upper limits decrease as a function of the companion mass, for which the corresponding radius was calculated using the evolutionary models reported in Table \ref{tab:limits} and assuming the age listed in Table \ref{tab_stars}. The plot highlights that the assumed mass of the companion may change the final $\dot{M}$ by more than one order of magnitude. Moreover, we overplot in violet the mass accretion rates of the three objects presented in \citet[][see also Section \ref{sec:discussion_objects}]{Zhou2014} as well as LkCa15\,b and PDS70\,b \citep{sallum2015,Wagner2018}, and in gray the range of mass accretion rates for HD142527 B.
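The inverse mass dependence described above follows from the standard accretion-luminosity relation $L_\mathrm{acc} = G M_p \dot{M} / R_p$ (here neglecting the truncation-radius correction factor, a simplification). The sketch below evaluates it for placeholder values, not the paper's adopted parameters.

```python
# Mapping an accretion-luminosity upper limit onto a mass accretion rate:
# L_acc = G * M_p * Mdot / R_p  =>  Mdot = L_acc * R_p / (G * M_p).
G = 6.674e-11          # m^3 kg^-1 s^-2
L_SUN = 3.828e26       # W
M_JUP = 1.898e27       # kg
R_JUP = 6.9911e7       # m
YEAR = 3.156e7         # s

def mdot_upper_limit(L_acc_Lsun, M_p_Mjup, R_p_Rjup):
    """Mdot (in M_Jup/yr) implied by an L_acc upper limit (placeholder inputs)."""
    mdot_kg_s = L_acc_Lsun * L_SUN * R_p_Rjup * R_JUP / (G * M_p_Mjup * M_JUP)
    return mdot_kg_s * YEAR / M_JUP

# at fixed L_acc and radius, halving the assumed planet mass doubles the
# Mdot upper limit -- the trend shown in the figure
print(mdot_upper_limit(1e-5, 2.0, 1.5) / mdot_upper_limit(1e-5, 4.0, 1.5))
```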
\begin{figure}[t!]
\centering
\includegraphics[width=\hsize]{Bkg_contrast_regime.pdf}
\caption{Apparent flux detection limits as a function of the angular separation from HD142527 for both B\_Ha and N\_Ha filters.}
\label{fig:Broad_or_Narrow}
\end{figure}
We stress that, similar to HD142527 B, we always assumed that the flux limit is completely due to H$\alpha$ line emission without any contribution from continuum or chromospheric activity. Furthermore, for our analysis we always neglected intrinsic extinction effects from disk material, which likely weaken the signal. In particular, at locations where no gap in small dust grains has been identified the extinction $A_{H\alpha}$ can be significant (see Figure \ref{fig:extinction}). Models and precise measurements of the dust content in the individual disks would be required to properly include local extinction into our analysis.
Finally, investigating the H$\alpha$ luminosity upper limits at the specific candidate positions as a function of the separation from the central star, we note that the constraints are stronger at larger separations. The only exception is HD100546, for which only higher (less constraining) upper limits were achieved. The combination of the suboptimal weather conditions under which the dataset was taken and the small field rotation of the subsample analyzed in this work degraded those limits. A dataset obtained under more stable conditions and with larger field rotation should provide more constraining limits.
\begin{figure}[b!]
\centering
\includegraphics[width=\hsize]{Extinction_flux.png}
\caption{Fraction of H$\alpha$ flux absorbed as a function of the disk extinction $A_V$ assuming the extinction law of \cite{Mathis1990} as explained in Section \ref{sec:photometry_HD142527}.}
\label{fig:extinction}
\end{figure}
\begin{figure}[b!]
\centering
\includegraphics[width=\hsize]{Macc_upper_limits.pdf}
\caption{Mass accretion rate upper limits as a function of the planetary mass for all the candidate forming planets investigated in this work. The violet stars represent the values reported in \cite{Zhou2014}, while the violet squares indicate PDS70\,b \citep{Wagner2018} and LkCa15\,b \citep{sallum2015}. The gray shaded area represents the mass accretion rate of HD142527 B and is shown for comparison purposes only, as the mass of that object is much larger than the range covered by the x-axis of the plot.}
\label{fig:mass_accretion_rates}
\end{figure}
\subsection{Comparison with other objects}
\label{sec:discussion_objects}
The accretion rate of HD142527 B is in good agreement with the mass accretion rates found by \cite{rigliaco2012} for low-mass T Tauri stars in the $\sigma$ Ori star-forming region ($5\times10^{-11}\,M_\odot\text{ yr}^{-1}<\dot{M}_{CTTS}<10^{-9}\,M_\odot\text{ yr}^{-1}$). A slightly broader range of mass accretion rates was found by \cite{Alcala2014}, with $2\times10^{-12}\,M_\odot\text{ yr}^{-1}<\dot{M}_{CTTS}<4\times10^{-8}\,M_\odot\text{ yr}^{-1}$, in the Lupus star-forming region.
\cite{Zhou2014} reported three very low-mass objects (GSC 06214-00210 b, GQ Lup b, and DH Tau b) that exhibit H$\alpha$ emission from accretion. Those objects have separations of 100--350 AU from their parent stars and $\dot{M}\sim10^{-11}-10^{-9}\,M_\odot\text{ yr}^{-1}$ (see violet stars in Figure \ref{fig:mass_accretion_rates}). The accretion rates measured in that work are of the same order as the limits we found in ours. At projected separations similar to those of the three objects mentioned above, ZIMPOL would have been able to detect H$\alpha$ emitting companions. However, closer to the star in the contrast limited regime, our data would not have revealed accretion processes occurring with $\dot{M}\lesssim10^{-11}\,M_\odot\text{ yr}^{-1}$.
The mass accretion rate of PDS70\,b was estimated by \cite{Wagner2018} without considering any extinction effects, and it is slightly lower than the limits we achieved for our sample (see violet square in Figure~\ref{fig:mass_accretion_rates} and black star in Figure \ref{fig:LkCa15_comparison}). The flux was calculated from the contrast in \cite{Wagner2018} assuming $R_{\text{PDS70\,b}}=11.7$ mag and estimating the MagAO H$\alpha$ filter widths assuming a flat SED\footnote{ \url{https://visao.as.arizona.edu/software_files/visao/html/group__reduction__users__guide.html\#visao_filters}}. In order to properly compare our limits with their H$\alpha$ detection, the same confidence level should be considered. We therefore estimated the contrast limit for a CL corresponding to a $4\sigma$ detection for HD142527 at the separation of PDS70\,b, which was 0.3 mag lower than the limit corresponding to a CL of 99.99995\%. Hence, to bring all the contrast curves in Figure \ref{fig:LkCa15_comparison} to a $4\sigma$ confidence level at $\sim0\farcs19$, a multiplication by a factor of 0.76 is required. We note, however, that this scaling is just an approximation to provide a more direct comparison between the two studies.
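The factor of 0.76 follows directly from converting the quoted 0.3 mag difference between the two confidence levels into a flux ratio:

```python
# A contrast curve that is deeper by delta_mag magnitudes corresponds to a
# flux level scaled by 10^(-delta_mag / 2.5).
delta_mag = 0.3
flux_factor = 10 ** (-delta_mag / 2.5)
print(round(flux_factor, 2))  # 0.76
```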
We also compared the H$\alpha$ line luminosity upper limits obtained from our ZIMPOL H$\alpha$ sample with the value estimated by \cite{sallum2015} for LkCa15\,b ($L_{H\alpha}\sim6\times10^{-5}\,L_\odot$). Our specific limits for the candidates around HD169142, HD100546, and MWC\,758 are slightly lower but, except for HD100546\,b and the compact source in HD169142 found by \cite{osorio2014}, of the same order of magnitude. LkCa15 itself was observed with SPHERE/ZIMPOL during the science verification phase in ESO period P96. We downloaded and analyzed the data, which were, however, of poor quality and limited in terms of integration time and field rotation: only $\sim1$ hr of data is available, with a field rotation of $\sim16^\circ$, a coherence time of $2.6\pm0.8$ ms, and a mean seeing of $1\farcs64\pm0\farcs37$. As we show in Figure \ref{fig:LkCa15_comparison}, with deeper observations including more field rotation, ZIMPOL can potentially detect the signal produced by LkCa15\,b \citep{sallum2015} with a CL of 99.99995\%. However, the higher airmass at the Paranal Observatory and the fact that LkCa15 is a fainter guide star may complicate the redetection of the companion candidate, and therefore exceptional atmospheric conditions are required.
In addition to H$\alpha$, other spectral features such as the Pa$\beta$ and Br$\gamma$ lines may indicate ongoing accretion processes onto young objects. As an example, \cite{Daemgen2017} used the absence of those lines in the spectrum of the low-mass companion HD106906\,b to infer an upper limit on its mass accretion rate ($\dot{M}<4.8\times10^{-10}\,M_J\text{ yr}^{-1}$). Their constraint is stronger than the ones we were able to place with our ZIMPOL H$\alpha$ data. Several other studies also detected hydrogen emission lines such as Pa$\beta$ from low-mass companions \citep[e.g.,][]{Seifahrt2007, Bowler2011, Bonnefoy2014}, but unfortunately they did not derive mass accretion rates.
\subsection{Comparison with existing models}
Two models for planetary accretion are currently used to explain the accreting phase of planet formation: magnetospheric accretion \citep{zhu2015} and boundary layer accretion \citep{Owen_Menou2016}. During magnetospheric accretion, the magnetic field truncates the CPD and hot ionized hydrogen in the closest regions of the disk falls onto the planet following the magnetic field lines. Recombination on the planet surface then produces H$\alpha$ flux.
For protoplanets, these models predict H$\alpha$ luminosities at least three orders of magnitude lower than in CTTS, according to equation 22 in \cite{zhu2015},
$$ L_{H\alpha}=4.7\times10^{-6}\,L_\odot\left(\frac{R_T}{R_J}\right)^2\left(\frac{v_s}{59\,\text{km s}^{-1}}\right). $$
This is mainly owing to an infall velocity $v_s$ that is one order of magnitude smaller and a truncation radius $R_T$ (which enters the $L_{H\alpha}$ equation squared) that is one order of magnitude smaller, due to weaker magnetic fields than in stars.
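For concreteness, the equation above can be evaluated directly; the inputs in the sketch below are illustrative choices, not fitted values from \cite{zhu2015}.

```python
# Evaluating the magnetospheric-accretion H-alpha luminosity formula
# (equation 22 of Zhu 2015, as reproduced above).
def L_halpha_zhu(R_T_in_RJ, v_s_kms):
    """H-alpha luminosity in L_sun for truncation radius R_T (in R_Jup)
    and infall velocity v_s (in km/s)."""
    return 4.7e-6 * R_T_in_RJ**2 * (v_s_kms / 59.0)

# an illustrative planetary case: small truncation radius, reduced infall speed
print(f"{L_halpha_zhu(2.0, 30.0):.2e} L_sun")
```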
We combined the magnetospheric accretion models \citep{zhu2015} with existing detections in the infrared and with evolutionary models. As an example, we present the case of HD100546\,b. According to the models \citep{zhu2015}, the observed $L'$ brightness could be emitted by a CPD with an inner radius of $1-4\,R_J$ and $M_p\dot{M}$ of $0.2-2.9\times10^{-6}\,M_J^2\text{ yr}^{-1}$. The mass accretion constraints obtained from the H$\alpha$ ZIMPOL data would therefore imply $M_p \gtrsim 31\,M_J$. This result is in conflict with that obtained by \cite{quanz2015} and the AMES-Cond evolutionary models, since the $L'$ brightness of the object excludes masses larger than $\sim15\,M_J$, the mass expected if the $L'$ flux is purely photospheric. Moreover, a $30\,M_J$ object would have significantly shaped the disk morphology and would have been clearly visible in other bands, such as the $K_s$ band, where \cite{quanz2015} could only place upper limits on the companion brightness.
\begin{figure}[t!]
\centering
\includegraphics[width=\hsize]{Comparison_limits_LkCa15.pdf}
\caption{Detection limits in apparent flux obtained for a 99.99995\% CL in this work, together with the limits achieved with the available ZIMPOL dataset for LkCa15\,b (red dashed line) and the results presented in \cite{sallum2015} and \cite{Wagner2018}. A deeper dataset is required to redetect LkCa15\,b with ZIMPOL, but this detection is feasible.}
\label{fig:LkCa15_comparison}
\end{figure}\\
\cite{Szulagyi2017} found that only a minimal fraction of the hydrogen in CPDs might be thermally ionized if the planet is massive and hot enough. Consequently, the disk does not get truncated and ionized material is not accreted through magnetospheric accretion along the field lines. Instead, disk material falls directly onto the planet (boundary layer accretion).
The same authors showed that material falling from the circumstellar disk onto the CPD and the protoplanet is shocked, eventually producing H$\alpha$ line emission from both the CPD and the planet, with the CPD contributing more to the H$\alpha$ flux than the planet \citep{Szulagyi2017}. These authors also showed that the majority of the accreted gas, however, remains neutral, especially for planets $<10\,M_J$. Hence, the H$\alpha$ flux only traces the ionized gas accretion rate and not the total amount of accreted material. According to their simulations, a $10\,M_J$ planet would be accreting at a rate of $5.7\times10^{-8}\,M_J\text{ yr}^{-1}$, producing $L_{H\alpha}\sim7\times10^{-6}\,L_\odot$. This value is of the same order as the limits our data allow us to put on the H$\alpha$ luminosity of known forming protoplanet candidates. Since considering lower planetary masses enhances the mass accretion rate (see equation \ref{eq:accretion_rate}) and higher masses should be visible in other infrared bands, we conclude that either extinction from disk material plays a major role in the nondetection of the existing candidates, or they are false positives resulting from image post-processing.
The comparison of the $L_{H\alpha}$ limits from Table \ref{tab:limits} with Figure 7 of \cite{Mordasini2017} indicates that, assuming completely cold accretion, the observed objects may be low-mass ($0.1-1\,M_J$) medium accreters ($\dot{M}\sim10^{-10}-10^{-9}\,M_\odot\text{ yr}^{-1}$) or higher mass objects ($1-15\,M_J$) showing very little accretion ($\dot{M}<10^{-10.5}\,M_\odot\text{ yr}^{-1}$). \cite{Mordasini2017} also suggested another possible reason for some of the nondetections in H$\alpha$. If some of the planets, such as HD100546\,b, have not yet completely detached from the disk, they would be cooler and would not be accreting at high rates. In a later phase, they will possibly be able to open a gap and accrete a large amount of material. \\
Another aspect that we did not consider is the effect of the circumplanetary disk inclination on the emitted flux. \cite{zhu2015} considered the disk inclination by including a factor $1/\cos(i)$, where $i$ is the CPD inclination. Detailed accretion models should investigate the consequences of a tilted circumplanetary disk on $L_{H\alpha}$.
\section{Conclusions}
\label{sec:conclusions}
Imaging in H$\alpha$ is one of the most promising techniques to detect forming planets at very small separations. In this context, the SPHERE/ZIMPOL instrument will play a major role in investigating local accretion signatures in circumstellar disks. An important next step is to redetect the MagAO discoveries of H$\alpha$ emission from LkCa15\,b and PDS\,70\,b and to study potential accretion variability.
None of the protoplanet candidates discovered in the infrared (HD169142\,b, MWC758\,b, and HD100546\,b and c) could be confirmed by our search for accretion signatures, which leaves several possible scenarios. Their mass accretion rates could be lower than our limits, making them currently undetectable. Alternatively, protoplanetary accretion may be variable and some of the objects may currently be going through a period of quiescence, or extinction by disk material may absorb a considerable fraction of the light. The study of NIR line diagnostics might reduce the effects of absorption and allow the detection of accretion processes. Furthermore, it is possible that the observed candidates are disk features that have been enhanced by image post-processing \citep{Follette2017,ligi2017}, or that our understanding of accretion processes during the formation of giant planets is incomplete and, for example, the CTTS scaling relation does not apply. In order to investigate this, precise simulations of protoplanetary accretion, as well as of disk intrinsic effects (via full radiative transfer), have to be developed and combined with multiwavelength observations spanning from the optical to the (sub)millimeter.
The estimation of upper limits is of particular importance for future studies of the accretion variability of protoplanets. Ongoing surveys for accreting planets could detect H$\alpha$ signatures and combine these with the detection limits provided by this work to investigate variability in the accretion processes.
Finally, we emphasize that although a lot of effort was put into the calculation of mass accretion rate upper limits, those values are model and parameter dependent. The H$\alpha$ flux upper limits are, however, the fundamental quantities that were measured from the data and can be directly compared with future observations.
\begin{acknowledgements}
SPHERE is an instrument designed and built by a consortium consisting of IPAG (Grenoble, France), MPIA (Heidelberg, Germany), LAM (Marseille, France), LESIA (Paris, France), Laboratoire Lagrange (Nice, France), INAF - Osservatorio di Padova (Italy), Observatoire de Gen\`eve (Switzerland), ETH Zurich (Switzerland), NOVA (Netherlands), ONERA (France), and ASTRON (Netherlands), in collaboration with ESO. SPHERE also received funding from the European Commission Sixth and Seventh Framework Programmes as part of the Optical Infrared Coordination Network for Astronomy (OPTICON) under grant number RII3-Ct-2004-001566 for FP6 (2004-2008), grant number 226604 for FP7 (2009-2012), and grant number 312430 for FP7 (2013-2016). This work has been carried out within the frame of the National Center for Competence in Research PlanetS supported by the Swiss National Science Foundation. SPQ and HMS acknowledge the financial support of the SNSF. GC and SPQ thank the Swiss National Science Foundation for financial support under grant number 200021\_169131. FMe and GvdP acknowledge funding from the ANR of France under contract number ANR-16-CE31-0013. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. This work has made use of data from the European Space Agency (ESA)
mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by
the {\it Gaia} Data Processing and Analysis Consortium (DPAC,
\url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding
for the DPAC has been provided by national institutions, in particular
the institutions participating in the {\it Gaia} Multilateral Agreement.
The authors thank Arianna Musso-Barcucci for the preliminary analysis on HD142527.
\end{acknowledgements}
\bibliographystyle{aa}
Marcinkiewicz-type multiplier theorem for q-variation (q > 1)

Not long ago we discussed one of the main direct applications of the Littlewood-Paley theory, namely the Marcinkiewicz multiplier theorem. Recall that the single-variable version of this theorem can be formulated as follows:

Theorem 1 [Marcinkiewicz multiplier theorem]: Let $m$ be a function on $\mathbb{R}$ such that

1. $m \in L^\infty$;
2. for every Littlewood-Paley dyadic interval $L := [2^k, 2^{k+1}] \cup [-2^{k+1},-2^k]$ with $k \in \mathbb{Z}$,

$\displaystyle \|m\|_{V(L)} \leq C,$

where $\|m\|_{V(L)}$ denotes the total variation of $m$ over the interval $L$.

Then for any $1 < p < \infty$ the multiplier $T_m$ defined by $\widehat{T_m f} = m \widehat{f}$ for functions $f \in L^2(\mathbb{R})$ extends to an $L^p \to L^p$ bounded operator,

$\displaystyle \|T_m f\|_{L^p} \lesssim_p (\|m\|_{L^\infty} + C) \|f\|_{L^p}.$

You should also recall that the total variation $\|m\|_{V(I)}$ above is defined as

$\displaystyle \sup_{N}\sup_{\substack{t_0, \ldots, t_N \in I : \\ t_0 < \ldots < t_N}} \sum_{j=1}^{N} |m(t_j) - m(t_{j-1})|,$

and if $m$ is absolutely continuous then $m'$ exists as a measurable function and the total variation over the interval $I$ is given equivalently by $\int_{I} |m'(\xi)|\,d\xi$. We have seen that the "dyadic total variation condition" 2. above is to be seen as a generalisation of the pointwise condition $|m'(\xi)|\lesssim |\xi|^{-1}$, which in dimension 1 happens to coincide with the classical differential Hörmander condition (in higher dimensions the pointwise Marcinkiewicz conditions are of product type, while the pointwise Hörmander(-Mikhlin) conditions are of radial type; see the relevant post).
Thus the Marcinkiewicz multiplier theorem in dimension 1 can deal with multipliers whose symbol is somewhat rougher than being differentiable. It is an interesting question to wonder how much rougher the symbols can get while still preserving their $L^p$ mapping properties (or maybe giving up some range – recall though that the range of boundedness for multipliers must be symmetric around 2 because multipliers are self-adjoint).

Coifman, Rubio de Francia and Semmes came up with a very interesting answer to this question. They generalise the Marcinkiewicz multiplier theorem (in dimension 1) to multipliers that have bounded $q$-variation with $q > 1$. Let us define this quantity rigorously.

Definition: Let $q \geq 1$ and let $I$ be an interval. Given a function $f : \mathbb{R} \to \mathbb{R}$, its $q$-variation over the interval $I$ is

$\displaystyle \|f\|_{V_q(I)} := \sup_{N} \sup_{\substack{t_0, \ldots, t_N \in I : \\ t_0 < \ldots < t_N}} \Big(\sum_{j=1}^{N} |f(t_j) - f(t_{j-1})|^q\Big)^{1/q}.$

Notice that, with respect to the notation above, we have $\|m\|_{V(I)} = \|m\|_{V_1(I)}$. From the fact that $\|\cdot\|_{\ell^q} \leq \|\cdot \|_{\ell^p}$ when $p \leq q$ we see that we always have $\|f\|_{V_q (I)} \leq \|f\|_{V_p(I)}$, and therefore the higher the $q$ the less stringent the condition of having bounded $q$-variation becomes (this is linked to the Hölder regularity of the function getting worse). In particular, if we wanted to weaken hypothesis 2. in the Marcinkiewicz multiplier theorem above, we could simply replace it with the condition that for any Littlewood-Paley dyadic interval $L$ we have instead $\|m\|_{V_q(L)} \leq C$. This is indeed what Coifman, Rubio de Francia and Semmes do, and they were able to show the following:

Theorem 2 [Coifman-Rubio de Francia-Semmes, '88]: Let $q \geq 1$ and let $m$ be a function on $\mathbb{R}$ such that

1.
$m \in L^\infty$;
2. for every Littlewood-Paley dyadic interval $L := [2^k, 2^{k+1}] \cup [-2^{k+1},-2^k]$ with $k \in \mathbb{Z}$,

$\displaystyle \|m\|_{V_q(L)} \leq C.$

Then for any $1 < p < \infty$ such that $\Big|\frac{1}{2} - \frac{1}{p}\Big| < \frac{1}{q}$ the multiplier $T_m$ defined by $\widehat{T_m f} = m \widehat{f}$ extends to an $L^p \to L^p$ bounded operator,

$\displaystyle \|T_m f\|_{L^p} \lesssim_p (\|m\|_{L^\infty} + C) \|f\|_{L^p}.$

The statement is essentially the same as before, except that now we are imposing control of the $q$-variation instead, and as a consequence we have the restriction that our Lebesgue exponent $p$ satisfy $\Big|\frac{1}{2} - \frac{1}{p}\Big| < \frac{1}{q}$. Taking a closer look at this condition, we see that when the variation parameter is $1 \leq q \leq 2$ the condition is empty, that is, there is no restriction on the range of boundedness of $T_m$: it is still the full range $1 < p < \infty$; and as $q$ grows larger the range of boundedness shrinks around the exponent $p=2$ (for which the multiplier is always necessarily bounded, by Plancherel). This is a very interesting behaviour, which points to the fact that there is a certain dichotomy between variation in the range below 2 and the range above 2, with 2-variation being the critical case. This is not an isolated case: for example, the Variation Norm Carleson theorem is false for $q$-variation with $q \leq 2$; similarly, the Lépingle inequality is false for 2-variation and below (and this is related to the properties of Brownian motion).

Today, as a natural continuation to the posts on Littlewood-Paley theory and its applications, I am going to present the interesting proof of this very nice theorem.
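Since the sup in the definition of the $q$-variation ranges over all finite subsequences of points, for a function sampled at finitely many points one can simply brute-force it. The short Python sketch below (my own illustration, not from the paper) checks numerically that $\|f\|_{V_q(I)} \leq \|f\|_{V_p(I)}$ for $p \leq q$, and that for monotone data the $2$-variation is attained at the endpoints alone.

```python
# Brute-force evaluation of the q-variation of a function sampled at
# finitely many increasing points: enumerate all subsequences, take the sup.
from itertools import combinations

def q_variation(values, q):
    """Sup over subsequences t_0 < ... < t_N of (sum |f(t_j)-f(t_{j-1})|^q)^(1/q)."""
    best = 0.0
    n = len(values)
    for k in range(2, n + 1):
        for idx in combinations(range(n), k):
            s = sum(abs(values[idx[j]] - values[idx[j - 1]]) ** q
                    for j in range(1, k))
            best = max(best, s ** (1.0 / q))
    return best

samples = [0.0, 1.0, 0.5, 2.0, 1.5]    # f sampled at increasing points of I
assert q_variation(samples, 2) <= q_variation(samples, 1)  # V_q <= V_p, p <= q

# for monotone data, merging increments only increases sum |delta|^q (q > 1),
# so the sup is attained by the endpoints alone
assert q_variation([0.0, 0.3, 1.0, 2.0], 2) == 2.0
```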
I found it impossible to get a hold of the original paper without taking a trip to the library, so being lazy I am going to follow instead Lacey's nice presentation (which I suppose is very close to the original one anyway). The reasons why I believe the proof to be interesting are several: for starters, the proof is surprisingly simple; moreover, it relies on a generalisation of the Littlewood-Paley theorem due to Rubio de Francia which is worth seeing once you have studied the basic theory of square functions; finally, one very important ingredient in the proof is a partition of $\mathbb{R}$ dependent on the symbol $m$, very much like the beautiful idea used to prove the maximal Hausdorff-Young inequality of Christ-Kiselev.

To summarise the proof before we start: we will build sub-partitions of the partition of $\mathbb{R}$ given by the Littlewood-Paley dyadic intervals, in such a manner that on each sub-partition we have uniform control of the $L^\infty$ norm of (part of) the multiplier symbol $m$; then we will use square functions adapted to these partitions to control the resulting multipliers, somewhat analogously to what is done in the Marcinkiewicz case (but conceptually even simpler). Finally we will combine everything together to conclude. In the next section we will present the necessary inequalities of Rubio de Francia (without proof) before presenting the proof.

1.
Rubio de Francia square functions\n\nRecall that the Littlewood-Paley theorem says the following: if $\\Delta_I$ denotes the frequency projection given by $\\widehat{\\Delta_I f} = \\mathbf{1}_I \\widehat{f}$ and $L = [2^k, 2^{k+1}] \\cup [-2^{k+1},-2^k]$ are the Littlewood-Paley dyadic intervals, the collection of which we denote by $\\mathbb{L}$, then we have for any ${1 < p < \\infty}$\n\n$\\displaystyle \\|f\\|_{L^p (\\mathbb{R})} \\sim_p \\Big\\|\\Big(\\sum_{L \\in \\mathbb{L}} |\\Delta_{L} f|^2\\Big)^{1\/2}\\Big\\|_{L^p (\\mathbb{R})}.$\n\nThe first person to go beyond this statement was Lennart Carleson, who investigated the square function\n\n$\\displaystyle f \\mapsto \\Big(\\sum_{n \\in \\mathbb{Z}} |\\Delta_{[n,n+1]} f|^2\\Big)^{1\/2}.$\n\nAs you can see, the intervals on which we are taking the frequency projections are no longer dyadic \u2013 rather, they all have the same (unit) length. If you recall, the heuristic motivation behind the Littlewood-Paley theory is that, since $f = \\sum_{L \\in \\mathbb{L}} \\Delta_{L} f$ and the different frequency pieces are dyadically separated, we should expect a random cancellation between the different terms of the frequency decomposition \u2013 which means that most of the time the sum should have magnitude approximately $\\big(\\sum_{L \\in \\mathbb{L}} |\\Delta_{L} f|^2\\big)^{1\/2}$. This heuristic no longer works in the case where the frequency intervals are the $[n,n+1]$\u2018s, because there are now many many terms in the frequency decomposition that have comparable frequencies and will therefore be \u201caligned\u201d for longer periods of time, or \u201ccorrelated\u201d if you prefer \u2013 there will not be much cancellation between them. This leads us to suspect that the analogue of the Littlewood-Paley theorem for Carleson\u2019s square function should fail, at least partially. This is indeed the case. 
In fact, Carleson showed that the inequality $\\Big\\|\\Big(\\sum_{n \\in \\mathbb{Z}} |\\Delta_{[n,n+1]} f|^2\\Big)^{1\/2}\\Big\\|_{L^p (\\mathbb{R})} \\lesssim_p \\|f\\|_{L^p(\\mathbb{R})}$ is FALSE when $p$ < 2. This is actually very simple to check: just take ${f}$ such that $\\widehat{f} = \\mathbf{1}_{[0,N]}$ for ${N}$ a large integer (${f}$ is a continuous version of the Dirichlet kernel). It is easy to estimate that $\\|f\\|_{L^p} = \\|\\check{\\mathbf{1}}_{[0,N]}\\|_{L^p} \\sim N^{1\/{p'}}$ (the contribution from the peak around the origin dominates). As for the square function, we simply have\n\n$\\displaystyle \\Big(\\sum_{n \\in \\mathbb{Z}} |\\Delta_{[n,n+1]} \\check{\\mathbf{1}}_{[0,N]}|^2\\Big)^{1\/2} = |\\check{\\mathbf{1}}_{[0,1]}| N^{1\/2},$\n\nand therefore the inequality can only be true if $N^{1\/2} \\lesssim N^{1\/{p'}}$ for any large ${N}$, which in turn is only possible if $p \\geq 2$. Carleson then went on to prove that in this range the inequality is actually true. His paper, titled \u201cOn the Littlewood-Paley theorem\u201d and published in 1967 on the Report of the Mittag-Leffler Institute seems to have vanished \u2013 not even MathSciNet has it in their records. Indeed, I think the result went largely unnoticed, since more than a decade later (1981) C\u00f3rdoba re-proved the theorem in his work on Bochner-Riesz multipliers.\n\nNevertheless, this work was later subsumed by work of Rubio de Francia, who generalised it to arbitrary collections of disjoint intervals. Indeed, he proved the following:\n\nTheorem 3 [Rubio de Francia, \u201985]: Let $\\mathcal{I}$ be a collection of disjoint intervals and let $S_{\\mathcal{I}}$ denote the associated square function\n\n$\\displaystyle S_{\\mathcal{I}}f := \\Big(\\sum_{I \\in \\mathcal{I}} |\\Delta_I f|^2 \\Big)^{1\/2}.$\n\nFor any ${2 \\leq p < \\infty}$ we have\n\n$\\displaystyle \\|S_{\\mathcal{I}}f \\|_{L^p} \\lesssim_p \\|f\\|_{L^p}. 
\\ \\ \\ \\ \\ (1)$\n\nImportantly, the constant is independent of the collection $\\mathcal{I}$.\n\nSome remarks are in order:\n\n\u2022 The sharp reader will have already noticed that, since the intervals are disjoint, the $p = 2$ case of Theorem 3 is actually a trivial consequence of Plancherel.\n\u2022 As seen in the proof of the Littlewood-Paley theorem, if the collection $\\mathcal{I}$ partitions $\\mathbb{R}$ then we have by duality and Cauchy-Schwarz that inequality (1) for exponent ${p \\geq 2}$ has the consequence that\n\n$\\displaystyle \\|f\\|_{L^{p'}} \\lesssim \\|S_{\\mathcal{I}}f \\|_{L^{p'}},$\n\nwhere now ${1 < p' \\leq 2}$. Thus we can see Rubio de Francia's result as saying that the heuristic above still applies to arbitrary partitions of $\\mathbb{R}$, but only in the range $p \\in (1,2]$ \u2013 the heuristic being that most of the time \u201c${f} \\lesssim S_{\\mathcal{I}}f$\u201d.\n\n\u2022 Of course, depending on the collection of intervals, the condition $p\\geq 2$ might not be sharp, as is the case when we take the Littlewood-Paley dyadic intervals and can thus go below 2. It is not currently known (despite the result being nearly 40 years old) when the condition is sharp, that is, there is no characterisation of the collections for which the theorem fails in the range $p$ < 2. It is conjectured that the condition is sharp essentially for all collections that are not \"lacunary\" in some sense, though it is even hard to understand what the correct notion of lacunarity to state a conjecture should be.\n\nThere are a number of proofs of Rubio de Francia\u2019s theorem, though we will not see a single one in here. 
Rubio de Francia\u2019s original proof worked roughly as follows: first of all, using classical Littlewood-Paley theory and Whitney decompositions of the intervals, one reduces to the case where the intervals are all well-separated, meaning that if $I,J \\in \\mathcal{I}$ are disjoint then $5I,5J$ are also disjoint; then, the theorem is reduced to proving the same statement for a square function $G$ with the same frequency information but smoother frequency projections (much like in the proof of the Littlewood-Paley theorem); finally, the boundedness of $G$ is deduced by interpolation between the trivial $p=2$ case and the endpoint inequality $\\|Gf\\|_{\\mathrm{BMO}}\\lesssim \\|f\\|_{L^\\infty}$ \u2013 which is not hard to prove as a consequence of some simple vector-valued kernel estimates.\nBourgain reproved the theorem by proving the endpoint of the dual inequality, thus extending the result somewhat; that is, he proved that when $\\mathcal{I}$ is a partition of the real line then\n\n$\\displaystyle \\|f\\|_{H^1} \\lesssim \\|S_{\\mathcal{I}}f\\|_{L^1},$\n\nwhere $\\|\\cdot\\|_{H^1}$ denotes the quasi-norm of the (real) Hardy space $H^1(\\mathbb{R})$. Bourgain\u2019s paper \u201cOn square functions on the trigonometric system\u201d is very hard to get, having been published in a journal that does not exist in that form anymore. The proof he gives is beautiful (not a surprise) and I will probably talk about it in the future.\nAnother vividly distinct proof of Theorem 3 is given by Lacey in the aforementioned notes, in which the theorem is reformulated in time-frequency language (using wavepackets) and a time-frequency proof is given, with plenty of details. 
These notes include a discussion of many issues related to the Rubio de Francia inequalities and are a must-read for whoever is interested in such inequalities.\nFinally, there is also a not-yet-published paper of Benea and Muscalu in which they re-prove Rubio de Francia\u2019s theorem in yet another time-frequency way (distinct from Lacey\u2019s).\nOther inputs, variations and extensions to higher dimensions have been given by Soria, Sj\u00f6lin, Journ\u00e9, Sato, Zhu and maybe others I am forgetting at the moment; I will not discuss these here.\n\n2. Proof of Theorem 2\nWith the Rubio de Francia inequalities at hand, we are ready to prove Theorem 2.\n\nProof: The idea is to reduce to the simpler case of multipliers which are simple functions on each Littlewood-Paley dyadic interval. In particular, assume that ${m}$ is of the form $\\sum_{L \\in \\mathbb{L}} \\sum_{I \\in \\mathcal{I}_L} m_I \\mathbf{1}_I$, where each collection $\\mathcal{I}_L$ is a partition of the Littlewood-Paley interval ${L}$ and $m_I$ is a complex coefficient. Assume furthermore that these subpartitions of Littlewood-Paley intervals are bounded in cardinality, that is assume that for every $L \\in \\mathbb{L}$ we have $\\# \\mathcal{I}_L \\leq N$. 
Then we can argue as follows, following the footsteps of the proof of the Marcinkiewicz multiplier theorem (Theorem 1 above): if we let ${S}$ denote the Littlewood-Paley square function, we have by Littlewood-Paley theorem that\n\n$\\displaystyle \\|T_m f\\|_{L^p} \\sim_p \\|ST_m f\\|_{L^p},$\n\nwhere\n\n$\\displaystyle ST_m f = \\Big( \\sum_{L \\in \\mathbb{L}} | \\Delta_L T_m f|^2 \\Big)^{1\/2}.$\n\nDue to the simple form the symbol ${m}$ has, we see that\n\n$\\displaystyle |\\Delta_L T_m f| = \\Big|\\sum_{I \\in \\mathcal{I}_L} m_I \\Delta_I f \\Big| \\leq \\|m\\|_{L^\\infty} \\sum_{I \\in \\mathcal{I}_L} |\\Delta_I f|;$\n\nhaving an $\\ell^1$ sum is inconvenient in this context, and therefore we apply Cauchy-Schwarz to the latter to get a nice square function instead,\n\n\\displaystyle \\begin{aligned} |\\Delta_L T_m f| \\leq & \\|m\\|_{L^\\infty} (\\# \\mathcal{I}_L)^{1\/2} \\Big(\\sum_{I \\in \\mathcal{I}_L} |\\Delta_I f|^2 \\Big)^{1\/2} \\\\ \\leq & \\|m\\|_{L^\\infty} N^{1\/2} S_{\\mathcal{I}_L}f. \\end{aligned}\n\nPerforming the $\\ell^2$-summation in $L$ we see that we have shown\n\n$\\displaystyle |S T_m f| \\leq \\|m\\|_{L^\\infty} N^{1\/2} S_{\\mathcal{I}}f,$\n\nwhere $\\mathcal{I}$ is the collection of all intervals, that is $\\mathcal{I} = \\bigcup_{L} \\mathcal{I}_L$. Now the object on the RHS is a Rubio de Francia square function, which we know is bounded at least when $p \\geq 2$. Assume therefore that this is the case (which is not a limitation, because multipliers have range of boundedness symmetric about exponent 2), and as a consequence of (1) we have therefore that\n\n$\\displaystyle \\|T_m f\\|_{L^p} \\lesssim \\|m\\|_{L^\\infty} N^{1\/2} \\|f\\|_{L^p}$\n\nfor all $p\\geq 2$. This bound is not so great because there is a large loss in the parameter ${N}$, but at this point we should observe something: when $p=2$ this factor is not there! 
Indeed, in that case we have simply $\\|T_m f\\|_{L^2} \\leq \\|m\\|_{L^\\infty} \\|f\\|_{L^2}$ by Plancherel; but this means that for any $p\\geq 2$ we can (complex) interpolate between all these estimates and lower the exponent $1\/2$ somewhat. Indeed, for a fixed exponent $p\\geq 2$, we can write ${p}$ as an interpolation exponent between 2 and any extremely large exponent ${r}$; in practice, the result will be the same as if we had assumed $r=\\infty$ (once we take a limit), although obviously we are not allowed to use precisely this exponent. The result is the following: if $\\theta \\in (0,1)$ is such that\n\n$\\displaystyle \\frac{1}{p} = \\frac{1 - \\theta}{2} + \\frac{\\theta}{\\infty} = \\frac{1 - \\theta}{2}$\n\nthen we have by interpolation that $\\|T_m f\\|_{L^p} \\lesssim (N^{1\/2})^{\\theta} \\|f\\|_{L^p}$, and a simple computation shows that $(N^{1\/2})^{\\theta} = N^{1\/2 - 1\/p}$ (recall that $p\\geq 2$ in this argument), that is we have shown\n\n$\\displaystyle \\|T_m f\\|_{L^p} \\lesssim \\|m\\|_{L^\\infty} N^{|1\/2 - 1\/p|} \\|f\\|_{L^p} \\ \\ \\ \\ \\ (2)$\n\nfor all $p \\in (1,\\infty)$. We have improved the constant a little! This small improvement will go a long way though.\n\nNow that we have some partial result, how can we exploit it? Can we reduce the multiplier symbol in Theorem 2 to a symbol of the type just considered above? It turns out that it is not at all hard to reduce the symbol ${m}$ to a sum of symbols of the above form, in such a way that (2) will give a summable contribution. The argument is an ingenious decomposition of ${m}$ in martingale differences where the martingale is dictated from ${m}$ itself (by its ${q}$-variation, precisely). Let us see how.\nWe assume for simplicity that the constant $C$ is 1, that is we normalise the symbol so that for any Littlewood-Paley interval ${L}$ we have $\\|m\\|_{V_q(L)} \\leq 1$. 
Fix such an $L \\in \\mathbb{L}$ and let $j \\in \\mathbb{N}$; we want to partition ${L}$ into intervals that carry uniform \u201c${q}$-variation-mass\u201d, so to speak. This is easy to achieve in the following way: let $\\mu_L \\, : \\, L \\to [0,1]$ denote the function\n\n$\\displaystyle \\mu_L(\\xi) := (\\| m \\|_{V_q (L \\cap (-\\infty,\\xi])})^q;$\n\nthat is, $\\mu_L(\\xi)$ is the ${q}$-variation of ${m}$ in the interval from the left endpoint of ${L}$ to the point $\\xi \\in L$, raised to the power ${q}$ (so that we have additivity). Function $\\mu_L$ is clearly a monotone increasing function, and therefore has a well-defined inverse function. Split therefore the interval $[0,1]$ into $2^j$ equal disjoint intervals, that is,\n\n$\\displaystyle [0,1] = J_1 \\sqcup \\ldots \\sqcup J_{2^j}$\n\nwith $J_k := [2^{-j}(k-1), 2^{-j}k)$; we define then for any $k = 1, \\ldots, 2^j$\n\n$\\displaystyle I_{k, L} := \\mu_L^{-1}(J_k),$\n\nwhich is an interval, by the monotonicity of $\\mu_L$. With this definition, we have $\\|m\\|_{V_q(I_{k,L})} \\leq 2^{-j\/q}$ by construction.\nDefine the collection $\\mathcal{J}_j$ to be the collection of all the intervals $I_{k,L}$ resulting from this procedure:\n\n$\\displaystyle \\mathcal{J}_j := \\{ I_{k,L} \\; : \\; L \\in \\mathbb{L}, k = 1,\\ldots, 2^j\\};$\n\nwe have that:\n\n1. each $\\mathcal{J}_j$ collection is a partition of $\\mathbb{R}$;\n2. each $\\mathcal{J}_j$ collection is a refinement1 of the Littlewood-Paley partition and each $L \\in \\mathbb{L}$ is partitioned by $\\mathcal{J}_j$ into at most $2^j$ intervals;\n3. the collections $\\mathcal{J}_j$ are refinements of each other: if $j' > j$ then $\\mathcal{J}_{j'}$ is a refinement of $\\mathcal{J}_j$. Notice that each interval of $\\mathcal{J}_{j}$ is split into (at most) 2 subintervals by $\\mathcal{J}_{j+1}$.\n\nObserve that property 2.) 
above points in the direction of (2), in that we have uniform control on the cardinality of the (sub-)partitions, but we still have to decompose the symbol. Using properties 1.) and 3.), we do a martingale decomposition of ${m}$ adapted to these collections: let $\\mathcal{F}_j = \\sigma(\\mathcal{J}_j)$ (the sigma-algebra generated by the collection $\\mathcal{J}_j$) and notice that $(\\mathcal{F}_j)_{j\\in\\mathbb{Z}}$ is an increasing sequence of sigma-algebras; therefore, if we introduce the martingale differences\n\n$\\displaystyle \\mathbf{D}_j f := \\mathbf{E}[f | \\mathcal{F}_{j+1}] - \\mathbf{E}[f | \\mathcal{F}_{j}],$\n\nwe can decompose every function ${f}$ into\n\n$\\displaystyle f = \\mathbf{E}[f | \\mathbb{L}] + \\sum_{j \\in \\mathbb{N}} \\mathbf{D}_j f.$\n\nNow it is worth observing what happens when we apply the martingale decomposition to ${m}$ itself. Consider a fixed $j$ and a fixed $\\xi_0$ and observe that\n\n$\\displaystyle \\mathbf{D}_j m(\\xi_0) = \\frac{1}{|I|} \\int_{I} m \\,d\\xi - \\frac{1}{|\\widehat{I}|} \\int_{\\widehat{I}} m \\,d\\xi,$\n\nwhere $I$ is the unique interval in $\\mathcal{J}_{j+1}$ such that $\\xi_0 \\in I$ and $\\widehat{I}$ is the unique interval in $\\mathcal{J}_{j}$ that contains $I$. Observe that there is a unique affine map $\\varphi$ that maps $I$ to $\\widehat{I}$ while preserving the ordering; using this map as a change of variables we can thus write\n\n$\\displaystyle \\mathbf{D}_j m(\\xi_0) = \\frac{1}{|I|} \\int_{I} m(\\xi) - m(\\varphi(\\xi)) \\,d\\xi.$\n\nNotice though that for any $\\xi \\in I$ we have that trivially\n$|m(\\xi) - m(\\varphi(\\xi))| \\leq \\|m\\|_{V_q(\\widehat{I})}$; but by construction this quantity is controlled by $2^{-j\/q}$! Therefore, since the above expression is an average, we have the pointwise bound\n\n$\\displaystyle \\| \\mathbf{D}_j m\\|_{L^\\infty} \\leq 2^{-j\/q}.$\n\nCombining this bound with property 2.) 
of the collections $\\mathcal{J}_j$ we see that for the symbols $m_j := \\mathbf{D}_j m$ we have by inequality (2) that\n\n$\\displaystyle \\| T_{m_j} f\\|_{L^p} \\lesssim \\|m_j\\|_{L^\\infty} (\\sup_{L \\in \\mathbb{L}}\\#\\{I \\subset L \\, : \\, I \\in \\mathcal{J}_j\\})^{|1\/2 - 1\/p|} \\|f\\|_{L^p} \\leq 2^{-j\/q} 2^{j|1\/2 - 1\/p|} \\|f\\|_{L^p}.$\n\nIf $|1\/2 - 1\/p| < 1\/q$ the overall exponent of $2^j$ above is negative and therefore the quantity is summable; by triangle inequality we have therefore that\n\n$\\displaystyle \\| T_{m} f\\|_{L^p} \\lesssim (\\|m\\|_{L^\\infty} + C)\\|f\\|_{L^p}$\n\nprovided the condition $|1\/2 - 1\/p| < 1\/q$ is satisfied (the $\\|m\\|_{L^\\infty}$ term comes from bounding $\\mathbf{E}[f | \\mathbb{L}]$ using the standard Littlewood-Paley square function). This concludes the nice proof of Theorem 2. $\\Box$\n\nWe close this post with a remark and a question:\n\nRemark: If we had not improved the exponent of $N$ in inequality (2), we would have only concluded that $\\| T_{m_j} f\\|_{L^p} \\lesssim 2^{-j\/q} 2^{j\/2} \\|f\\|_{L^p}$. This is still summable in ${j}$ when $q \\in [1,2)$, so we would still have concluded something, but we would have missed the $q\\geq 2$ part of Theorem 2.\n\nA question: Is the range $\\Big| \\frac{1}{2} - \\frac{1}{p}\\Big| < \\frac{1}{q}$ sharp? That is, given $q > 2$, can we find a symbol ${m}$ with ${\\sup_{L \\in \\mathbb{L}} \\|m\\|_{V_q(L)} }$ finite but such that the multiplier $T_m$ is unbounded for ${p}$ such that ${ \\Big| \\frac{1}{2} - \\frac{1}{p}\\Big| \\geq \\frac{1}{q} }$ ?\n\n1: In the interest of clarity, by refinement we mean the following: for every $I \\in \\mathcal{J}_j$ there exists an $L \\in \\mathbb{L}$ such that $I \\subset L$, and for each $L \\in \\mathbb{L}$ there exist $I_1, \\ldots, I_n \\in \\mathcal{J}_j$ such that $L = \\bigcup_{\\ell} I_\\ell$. 
[go back]","date":"2021-06-20 06:36:23","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 251, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9508515000343323, \"perplexity\": 218.32746929593978}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-25\/segments\/1623487658814.62\/warc\/CC-MAIN-20210620054240-20210620084240-00588.warc.gz\"}"} | null | null |
Q: MongoDB Chart $multiply I have data like this.
"products": [{
"_id": {
"$oid": "5ffd0a8f6273740017cc5fca"
},
"name": "Banana",
"price": 65,
"createdAt": {
"$date": "2021-01-12T02:33:51.648Z"
},
"updatedAt": {
"$date": "2021-01-12T02:33:51.648Z"
},
"quantity": 3
}, {
"_id": {
"$oid": "5ffd09326273740017cc5fb3"
},
"name": "Apple",
"price": 79,
"createdAt": {
"$date": "2021-01-12T02:28:02.412Z"
},
"updatedAt": {
"$date": "2021-01-12T02:28:02.412Z"
},
"quantity": 2
}]
What I'm trying to do is multiply the price by the quantity.
Implementation:
{ $reduce: {
    input: '$products',
    initialValue: 0,
    in: { $multiply: ["$products.price", "$products.quantity"] }
} }
I'm getting the error: $multiply only supports numeric types, not array
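For reference, the error occurs because inside `$reduce` the paths `"$products.price"` and `"$products.quantity"` resolve to whole arrays; in MongoDB's `$reduce` the current element is referenced as `$$this` and the accumulator as `$$value`, so the `in` expression would need to be something like `{ $add: ["$$value", { $multiply: ["$$this.price", "$$this.quantity"] }] }`. The intended computation, a running total of price times quantity, can be sketched in plain Python:

```python
# Plain-Python equivalent of the intended $reduce: start from the initial
# value 0 and, for each array element, add price * quantity to the total.
products = [
    {"name": "Banana", "price": 65, "quantity": 3},
    {"name": "Apple", "price": 79, "quantity": 2},
]

total = 0
for item in products:                          # what $reduce iterates over
    total += item["price"] * item["quantity"]  # the $add / $multiply step

print(total)  # 65*3 + 79*2 = 353
```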
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 2,088 |
University of British Columbia and Simon Fraser University confer honorary degrees on Mawlana Hazar Imam
In an historic joint ceremony in Vancouver, the University of British Columbia (UBC) and Simon Fraser University (SFU) each conferred Mawlana Hazar Imam with an honorary Doctor of Laws degree in recognition of his lifelong service to humanity.
Video: Mawlana Hazar Imam honoured by University of British Columbia and Simon Fraser University
On 19 October, Mawlana Hazar Imam was conferred with honorary Doctor of Laws degrees from the University of British Columbia and Simon Fraser University at a joint ceremony, the first of its kind.
Mawlana Hazar Imam arrives in Vancouver to accept two honorary degrees at historic ceremony
Mawlana Hazar Imam arrived in Vancouver today, accompanied by Prince Aly Muhammad, for the final leg of his visit to Canada. President of the Ismaili Council for British Columbia Samir Manji and Zulie Sachedina, Chair, International Conciliation and Arbitration Board (ICAB) greeted Hazar Imam and Prince Aly upon their arrival.
Mawlana Hazar Imam meets with President of Ireland
His Excellency President Michael Higgins of Ireland and his wife Sabina Mary Coyne, today received Mawlana Hazar Imam at their official residence Áras an Uachtaráin. The occasion honoured Hazar Imam's Diamond Jubilee, which concluded last month.
Quest For Balance: Simple Ways to Engage With Spirituality Everyday
Spirituality doesn't have to be a separate quest - it can be part of our daily lives. Hussain Rajwani shares three easy ways to think about spirituality differently, and try to incorporate it into our everyday actions.
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 6,309 |
{"url":"https:\/\/www.mersenneforum.org\/showthread.php?s=6611e5a07a0787c83e681835aa86f410&t=4353","text":"mersenneforum.org Combined Sieving?\n\n 2005-07-14, 00:32 #1 jaat \u00a0 \u00a0 Jul 2005 2\u00b732 Posts Combined Sieving? Hi, I want to know whether I can do combined sieving for a bunch of k together with newpgen or some other tool? The issue is not to save time but save a lot of bother doing it for each k independently. jaat\n 2005-07-16, 03:04 #2 geoff \u00a0 \u00a0 Mar 2003 New Zealand 48516 Posts I don't know of any sieving program that will do this yet. But with pfgw you can use an ABC2 file to do trial factoring and prp testing for a number of k values. As an example, put this in myfile.txt: Code: ABC2 $a*5^$b-1 \/\/{number_primes,$a,1} a: in { 1002 2004 } b: from 100 to 1000 and starting pfgw as 'pfgw -f myfile.txt' will trial factor and\/or prp test 1002*5^100-1, 2004*5^100-1, 1002*5^101-1, ... etc. When one prp is found for a value of$a then testing will stop for that value and continue just for the remaining values. This is convenient, but will be much slower than sieving with newpgen unless the range for each k is quite small.\n 2005-10-12, 10:34 #3 axn \u00a0 \u00a0 Jun 2003 4,919 Posts Maybe its time we gave serious though to combined sieving. Can someone get in touch with Mikael Klasson to get a modified version of proth_sieve that can handle base 5? No idea how tough it'll be to do this, but i guess its worth a shot.\n2005-10-26, 17:55 \u00a0 #4\njaat\n\nJul 2005\n\n100102 Posts\n\nQuote:\n Originally Posted by axn1 Maybe its time we gave serious though to combined sieving.\n\nIf there is any hope for this project to gather some momentum, this is a must.\n\njaat\n\n 2005-11-30, 19:05 #5 Greenbank \u00a0 \u00a0 Jul 2005 2\u00b7193 Posts Even if you can get this done it's going to be much slower. You lose all of the Quadratic Residue filtering. 
All of the small-steps will have to be calculated with powmod instead of simple bit shifts. The order(2) filtering will be gone (although you might be able to adapt this to base 5. Better off finding a program that implements base 5 sieving now as there would have been some attempts at optimising this sieving. proth_sieve only implements base 2, and therefore it is only optimised for base 2.\n 2006-04-18, 02:26 #6 geoff \u00a0 \u00a0 Mar 2003 New Zealand 13\u00b789 Posts Riesel5 candidate k=151026 is the only one satisfying k = 0 (mod 3). This makes it the only remaining candidate that could have primes for both odd and even exponents. The possibilities for any other candidates are: If k = 1 (mod 3) then k*5^even-1 and k*5^odd+1 are composite If k = 2 (mod 3) then k*5^odd-1 and k*5^even+1 are composite [3 | k*5^n+\/-1 whenever 3 | k*5^(n-2)+\/-1 because 5^2 = 1 (mod 3)]. This fact could be exploited by a siever, for example to halve the memory required for a bitmap of candidate n (assuming k=151026 is not in the sieve). NewPGen does not seem to take advantage of this.\n\n
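The mod-3 pattern geoff describes follows from $5^2 \equiv 1 \pmod 3$; it can be sanity-checked numerically with a few lines of Python (a sketch, not part of the original thread):

```python
# Since 5 ≡ 2 (mod 3) and 5^2 ≡ 1 (mod 3), whether 3 divides k*5^n ± 1
# depends only on k mod 3 and on the parity of n.
def div3(k, n, sign):
    """True if 3 divides k*5^n + sign."""
    return (k * 5**n + sign) % 3 == 0

# k ≡ 1 (mod 3), e.g. k=7: k*5^even - 1 and k*5^odd + 1 are divisible by 3.
assert all(div3(7, n, -1) for n in range(0, 40, 2))
assert all(div3(7, n, +1) for n in range(1, 40, 2))

# k ≡ 2 (mod 3), e.g. k=8: k*5^odd - 1 and k*5^even + 1 are divisible by 3.
assert all(div3(8, n, -1) for n in range(1, 40, 2))
assert all(div3(8, n, +1) for n in range(0, 40, 2))

print("mod-3 pattern verified")
```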
","date":"2021-04-20 14:09:36","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.26572343707084656, \"perplexity\": 3723.1326898404773}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": false}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-17\/segments\/1618039398307.76\/warc\/CC-MAIN-20210420122023-20210420152023-00288.warc.gz\"}"}
Q: Why isn't pdb for ntdll.dll cached I'm developing C++ under Visual Studio Community 2013. It is a C++/Win32 project (not UWP) and it is run as the Debug/x64 configuration.
My config is:
Tools -> Options -> Debugging -> Symbols -> All module, unless excluded
Tools -> Options -> Debugging -> Symbols -> Cache symbol in this directory
Cache symbol directory contains bunch of *.pdb files and it was never touched.
Problem:
When I run the debugger, I occasionally get a message about contacting the MS symbol server to download ntdll.dll and some other files, which I guess are system related because I do not recognize them as part of my project.
Why is ntdll.dll being downloaded? This article says:
Some common reasons symbols aren't loaded include:
*
*Symbol paths don't point to the correct location
*The symbol file is from a different version of the module than the one loaded in the process - Visual Studio requires that the symbol file come from the exact same build as the module. It cannot load symbols that come from a different build even if the source code was identical
I haven't rebuilt ntdll.dll; how could I? It is not possible because it is part of the Windows operating system. So why are debug symbols for ntdll.dll downloaded again and again instead of being cached?
Edit:
When I run debug (F5) then under Debug -> Windows -> Output I see following line:
'MyProject.exe' (Win32): Loaded 'C:\Windows\System32\ntdll.dll'. Loading disabled by Include/Exclude setting.
Output of Debug -> Windows -> Modules:
ntdll.dll C:\Windows\System32\ntdll.dll N/A No Loading disabled by Include/Exclude setting. 2 10.0.14393.447 (rs1_release_inmarket.161102-0100) 11/2/2016 11:13 AM 00007FFFDF410000-00007FFFDF5E1000 [12772] MyProject.exe
As I said, it happens only occasionally, so I do not know exactly how to reproduce it. Lately it seems to be working: ntdll.dll is no longer downloaded from the MS servers. I cannot change x64 to x86 because our project does not run on x86.
My C:\Users\wakatana\AppData\Local\Temp\SymbolCache contains the following folders/files (I guess one might be for x86 and the other for x64, but I'm not sure):
ntdll.pdb\41C94DD545BD4FCBA2E8F404185B97DC1\ntdll.pdb
ntdll.pdb\77A5329C3B1E425FAA9519DA285D8DA71\ntdll.pdb
Is it possible to explicitly tell VS to use the already cached version of ntdll.dll in C:\Users\wakatana\AppData\Local\Temp\SymbolCache for all new projects, or what is the default VS behavior?
My OS is Windows10 Home with latest updates
A: It would be related to this specific assembly.
If I use the X86 Target, I will get the wntdll.pdb file.
If I use the X64 target, I will get the ntdll.pdb file, but like yours, they are all in the temp\SymbolCache folder. I also get "source information stripped" messages with the X64 target. There really is no pdb file in the default symbol folder under TOOLS->Options->Debugging->Symbols.
Since I got the same issue as yours, I submitted a connect report to the product team:
https://connect.microsoft.com/VisualStudio/feedbackdetail/view/3113479/why-isnt-pdb-for-ntdll-dll-cached
You could also add your comment and vote it.
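As a side note, symbol caching can also be driven by the standard `_NT_SYMBOL_PATH` environment variable, which the Windows debuggers and Visual Studio honor. A hedged sketch using the documented `srv*<cache>*<server>` syntax (the cache directory below is the one from the question; adjust as needed):

```bat
rem Cache downloaded symbols locally, falling back to the Microsoft symbol server.
rem Syntax: srv*<local cache directory>*<upstream symbol server>
setx _NT_SYMBOL_PATH "srv*C:\Users\wakatana\AppData\Local\Temp\SymbolCache*https://msdl.microsoft.com/download/symbols"
```

With this set, a PDB already present in the cache directory is used before any server request is made.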
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 1,107 |
{"url":"https:\/\/www.nature.com\/articles\/s41598-022-12204-6","text":"Synthesis and robocasting of YAG xerogel: one-step conversion of ceramics\n\nAbstract\n\nAn optimized sol\u2013gel protocol was carried out to produce an yttrium aluminum garnet (YAG) xerogel from aluminum alkoxide and an yttrium salt on a semi-pilot scale. This xerogel was successfully used without prior pyrolysis as a solid load with the aid of additives in the preparation of pastes. Thermal treatment of the green bodies, obtained by robocasting of the paste, led to cohesive single-phase YAG ceramics. Manufacturing ceramic pieces by additive methods will allow shaping complex forms, while the single step conversion\/consolidation would simplify the technological process, reducing global energy costs. Since YAG possesses high strength and good creep behavior at high temperatures, these refractory pieces could replace the metal alloys used in turbine blades for deep space exploration. Structural, thermal and chemical characterizations were performed on xerogel powders, pastes, and YAG ceramics.\n\nIntroduction\n\nThe French Space Agency (CNES) has carried out research and development into oxide ceramics with the aim of improving the design of crucial subsystems for space propulsion. The maximum allowable turbine temperature, imposed by the resistance of metallic alloys, represents a performance limitation for liquid propulsion rocket engine cycles. The introduction of oxide ceramics for stator\/rotor turbine parts could be a promising solution to increase the cycle temperature and achieve performance gains accordingly. 
From a lifetime standpoint, creep-resistant ceramics would be the key technology for the development of onboard power production systems for deep-space exploration1. Yttrium aluminum garnet (YAG, Y3Al5O12) was chosen for this purpose. Besides being known as a laser gain host material2,3,4 for solid-state lasers5, it can also be utilized for its mechanical characteristics. Indeed, it presents interesting mechanical properties at high temperature6, due to its high strength, good creep behavior at high temperatures (>\u20091000\u00a0\u00b0C), good physical and chemical stability, low thermal conductivity, and good water vapor corrosion resistance7. It is also used in oxidizing environments for thermal barrier coatings8 or in applications requiring long-term retention9 as well.\n\nAmong all the reported protocols for YAG preparation including solid-state10,11, sol\u2013gel-based synthesis has proven to be a good method to prepare single-phased YAG, as the homogeneous mixing of precursors in the sol\u2013gel method guarantees the chemical uniformity of the product and a lower processing temperature12. For example, following this process, Gowda13 prepared gels of yttria and aluminum tri-sec-butoxide acetate, which crystallized into YAG when thermally treated between 800 and 1400\u00a0\u00b0C. Furthermore, Manalert and Rahaman14 obtained amorphous YAG from a mixture of aluminum tri-sec-butoxide and yttrium acetate hydrate using the sol\u2013gel method and supercritical drying with extraction of CO2. Finally, Singlard et al.15 developed a sol\u2013gel synthesis of single-phased YAG from aluminum tri-sec-butoxide and anhydrous yttrium chloride and subsequent heat treatment.\n\nIn any case, these powders must be manufactured and shaped while maintaining their properties as ceramics. Currently, due to its low cost and ease of use, extrusion is one of the most widely used technologies for the direct shaping of ceramics16,17. 
In the case of YAG manufacturing, just a few examples can be found in the literature, namely the 3D printing using a mixed powder aqueous slurry18 and the 3D direct ink writing of YAG nanoparticles19. However, most of these innovations belong to the optics field, where YAG is doped with rare-earth metal elements and the desired properties are related to refractive index17, photoluminescence20, etc., and none deals with the extrusion of xerogels.\n\nFrom a technological point of view, the 3D printing process requires a large quantity of solid load. Nevertheless, as often mentioned in the literature, chemical routes for YAG powders tend to be limited to laboratory-scale quantities, and yielding larger amounts can be a challenge. Scaling up YAG powder production is far from straightforward, as enlarging can lead to the formation of impurities, influence the reproducibility of the process, or alter the microstructure of the products. Moreover, the direct use of xerogel as a solid load in the paste can offer an alternative way to simplify the heat treatment profile. Indeed, it is possible to take advantage of the debinding step to promote the conversion of the xerogel into crystalline YAG, avoiding the usual prepyrolysis of the xerogel.\n\nThe aim of this study was to improve and scale up the process for preparing a YAG xerogel. Then, the printability of the xerogel-based paste was studied to shape consolidated YAG pieces in a one-step process. Thermal, structural, and microstructural characterizations were performed on the samples.\n\nMaterials preparation\n\nScaling up the sol\u2013gel synthesis of YAG xerogel and YAG powder\n\nThe metal precursors used for the sol\u2013gel synthesis were anhydrous yttrium chloride (99.99%, Sigma\u2013Aldrich) and aluminum tri-sec-butoxide (97%, Sigma\u2013Aldrich), while the solvents used were anhydrous ethanol (94\u201396%, Alfa Aesar) and isopropanol (99.9% Fisher Scientific). 
For hydrolysis, ammonia (28%, Alfa Aesar) was employed. We produced YAG xerogel following the protocol described by Singlard et al.15 but decreasing the maturation temperature from 60\u00a0\u00b0C to room temperature. This protocol was denoted \u201claboratory scale synthesis\u201d and noted as \u201cL\u201d. To scale up the production of xerogel while keeping the same characteristics as the xerogel produced by L, a second protocol, called \u201csemi-pilot scale synthesis\u201d and noted as \u201cSP\u201d, was carried out. In this protocol, 0.27\u00a0mol of yttrium chloride was dissolved in 330\u00a0mL of anhydrous ethanol. On the other hand, 0.25\u00a0mol of aluminum tri-sec-butoxide was dissolved in 330\u00a0mL of isopropanol. Both solutions were mixed in a 2\u00a0L reactor inside a glove box, mainly to preserve the anhydrous character of the yttrium chloride powder. Then, hydrolysis was carried out using 83\u00a0mL of ammonia as a catalyst. The solution was aged for 15\u00a0h at room temperature to mature the sol and centrifuged at 6000\u00a0rpm. A detailed protocol for L and SP is shown in Fig.\u00a01. For both syntheses, three washes in deionized water were necessary. The centrifuged xerogel was dried at 120\u00a0\u00b0C\/15\u00a0h under 115\u00a0mbar of pressure. To verify that the YAG phase is formed from L and SP xerogels, a subsequent calcination was performed. L and SP were heated in a first step at 300\u00a0\u00b0C for 2\u00a0h with a heating rate of 2\u00a0\u00b0C\/min, followed by a second step at 1000\u00a0\u00b0C for 1\u00a0h with a heating rate of 5\u00a0\u00b0C\/min, and finally natural cooling. 
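The two-step calcination schedule just described can be sketched as a small duration calculation. This is an illustrative sketch only; the 20 °C ambient starting temperature is an assumption, not a value from the paper.

```python
# Sketch of the two-step calcination schedule: ramp to 300 °C at 2 °C/min,
# dwell 2 h, then ramp to 1000 °C at 5 °C/min, dwell 1 h (natural cooling
# afterwards is excluded). The 20 °C start is an assumed ambient value.
def schedule_minutes(start_c=20.0):
    steps = [
        (300.0, 2.0, 120.0),   # (target °C, ramp °C/min, dwell min)
        (1000.0, 5.0, 60.0),
    ]
    t, total = start_c, 0.0
    for target, rate, dwell in steps:
        total += (target - t) / rate + dwell
        t = target
    return total

print(schedule_minutes())  # 460.0 minutes up to the end of the 1000 °C dwell
```

In other words, the full profile before natural cooling takes a little under 8 hours under these assumptions.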
After calcination, the sample corresponding to L was called L1000, and that for SP was named SP1000.\n\nPreparation and extrusion of YAG xerogel pastes\n\nThe YAG xerogel paste designed for extrusion is composed of a mixture of SP xerogel as the solid load and poly vinyl alcohol, PVA (Rhodoviol 25\/140, VWR chemicals, Leuven, Belgium), in aqueous solution (97\u00a0g\/L) as the unique additive. The paste was prepared as follows: a specific volume of polyvinyl alcohol solution was vigorously mixed with 68.75\u00a0wt% SP xerogel, resulting in the formation of a slurry that was stirred until a homogeneous paste was obtained.\n\nThe paste was then extruded to form cord structures. Before any heat treatment, the extruded pieces were exposed to 50% relative humidity (RH) for at least 15\u00a0h at room temperature. Then, the pieces were debinded at 600\u00a0\u00b0C for 2\u00a0h with a heating rate of 2\u00a0\u00b0C\/min to eliminate the aqueous polyvinyl alcohol additive and organic remnants from the sol\u2013gel synthesis.\n\nFinally, these pieces were treated at 700\u00a0\u00b0C, 800\u00a0\u00b0C, 1000\u00a0\u00b0C, 1400\u00a0\u00b0C, 1550\u00a0\u00b0C and 1700\u00a0\u00b0C. All these thermal treatments were performed with a dwelling time of 1\u00a0h following a heating rate of 5\u00a0\u00b0C\/min under static conditions, as shown in Fig.\u00a02.\n\nResults and discussions\n\nScaling-up the YAG xerogel sol\u2013gel synthesis\n\nThe diagrams resulting from X-ray diffraction analysis are shown in Fig.\u00a03 for the L and SP xerogels, as well as the calcined L1000 and SP1000 samples, thus allowing comparison of the powders obtained following the two synthesis methods. As expected, the L and SP xerogels are amorphous. When calcined at 1000\u00a0\u00b0C (L1000 and SP1000), the organic residues were eliminated, leading to a polycrystalline ceramic powder. 
According to the PDF file 04-007-2667, both XRD patterns for the calcined xerogels match with a pure YAG structure, without any notable extra phase. Even if the synthesis protocol was slightly modified for L, compared to that of Singlard and collaborators15, we observe the same features for the xerogel and calcined xerogel.\n\nFigure\u00a04 displays the particle size distribution in number for the L, SP, L1000 and SP1000 samples. In all cases, these distributions are very similar; there is a single population between 2 and 3\u00a0\u00b5m in diameter. D50 and D90 values can be found in Table 1. Regarding the volume distribution, shown in the inset, the presence of a few larger agglomerates (greater than 30\u00a0\u00b5m) is evidenced. The scarcity of these agglomerates was confirmed during the extrusion of the SP-based cords, since the nozzle did not clog and the YAG xerogel cords printed cleanly.\n\nFinally, the density of the powders is another important parameter to consider when scaling up the sol\u2013gel synthesis. Table 1 displays the density values for the xerogels and calcined samples. The densities of the xerogels were approximately 2.20\u00a0g\/cm3, whether prepared with the laboratory or the enlarged scale synthesis protocols. This relatively low value is due to the high amount of organic phase in the samples, which appears to be similar for both cases. After calcination at 1000\u00a0\u00b0C, the density of the samples reached 4.47\u00a0g\/cm3 and 4.46\u00a0g\/cm3 for L1000 and SP1000, respectively. This increment is due to the thermal conversion of the xerogel into an inorganic network. Once the organic residues were eliminated, the amorphous phase was allowed to crystallize into the YAG structure, which does not mean that the arrangement between the grains was optimized. 
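The measured densities can be converted into relative deviations from the theoretical YAG density, as a quick check of the figures quoted in the text. Note that for SP1000 the arithmetic gives roughly 2.0%; the paper's 1.9% presumably reflects rounding in the underlying raw measurements.

```python
# Relative deviation of measured density from the theoretical YAG density
# (4.55 g/cm3), for the L1000 and SP1000 calcined powders.
RHO_YAG = 4.55  # g/cm3, theoretical density of Y3Al5O12

def rel_dev_percent(rho_exp, rho_theo=RHO_YAG):
    return (rho_theo - rho_exp) / rho_theo * 100.0

print(round(rel_dev_percent(4.47), 1))  # L1000  -> 1.8
print(round(rel_dev_percent(4.46), 1))  # SP1000 -> 2.0 (quoted as 1.9% in the text)
```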
Nevertheless, considering the measurement errors and the YAG theoretical density value of 4.55\u00a0g\/cm3, it can be noted that the relative differences between the theoretical and experimental densities are 1.8% for L1000 and 1.9% for SP1000, meaning that the samples are close to pure YAG.\n\nIn conclusion, both xerogels led to the formation of pure single-phase YAG samples upon calcination at 1000\u00a0\u00b0C, regardless of the scale (L or SP) of the syntheses. Moreover, the particle size distributions and densities of the xerogels and powders are similar, which confirms that the products remain closely similar after scaling up the laboratory-scale procedure and allows the use of the semi-pilot scale synthesis for the subsequent preparation of pastes and ceramics.\n\nPreparation and thermal behavior of the xerogel paste\n\nIn all the following experiments, the SP xerogel was used as the solid load in the paste formulation. The powder and paste thermal behaviors were thus studied through thermal analyses, as shown in Fig.\u00a05 (full thermograms are provided in the additional information section), which displays the weight loss for the xerogel and for the paste prepared with SP and aqueous polyvinyl alcohol solution. The powder, denoted by a dotted line, exhibits a global weight loss of 38.6%, which can be divided into three zones. The first zone, from 20 to 120\u00a0\u00b0C, presents a loss of 6.8% associated with the evaporation of organic solvents and water. The second zone, between 120 and 800\u00a0\u00b0C, exhibits a weight loss of 28.5%. It is well known that between 200 and 500\u00a0\u00b0C, decomposition and\/or combustion of organic residues occurs. The final zone, from 800 to 1200\u00a0\u00b0C, corresponds to a very small weight loss of 3.3%. This can be connected to residual decarbonization and crystallization of the amorphous network into the YAG structure, as already shown in Fig.\u00a03. 
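As a quick consistency check, the zone-by-zone weight losses read from the thermograms should sum to the reported global losses (38.6% for the powder, 61.3% for the paste):

```python
# TGA consistency check: zone losses (wt%) should sum to the global loss.
powder_zones = [6.8, 28.5, 3.3]   # SP xerogel: zones 1-3
paste_zones = [29.4, 29.7, 2.2]   # SP xerogel / PVA paste: zones 1-3

assert abs(sum(powder_zones) - 38.6) < 0.05
assert abs(sum(paste_zones) - 61.3) < 0.05
print(round(sum(powder_zones), 1), round(sum(paste_zones), 1))  # 38.6 61.3
```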
The different weight losses are in good agreement with the results reported by Singlard et al.15 in terms of the number of defined zones and the nature of the related thermal events. On the other hand, the thermogram for the paste, denoted by a continuous line, shows the same global features as observed in the powder thermogram. However, the total weight loss is largely increased, to 61.3%, since the sample contains aqueous polyvinyl alcohol in addition to the organic residues issued from the sol\u2013gel synthesis. The three weight losses for the first, second and third zones are 29.4%, 29.7% and 2.2%, respectively. Thus, the presence of aqueous polyvinyl alcohol essentially modifies the evaporation and decomposition zones, while barely affecting the last decarbonization\/crystallization event.\n\nExtrusion and thermal transformation of the xerogel paste\n\nXRD patterns for the extruded cords calcined at different temperatures are shown in Fig.\u00a06. At room temperature, no peak is clearly observed, as the samples exhibit an amorphous structure characteristic of xerogels. At 600\u00a0\u00b0C, the amorphous phase remains predominant. However, once the temperature reaches 700\u00a0\u00b0C, crystallization occurs. The same XRD reflections become more defined and intense at 800\u20131000\u00a0\u00b0C. Using the reference card PDF n\u00b004-007-2667, it was found that all the peaks can be indexed with respect to the garnet structure. From 1400 up to 1700\u00a0\u00b0C, the same reflections are visible, although they appear to be much sharper. However, between 1550 and 1700\u00a0\u00b0C, the presence of impurities is barely distinguished in the X-ray patterns. The identification of this minor impurity is not possible, as it is mostly present as peak shoulders and very low intensity signatures. 
One has to keep in mind that despite the purity of the aluminum precursor, yttrium aluminum monoclinic, YAM, and yttrium aluminum perovskite, YAP, intermediate phases were reported to form during YAG synthesis, and to co-exist after prolonged heating in the range between 1000 and 1800\u00a0\u00b0C21,22.\n\nMoreover, the broadening of the main peak (4 2 0) in the diagrams was measured from 700 to 1700\u00a0\u00b0C to further investigate the ordering of the YAG structure in the particles. These results are gathered in Table 2, which shows that the broadening is quite stable at 0.4\u00b0 up to 1000\u00a0\u00b0C. Then, starting at 1400 up to 1700\u00a0\u00b0C, a sharp decrease in the (4 2 0) peak broadening, down to 0.1\u00b0, is observed, suggesting a better organization of the YAG and the presence of a lower amount of microstructural defects in these samples. In summary, the formation of garnet from the amorphous inorganic network, obtained after complete combustion of the organic residues and polyvinyl alcohol content, starts at 700\u00a0\u00b0C and is achieved at 1000\u00a0\u00b0C. Then, between 1000 and 1400\u00a0\u00b0C, an increase in the size of the coherent domains is observed, which shows an activation of the material diffusion and a decrease in the density of defects. The broadening between 1400 and 1700\u00a0\u00b0C is stable at around 0.1\u00b0, which does not give more information about the organization of the coherent domains.\n\nFurthermore, high-resolution images of the layer-by-layer manufactured cords calcined at different temperatures were taken, Fig.\u00a07(a-f). These captures showed the stacking of the printed paste. The definition, shape, and consistency of the as-printed cords (a) are retained even after the debinding (b), calcination (c-d) and consolidation (e-f) steps. Note that at 700\u00a0\u00b0C the presence of carbon residues is visible in the dark gray color of the sample. 
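Returning to the (4 2 0) peak-broadening data in Table 2: although the paper quantifies ordering via the integral breadth only, the Scherrer equation gives an illustrative feel for the coherent-domain sizes such breadths correspond to. All the inputs below (shape factor K = 0.9, Cu K-alpha wavelength, 2-theta of roughly 33.3° for the YAG (4 2 0) reflection, no instrumental-broadening correction) are assumptions for this sketch, not values from the paper.

```python
import math

# Illustrative Scherrer estimate: coherent-domain size from the (4 2 0)
# integral breadth. Assumed inputs, not taken from the paper: K = 0.9,
# lambda(Cu K-alpha) = 0.15406 nm, 2-theta ~ 33.3°, and no correction
# for instrumental broadening.
def scherrer_nm(breadth_deg, two_theta_deg=33.3, wavelength_nm=0.15406, k=0.9):
    beta = math.radians(breadth_deg)           # integral breadth in radians
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

print(round(scherrer_nm(0.4), 1))  # roughly 21 nm for the 700-1000 °C samples
print(round(scherrer_nm(0.1), 1))  # roughly 83 nm for the 1400-1700 °C samples
```

Under these assumptions, the drop from 0.4° to 0.1° corresponds to a roughly fourfold growth of the coherent domains, consistent with the qualitative discussion above.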
To better observe the evolution of their microstructure, SEM micrographs of fractured cords were analyzed, Fig.\u00a07(g-l). From room temperature (g) to 700\u00a0\u00b0C (h), the microstructure is typical of a xerogel with poorly arranged small grains. At 1000\u00a0\u00b0C (i), the packing of the grains improves; however, they remain quite small. In the temperature range from 1400 to 1700\u00a0\u00b0C (j-l), the better crystallization of the grains, suggested by the lower broadening of the XRD peaks, is visible as their size increases up to 2\u00a0\u00b5m at 1700\u00a0\u00b0C.\n\nRecalling that the fresh paste was formulated from an aqueous polyvinyl alcohol solution and xerogel, it should be noted that the global cohesion between the printed cords was attained in the green pieces and was retained after thermal treatments. The sintering of the material was also found to be effective between 1550 and 1700\u00a0\u00b0C, since the coalescence was thermally activated without abnormal grain growth.\n\nFinally, the relative density was measured at different temperatures considering 4.55\u00a0g\/cm3 as the theoretical density of YAG, see Fig.\u00a08. At 700\u00a0\u00b0C, the relative density was about 60% due to the internal porosity and the incomplete conversion of the xerogel into YAG. With the rise of temperature, organic phases are fully eliminated: for example, at 1000\u00a0\u00b0C, the relative density increased by 10%. This increment coincides with the full crystallization of the xerogel; nevertheless, internal porosity remains. From 1400 to 1700\u00a0\u00b0C, a noticeable improvement in the densification is observed. At 1400\u00a0\u00b0C, the relative density is around 80%. Then, at 1550\u00a0\u00b0C, the cords reached the highest observed relative density of around 90%, indicating that the packing of the grains was optimized and the internal porosity was partially eliminated. 
Finally, heating samples as high as 1700\u00a0\u00b0C did not provide a further elimination of the internal porosity, but rather an activation of grain growth.\n\nConclusions\n\nWe successfully enlarged the production of YAG xerogel by modifying a protocol designed for a \u201claboratory scale\u201d synthesis. Using dried YAG xerogel without prior pyrolysis as a solid load, a xerogel paste was formulated and then printed by robocasting. The printed cord structures were calcined at different temperatures to monitor the transformation of the xerogel paste into a crystalline YAG ceramic. We have shown that it was possible to sinter and to obtain cohesive pieces after thermal treatments in the range of 1550\u20131700\u00a0\u00b0C, despite some remaining internal porosity.\n\nFinally, the direct printing of xerogel paste without the usual prior pyrolysis, which implies the release of larger amounts of organics, was not detrimental to the fabrication process. In addition, it reduces costs and would be appreciated by the industrial sector as an energy-saving process. The fabrication of turbine parts for space exploration from YAG xerogel seems to be a promising approach.\n\nMethods\n\nThe cord structures were extruded with a commercial 3D ceramic printer (Delta WASP 2040 clay) and a liquid deposit modeling extruder with a 1.2\u00a0mm diameter nozzle. Experimental conditions of 4 bars of compressed air flow, 4\u00a0mm\/s printing speed, 1.5\u00a0mm high layers and a temperature of 20\u00a0\u00b0C with 50% relative humidity (RH) were applied for all the extrusion tests. Green cords were dried at room temperature for 15\u00a0h in air with 50% RH.\n\nThe particle size distribution of the powders was measured with an LA-950 laser particle size analyzer (Horiba Ltd, Kyoto, Japan), in which a particle from the sample will scatter light at a defined angle determined by its size. 
A group of particles will thus produce a pattern of scattered light defined by its intensity and angle, which can be processed into a particle size distribution product. The measurements were carried out using the Fraunhofer-Kernel method, which is used to analyze the reflected and diffracted beams for the alumina particles.\n\nXRD analyses for the powders and extruded cords were carried out with a Bruker-D8 Advance with a Bragg\u2013Brentano geometry and a Cu K\u03b1 source, with an angular measurement range (2\u03b8) of 15\u201390\u00b0, a step size of 0.012\u00b0 and an equivalent time per step of 49.92\u00a0s. The identification of the crystalline phases refers to Joint Committee Powder Diffraction Standard (JCPDS) cards. The broadening of the highest peak (4 2 0) was measured to quantify the ordering degree of the crystalline YAG particles, with the help of a Voigt function taking into account the K\u03b11-K\u03b12 doublet of the source, which was used to determine the peak profile and extract its integral broadening.\n\nMicrostructures of the cords were observed with a scanning electron microscope (FEI quanta 450 FEG, Thermo Fisher Scientific, Eindhoven, The Netherlands) using a large field detector with a 5-kV beam voltage and a chamber pressure of 10\u22125\u00a0Pa. For the samples without thermal treatment, the extruded pieces were dried at room temperature for 72\u00a0h and then cut off to place them inside the sample holder. The samples were not metallized prior to observation. High-resolution captures were taken using a micro-imaging lens system Optem Fusion (camera mount 35-08-70-000) with a camera tube 35-41-10-000 and a fixed magnification of 12.5:1.\n\nThermogravimetric analyses (TGA) were conducted for the xerogels and paste with an SDT Q600, TA Instruments, where the samples were heated at 1200\u00a0\u00b0C in a platinum crucible at a heating rate of 5\u00a0\u00b0C\/min under a dry airflow of 100\u00a0mL\/min. 
It should be noted that every sample had an initial mass of approximately 50\u00a0mg.\n\nThe density of the powders was measured with a helium pycnometer (AccuPycII 1340, Micromeritics), in which the samples were placed into a 1\u00a0cm3 chamber. Helium gas was admitted and then expanded into another precision internal volume. The pressure before and after expansion was recorded and used to determine the sample volume. This operation was repeated 10 times. On the other hand, the densities of the calcined 3D structures were evaluated employing Archimedes\u2019 principle, using deionized water and a digital analytical balance operating with an accuracy of 0.0001\u00a0g. Density measurements were replicated three times and the average value was used to compare the different samples. Therefore, densities were calculated using formula (1):\n\n$$\\rho = \\frac{{m_{1} }}{{m_{1} - m_{2} }} \\times \\rho_{w}$$\n(1)\n\nwhere \u03c1 is the density (g\/cm3), m1 is the weight of the sample, m2 is the weight of the sample suspended inside a water-filled container and \u03c1w is the density of distilled water (g\/cm3).\n\nReferences\n\n1. Koroteev, A. S., Andrianov, D. I., Karevskiy, A. V., Kiryushin, E. N., Popov, A. V., Semenkin, A. V., Solodukhin, A. E., Zakharenkov, L. E., Jansen, F., Brandt, T., Maiwald, V., Bauer, W., Gomez, A. M., Jahnke, S. S., Hillebrandt, M., Richter, M., Ferraris, S., Tosi, M. C., Masson, F., Combettes, J., Oriol, S., Worms, J.-C., Detsis, E., Muszynski, M., Lassoudi\u00e8re, F., Granjon, R., Tinsley, T., Hodgson, Z., Findlay, J. A. P., & Guimar\u00e3es L. N. F. Test bench for key components of megawatt class international power and propulsion system ground demonstration. In 7th European Conference for Aeronautics and Space Sciences (EUCASS). Milan, Italy. DOI: https:\/\/doi.org\/10.13009\/EUCASS2017-198 (2017).\n\n2. Dong, J. et al. Composite Yb:YAG\/Cr4+:YAG ceramics picosecond microchip lasers. Opt. Express 15, 14516\u201314523. 
https:\/\/doi.org\/10.1364\/OE.15.014516 (2007).\n\n3. Doroshenko, A. G. et al. Effect of the sintering temperature on the microstructure and optical properties of YAG:Cr, Mg ceramics. Opt. Mater. 98, 109505. https:\/\/doi.org\/10.1016\/j.optmat.2019.109505 (2019).\n\n4. Mah, T.-I., Parthasarathy, T. A. & Lee, H. D. Polycrystalline YAG; structural or functional. J. Ceram. Process. Res. 5, 369\u2013379 (2004).\n\n5. Petersen, A. et al. Focus issue introduction: advanced solid-state lasers 2020. Opt. Mater. Express 11, 952\u2013954. https:\/\/doi.org\/10.1364\/OME.423641 (2021).\n\n6. Xie, Y. et al. Lightweight, high-strength, flexible YAG fibrous membrane for efficient heat insulation. J. Alloys Compd. 876, 159978. https:\/\/doi.org\/10.1016\/j.jallcom.2021.159978 (2021).\n\n7. Corman, G. S. High-temperature creep of some single crystal oxides. Ceram. Eng. Sci. Proc. 12, 1745\u20131766 (1991).\n\n8. Armani, C. J., Ruggles-Wrenn, M. B., Hay, R. S., Fair, G. E. & Keller, K. A. Creep of polycrystalline yttrium aluminum garnet (YAG) at elevated temperature in air and in steam. Mater. Sci. Eng. A 589, 125\u2013131. https:\/\/doi.org\/10.1016\/j.msea.2013.09.083 (2014).\n\n9. Lu, Q., Dong, W., Wang, H. & Wang, X. A novel way to synthesize yttrium aluminum garnet from metal-inorganic precursors. J. Am. Ceram. Soc. 85, 490\u2013492. https:\/\/doi.org\/10.1111\/j.1151-2916.2002.tb00119.x (2002).\n\n10. Nyman, M., Caruso, J., Hampden-Smith, M. J. & Kodas, T. T. Comparison of solid-state and spray-pyrolysis synthesis of yttrium aluminate powders. J. Am. Ceram. Soc. 80, 1231\u20131238. https:\/\/doi.org\/10.1111\/j.1151-2916.1997.tb02969.x (1997).\n\n11. Ivanauskas, F., Kareiva, A. & Lapcun, B. On the modelling of solid-state reactions. Synthesis of YAG. J. Math. Chem. 37, 365\u2013376. https:\/\/doi.org\/10.1007\/s10910-004-1103-2 (2005).\n\n12. Nair, P. A. K., Vasconcelos, W. L., Paine, K. & Calabria-Holley, J. A review on applications of sol-gel science in cement. Constr. 
Build. Mater. 291, 123065. https:\/\/doi.org\/10.1016\/j.conbuildmat.2021.123065 (2021).\n\n13. Gowda, G. Synthesis of yttrium aluminates by the sol-gel process. J. Mater. Sci. Lett. 5, 1029\u20131032. https:\/\/doi.org\/10.1007\/BF01730273 (1986).\n\n14. Manalert, R. & Rahaman, M. N. Sol-gel processing and sintering of yttrium aluminum garnet (YAG) powders. J. Mater. Sci. 31, 3453\u20133458. https:\/\/doi.org\/10.1007\/BF00360748 (1996).\n\n15. Singlard, M. et al. Sol-gel synthesis of yttrium aluminum garnet (YAG): effects of the precursor nature and concentration on the crystallization. J. Sol-Gel Sci. Technol. 87, 496\u2013503. https:\/\/doi.org\/10.1007\/s10971-018-4722-y (2018).\n\n16. Li, W. et al. Extrusion-based additive manufacturing of functionally graded ceramics. J. Eur. Ceram. Soc. 41, 2049\u20132057. https:\/\/doi.org\/10.1016\/j.jeurceramsoc.2020.10.029 (2021).\n\n17. Carloni, D., Zhang, G. & Wu, Y. Transparent alumina ceramics fabricated by 3D printing and vacuum sintering. J. Eur. Ceram. Soc. 41, 781\u2013791. https:\/\/doi.org\/10.1016\/j.jeurceramsoc.2020.07.051 (2021).\n\n18. Zhang, G., Carloni, D. & Wu, Y. 3D printing of transparent YAG ceramics using copolymer-assisted slurry. Ceram. Int. 46, 17130\u201317134. https:\/\/doi.org\/10.1016\/j.ceramint.2020.03.247 (2020).\n\n19. Seeley, Z. et al. 3D printed transparent ceramic YAG laser rods: matching the core-clad refractive index. Opt. Mater. 107, 110121 (2020).\n\n20. Nseowo Udofia, E. & Zhou, W. 3D printed optics with a soft and stretchable optical material. Addit. Manuf. 31, 100912. https:\/\/doi.org\/10.1016\/j.addma.2019.100912 (2020).\n\n21. Bhattacharyya, S. & Ghatak, S. Methods of synthesis of Y3Al5O12 (YAG)\u2014a review. Trans. Indian Ceram. Soc. 66, 77\u201384 (2007).\n\n22. Kupp, E. R., Kochawattana, S., Lee, S.-H., Misture, S. & Messing, G. L. Particle size effects on yttrium aluminum garnet (YAG) phase formation by solid-state reaction. J. Mater. Res. 
29, 2303\u20132311 (2014).\n\nAcknowledgements\n\nThe authors wish to thank French Space Agency (CNES), for the technical and project support throughout the activity.\n\nAuthor information\n\nAuthors\n\nContributions\n\nN.F.M.: experiments\u2019 conduction and writing original draft. J.J., F.R., N.F.M.: analysis, discussion of the results and manuscript edition. L.O.: experimental work. S.O., G.F., S.R.: project supervision and manuscript edition.\n\nCorresponding author\n\nCorrespondence to Sylvie Rossignol.\n\nEthics declarations\n\nCompeting interests\n\nThe authors declare no competing interests.\n\nPublisher's note\n\nSpringer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\nRights and permissions\n\nOpen Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:\/\/creativecommons.org\/licenses\/by\/4.0\/.\n\nReprints and Permissions\n\nFlores-Martinez, N., Ouamara, L., Remondiere, F. et al. Synthesis and robocasting of YAG xerogel: one-step conversion of ceramics. Sci Rep 12, 8454 (2022). 
https:\/\/doi.org\/10.1038\/s41598-022-12204-6","date":"2022-08-16 17:39:25"} | null | null |
{"url":"https:\/\/www.aimsciences.org\/article\/doi\/10.3934\/dcds.2016.36.4101","text":"# American Institute of Mathematical Sciences\n\nAugust\u00a0 2016,\u00a036(8):\u00a04101-4131. doi:\u00a010.3934\/dcds.2016.36.4101\n\n## Improved estimates for nonoscillatory phase functions\n\n 1 Department of Mathematics, University of California, Davis, Davis, CA 95616, United States 2 Department of Computer Science, Yale University, New Haven, CT 06511, United States\n\nReceived\u00a0 May 2015 Published\u00a0 March 2016\n\nRecently, it was observed that solutions of a large class of highly oscillatory second order linear ordinary differential equations can be approximated using nonoscillatory phase functions. In particular, under mild assumptions on the coefficients and wavenumber $\\lambda$ of the equation, there exists a function whose Fourier transform decays as $\\exp(-\\mu |\\xi|)$ and which represents solutions of the differential equation with accuracy on the order of $\\lambda^{-1} \\exp(-\\mu \\lambda)$. In this article, we establish an improved existence theorem for nonoscillatory phase functions. Among other things, we show that solutions of second order linear ordinary differential equations can be represented with accuracy on the order of $\\lambda^{-1} \\exp(-\\mu \\lambda)$ using functions in the space of rapidly decaying Schwartz functions whose Fourier transforms are both exponentially decaying and compactly supported. These new observations are used in the analysis of a method for the numerical solution of second order ordinary differential equations whose running time is independent of the parameter $\\lambda$. This algorithm will be reported at a later date.\nCitation: James Bremer, Vladimir Rokhlin. Improved estimates for nonoscillatory phase functions. Discrete & Continuous Dynamical Systems - A, 2016, 36 (8) : 4101-4131. doi: 10.3934\/dcds.2016.36.4101\n##### References:\n [1] G. Andrews, R. Askey and R. 
Roy, Special Functions,, Cambridge University Press, (1999). doi:\u00a010.1017\/CBO9781107325937. [2] R. Bellman, Stability Theory of Differential Equations,, Dover Publications, (1953). [3] O. Bor\u016fvka, Linear Differential Transformations of the Second Order,, The English University Press, (1971). [4] E. Coddington and N. Levinson, Theory of Ordinary Differential Equations,, Krieger Publishing Company, (1955). [5] A. O. Daalhuis, Hyperasymptotic solutions of second-order linear differential equations. II,, Methods and Applications of Analysis, 2 (1995), 198. doi:\u00a010.4310\/MAA.1995.v2.n2.a5. [6] A. O. Daalhuis and F. W. J. Olver, Hyperasymptotic solutions of second-order linear differential equations. I,, Methods and Applications of Analysis, 2 (1995), 173. doi:\u00a010.4310\/MAA.1995.v2.n2.a4. [7] M. V. Fedoryuk, Asymptotic Analysis,, Springer-Verlag, (1993). doi:\u00a010.1007\/978-3-642-58016-1. [8] G. B. Folland, Real Analysis: Modern Techniques and Their Application,, 2nd edition, (1999). [9] M. Goldstein and R. M. Thaler, Bessel functions for large arguments,, Mathematical Tables and Other Aids to Computation, 12 (1958), 18. doi:\u00a010.2307\/2002123. [10] L. Grafakos, Classical Fourier Analysis,, Springer, (2014). doi:\u00a010.1007\/978-1-4939-1194-3. [11] L. Grafakos, Modern Fourier Analysis,, Springer, (2009). doi:\u00a010.1007\/978-0-387-09434-2. [12] Z. Heitman, J. Bremer and V. Rokhlin, On the existence of nonoscillatory phase functions for second order ordinary differential equations in the high-frequency regime,, Journal of Computational Physics, 290 (2015), 1. doi:\u00a010.1016\/j.jcp.2015.02.028. [13] Z. Heitman, J. Bremer, V. Rokhlin and B. Vioreanu, On the asymptotics of Bessel functions in the Fresnel regime,, Applied and Computational Harmonic Analysis, 39 (2015), 347. doi:\u00a010.1016\/j.acha.2014.12.002. [14] L. H\u00f6rmader, The Analysis of Linear Partial Differential Operators I,, 2nd edition, (1990). 
doi:\u00a010.1007\/978-3-642-61497-2. [15] L. H\u00f6rmader, The Analysis of Linear Partial Differential Operators II,, 2nd edition, (1990). [16] E. Kummer, De generali quadam aequatione differentiali tertti ordinis,, Progr. Evang. K\u00f6ngil. Stadtgymnasium Liegnitz., (). [17] F. Neuman, Global Properties of Linear Ordinary Differential Equations,, Kluwer Academic Publishers, (1991). [18] F. Olver, D. Lozier, R. Boisvert and C. Clark, NIST Handbook of Mathematical Functions,, Cambridge University Press, (2010). [19] W. Rudin, Principles of Mathematical Analysis,, McGraw-Hill, (1976). [20] J. Segura, Bounds for the ratios of modified Bessel functions and associated Tur\u00e1n-type inequalities,, Journal of Mathematics Analysis and Applications, 374 (2011), 516. doi:\u00a010.1016\/j.jmaa.2010.09.030. [21] R. Spigler and M. Vianello, The phase function method to solve second-order asymptotically polynomial differential equations,, Numerische Mathematik, 121 (2012), 565. doi:\u00a010.1007\/s00211-011-0441-9. [22] N. Trefethen, Approximation Theory and Approximation Practice,, Society for Industrial and Applied Mathematics, (2013). [23] E. Zeidler, Nonlinear Functional Analysis and Its Applications, Volume I: Fixed-point Theorems,, Springer-Verlag, (1986). doi:\u00a010.1007\/978-1-4612-4838-5.\n\nshow all references\n\n##### References:\n [1] G. Andrews, R. Askey and R. Roy, Special Functions,, Cambridge University Press, (1999). doi:\u00a010.1017\/CBO9781107325937. [2] R. Bellman, Stability Theory of Differential Equations,, Dover Publications, (1953). [3] O. Bor\u016fvka, Linear Differential Transformations of the Second Order,, The English University Press, (1971). [4] E. Coddington and N. Levinson, Theory of Ordinary Differential Equations,, Krieger Publishing Company, (1955). [5] A. O. Daalhuis, Hyperasymptotic solutions of second-order linear differential equations. II,, Methods and Applications of Analysis, 2 (1995), 198. doi:\u00a010.4310\/MAA.1995.v2.n2.a5. 
[6] A. O. Daalhuis and F. W. J. Olver, Hyperasymptotic solutions of second-order linear differential equations. I,, Methods and Applications of Analysis, 2 (1995), 173. doi:\u00a010.4310\/MAA.1995.v2.n2.a4. [7] M. V. Fedoryuk, Asymptotic Analysis,, Springer-Verlag, (1993). doi:\u00a010.1007\/978-3-642-58016-1. [8] G. B. Folland, Real Analysis: Modern Techniques and Their Application,, 2nd edition, (1999). [9] M. Goldstein and R. M. Thaler, Bessel functions for large arguments,, Mathematical Tables and Other Aids to Computation, 12 (1958), 18. doi:\u00a010.2307\/2002123. [10] L. Grafakos, Classical Fourier Analysis,, Springer, (2014). doi:\u00a010.1007\/978-1-4939-1194-3. [11] L. Grafakos, Modern Fourier Analysis,, Springer, (2009). doi:\u00a010.1007\/978-0-387-09434-2. [12] Z. Heitman, J. Bremer and V. Rokhlin, On the existence of nonoscillatory phase functions for second order ordinary differential equations in the high-frequency regime,, Journal of Computational Physics, 290 (2015), 1. doi:\u00a010.1016\/j.jcp.2015.02.028. [13] Z. Heitman, J. Bremer, V. Rokhlin and B. Vioreanu, On the asymptotics of Bessel functions in the Fresnel regime,, Applied and Computational Harmonic Analysis, 39 (2015), 347. doi:\u00a010.1016\/j.acha.2014.12.002. [14] L. H\u00f6rmader, The Analysis of Linear Partial Differential Operators I,, 2nd edition, (1990). doi:\u00a010.1007\/978-3-642-61497-2. [15] L. H\u00f6rmader, The Analysis of Linear Partial Differential Operators II,, 2nd edition, (1990). [16] E. Kummer, De generali quadam aequatione differentiali tertti ordinis,, Progr. Evang. K\u00f6ngil. Stadtgymnasium Liegnitz., (). [17] F. Neuman, Global Properties of Linear Ordinary Differential Equations,, Kluwer Academic Publishers, (1991). [18] F. Olver, D. Lozier, R. Boisvert and C. Clark, NIST Handbook of Mathematical Functions,, Cambridge University Press, (2010). [19] W. Rudin, Principles of Mathematical Analysis,, McGraw-Hill, (1976). [20] J. 
Segura, Bounds for the ratios of modified Bessel functions and associated Tur\u00e1n-type inequalities,, Journal of Mathematics Analysis and Applications, 374 (2011), 516. doi:\u00a010.1016\/j.jmaa.2010.09.030. [21] R. Spigler and M. Vianello, The phase function method to solve second-order asymptotically polynomial differential equations,, Numerische Mathematik, 121 (2012), 565. doi:\u00a010.1007\/s00211-011-0441-9. [22] N. Trefethen, Approximation Theory and Approximation Practice,, Society for Industrial and Applied Mathematics, (2013). [23] E. Zeidler, Nonlinear Functional Analysis and Its Applications, Volume I: Fixed-point Theorems,, Springer-Verlag, (1986). doi:\u00a010.1007\/978-1-4612-4838-5.\n [1] Leon Ehrenpreis. Special functions. Inverse Problems & Imaging, 2010, 4 (4) : 639-647. doi: 10.3934\/ipi.2010.4.639 [2] M.T. Boudjelkha. Extended Riemann Bessel functions. Conference Publications, 2005, 2005 (Special) : 121-130. doi: 10.3934\/proc.2005.2005.121 [3] Jean Mawhin, James R. Ward Jr. Guiding-like functions for periodic or bounded solutions of ordinary differential equations. Discrete & Continuous Dynamical Systems - A, 2002, 8 (1) : 39-54. doi: 10.3934\/dcds.2002.8.39 [4] Jacques Wolfmann. Special bent and near-bent functions. Advances in Mathematics of Communications, 2014, 8 (1) : 21-33. doi: 10.3934\/amc.2014.8.21 [5] Marc Chamberland, Anna Cima, Armengol Gasull, Francesc Ma\u00f1osas. Characterizing asymptotic stability with Dulac functions. Discrete & Continuous Dynamical Systems - A, 2007, 17 (1) : 59-76. doi: 10.3934\/dcds.2007.17.59 [6] Gerard G\u00f3mez, Josep\u2013Maria Mondelo, Carles Sim\u00f3. A collocation method for the numerical Fourier analysis of quasi-periodic functions. I: Numerical tests and examples. Discrete & Continuous Dynamical Systems - B, 2010, 14 (1) : 41-74. doi: 10.3934\/dcdsb.2010.14.41 [7] Gerard G\u00f3mez, Josep\u2013Maria Mondelo, Carles Sim\u00f3. 
A collocation method for the numerical Fourier analysis of quasi-periodic functions. II: Analytical error estimates. Discrete & Continuous Dynamical Systems - B, 2010, 14 (1) : 75-109. doi: 10.3934\/dcdsb.2010.14.75 [8] Fr\u00e9d\u00e9ric Mazenc, Christophe Prieur. Strict Lyapunov functions for semilinear parabolic partial differential equations. Mathematical Control & Related Fields, 2011, 1 (2) : 231-250. doi: 10.3934\/mcrf.2011.1.231 [9] Yubo Chen, Wan Zhuang. The extreme solutions of PBVP for integro-differential equations with caratheodory functions. Conference Publications, 1998, 1998 (Special) : 160-166. doi: 10.3934\/proc.1998.1998.160 [10] H.Thomas Banks, Danielle Robbins, Karyn L. Sutton. Theoretical foundations for traditional and generalized sensitivity functions for nonlinear delay differential equations. Mathematical Biosciences & Engineering, 2013, 10 (5&6) : 1301-1333. doi: 10.3934\/mbe.2013.10.1301 [11] Luis Barreira, Claudia Valls. Stability of nonautonomous equations and Lyapunov functions. Discrete & Continuous Dynamical Systems - A, 2013, 33 (7) : 2631-2650. doi: 10.3934\/dcds.2013.33.2631 [12] Ali Akg\u00fcl, Mustafa Inc, Esra Karatas. Reproducing kernel functions for difference equations. Discrete & Continuous Dynamical Systems - S, 2015, 8 (6) : 1055-1064. doi: 10.3934\/dcdss.2015.8.1055 [13] Chihiro Matsuoka, Koichi Hiraide. Special functions created by Borel-Laplace transform of H\u00e9non map. Electronic Research Announcements, 2011, 18: 1-11. doi: 10.3934\/era.2011.18.1 [14] Krzysztof Fr\u0105czek, M. Lema\u0144czyk, E. Lesigne. Mild mixing property for special flows under piecewise constant functions. Discrete & Continuous Dynamical Systems - A, 2007, 19 (4) : 691-710. doi: 10.3934\/dcds.2007.19.691 [15] Volodymyr Pichkur. On practical stability of differential inclusions using Lyapunov functions. Discrete & Continuous Dynamical Systems - B, 2017, 22 (5) : 1977-1986. doi: 10.3934\/dcdsb.2017116 [16] Xia Li. 
Long-time asymptotic solutions of convex hamilton-jacobi equations depending on unknown functions. Discrete & Continuous Dynamical Systems - A, 2017, 37 (10) : 5151-5162. doi: 10.3934\/dcds.2017223 [17] Emmanuel N. Barron, Rafal Goebel, Robert R. Jensen. The quasiconvex envelope through first-order partial differential equations which characterize quasiconvexity of nonsmooth functions. Discrete & Continuous Dynamical Systems - B, 2012, 17 (6) : 1693-1706. doi: 10.3934\/dcdsb.2012.17.1693 [18] Michael Sch\u00f6nlein. Asymptotic stability and smooth Lyapunov functions for a class of abstract dynamical systems. Discrete & Continuous Dynamical Systems - A, 2017, 37 (7) : 4053-4069. doi: 10.3934\/dcds.2017172 [19] Josef Dibl\u00edk, Zden\u011bk Svoboda. Asymptotic properties of delayed matrix exponential functions via Lambert function. Discrete & Continuous Dynamical Systems - B, 2018, 23 (1) : 123-144. doi: 10.3934\/dcdsb.2018008 [20] Sihem Mesnager, Fengrong Zhang, Yong Zhou. On construction of bent functions involving symmetric functions and their duals. Advances in Mathematics of Communications, 2017, 11 (2) : 347-352. 
doi: 10.3934\/amc.2017027\n\n2017\u00a0Impact Factor:\u00a01.179","date":"2019-06-18 07:48:02","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6359399557113647, \"perplexity\": 3589.669533945312}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-26\/segments\/1560627998690.87\/warc\/CC-MAIN-20190618063322-20190618085322-00175.warc.gz\"}"} | null | null |
Q: When did scientists first experimentally measure wavelengths of EM radiation? How? When did someone first discover that short wavelength light has higher energy than long?
And can gamma ray wavelengths be measured, even today?
A: It was in the early 1800s, in his Bakerian lecture of 1803, that Thomas Young described an experiment demonstrating the wave nature of light. Although Newton had observed bright and dark patterns of light under certain circumstances, Young was the first to explain these patterns with the wave nature of light.
He used a candle and a card with a rectangular hole, across which he stretched a human hair.
He used his observations to measure the wavelength of light, demonstrating that light indeed behaves as a wave.
https://en.wikipedia.org/wiki/Light
"I therefore made a rectangular hole in a card, and bent its ends so as to support a hair parallel to the sides
of the hole; then, upon applying the eye near the hole, the hair, of course, appeared dilated by indistinct
vision into a surface, of which the breadth was determined by the distance of the hair and the magnitude of
the hole, independently of the temporary aperture of the pupil. When the hair approached so near to the
direction of the margin of a candle that the inflected light was sufficiently copious to produce a sensible
effect, the fringes [alternating bands] began to appear; and it was easy to estimate the proportion of their
breadth to the apparent breadth of the hair across the image of which they extended. I found that six of the
brightest red fringes, nearly at equal distance, occupied the whole of that image. The breadth of the aperture
was 66/1000 [of an inch], and its distance from the hair 8/10 of an inch; the diameter of the hair was ... 1/600
[of an inch]. Hence, we have 11/1000 for the deviation of the first red fringe at the distance of 8/10; and as
8/10 / 11/1000 = 1/600 / 11/480000, or 1/43636 [of an inch] for the difference of the routes of the red light
where it was most intense."
https://www.dartmouth.edu/~phys1/labs/lab2.pdf
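The arithmetic in the quoted passage can be checked directly. This short sketch reproduces Young's proportion exactly as stated; the conversion to nanometres at the end is my addition, not Young's:

```python
from fractions import Fraction

# Young's similar-triangles proportion from the quoted passage:
# distance / deviation = hair / x, so x = hair * deviation / distance.
deviation = Fraction(11, 1000)  # deviation of the first red fringe, inches
distance = Fraction(8, 10)      # distance from the aperture to the hair, inches
hair = Fraction(1, 600)         # diameter of the hair, inches

path_difference = hair * deviation / distance
print(path_difference)  # 11/480000 of an inch, i.e. roughly 1/43636

# Converting to metric lands close to the modern wavelength of red light.
wavelength_nm = float(path_difference) * 25.4e6  # 1 inch = 25.4e6 nm
print(round(wavelength_nm), "nm")
```

The exact fraction 11/480000 is what the quote rounds to "1/43636 of an inch".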
A: Your first question is about the history of physics. Perhaps there is a more suitable site to ask it.
It was the Dutch scientist Christiaan Huygens who proposed the wave theory of light in 1678, long before Young (en.wikipedia.org/wiki/Christiaan_Huygens). Although Huygens' theory does not discuss interference, Huygens considered light to behave like a wave on water (page 4 of his treatise, http://www.gutenberg.org/files/14725/14725-h/14725-h.htm). Waves clearly have a frequency, a wavelength, and a speed; nevertheless, he did not discuss these. Note that if he had, acceptance of his work would likely have been negatively impacted by the influence of Newton, as was Leibniz's work on differential calculus. Newton had a competing theory of light as consisting of particles. Wave-particle duality "avant la lettre"!
Once you know that light is a wave - of course now we know that it consists of quanta described by a wave - and that it has a finite velocity (https://en.wikipedia.org/wiki/Ole_Rømer) - then you can surmise that higher frequencies correspond to higher energy for the same amplitude. The full theory was developed and published by Fresnel and Young.
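The later quantum picture makes the short-wavelength/high-energy connection in the original question concrete: each quantum carries energy E = hc/λ (the classical wave picture instead ties energy to amplitude, as noted above). A minimal numeric sketch, with illustrative wavelengths:

```python
# Photon energy E = h*c/lambda: shorter wavelength means more energy per quantum.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

for name, wavelength_nm in [("red", 650.0), ("violet", 400.0), ("gamma", 1e-3)]:
    energy_ev = h * c / (wavelength_nm * 1e-9) / eV
    print(f"{name:6s} {energy_ev:10.3g} eV")
```

Red light comes out near 1.9 eV, violet near 3.1 eV, and a picometre-scale gamma ray around a million times more.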
As to your second question, it is too broad. There are many ways to detect gamma rays as you can find out by web search.
Al's papers' citations and possibly links and excerpts or my synopses
By AlPater, August 13, 2016 in General Health and Longevity
Good diets
AlPater
Do aspirin and other NSAIDs confer a survival benefit in men diagnosed with prostate cancer? A pooled analysis of NIH-AARP and PLCO cohorts.
Zhou CK, Daugherty SE, Liao LM, Freedman ND, Abnet CC, Pfeiffer R, Cook MB.
Cancer Prev Res (Phila). 2017 May 15. pii: canprevres.0033.2016. doi: 10.1158/1940-6207.CAPR-17-0033. [Epub ahead of print]
Prostate cancer is one of the leading causes of cancer death in US men. There is an unmet need to identify modifiable risk factors for prostate cancer survival. Experimental studies have suggested that nonsteroidal anti-inflammatory drugs (NSAIDs) may improve prostate cancer survival through anti-thrombotic and anti-inflammation mechanisms. Results from previous observational studies have been equivocal, and few have assessed whether an etiologically relevant time window of exposure exists. We sampled prostate cancer cases from two large US prospective cohorts-NIH-AARP Diet and Health Study and PLCO Cancer Screening Trial-to investigate whether pre- and post-diagnostic aspirin and non-aspirin NSAID use were associated with prostate cancer-specific and all-cause mortality. Cox proportional hazards regression models estimated hazard ratios (HRs) and 95% confidence intervals (CIs). Study-specific results were meta-analyzed using fixed-effects models. Pre- and post-diagnostic aspirin or non-aspirin NSAID use were not statistically significantly associated with prostate cancer-specific mortality. However, occasional (less than daily) and daily aspirin users five years or more before prostate cancer diagnosis had 18% (HR=0.82; 95%CI=0.75 to 0.90) and 15% (HR=0.85; 95%CI=0.77 to 0.94) reduced all-cause mortality versus nonusers. Similarly, post-diagnostic occasional and daily aspirin use were associated with 17% (HR=0.83; 95%CI=0.72 to 0.95) and 25% (HR=0.75; 95%CI=0.66 to 0.86) reduced all-cause mortality, independent of pre-diagnostic aspirin use. This study suggests that aspirin or non-aspirin NSAIDs are not associated with prostate cancer survival. However, aspirin use both before and after prostate cancer diagnosis was associated with longer overall survival, highlighting the importance of comorbidity prevention among prostate cancer survivors.
Ten-Year Changes in Healthy Eating Attitudes in the SUN Cohort.
Andrade L, Zazpe I, Santiago S, Carlos S, Bes-Rastrollo M, Martínez-González MA.
J Am Coll Nutr. 2017 May 16:1-11. doi: 10.1080/07315724.2016.1278566. [Epub ahead of print]
The objective of this study was to assess the within-subject longitudinal changes in self-perceived healthy eating attitudes after 10 years of follow-up and to identify predictors of long-term changes in a middle-aged adult cohort.
Four thousand five hundred seventy-two participants completed a validated food frequency questionnaire (FFQ) at baseline and after 10 years of follow-up. The FFQ was expanded with a brief 10-item questionnaire about eating attitudes with 2 possible answers: yes or no. A baseline score and a 10-year score were calculated with these 10 items (range from 0 to 10). Participants were categorized into 3 groups according to this score. Linear and logistic regressions were used to examine changes at follow-up and associations between baseline characteristics and improvement in the score.
After 10 years of follow-up, a statistically significant favorable change (p < 0.001) was achieved in all questions about eating attitudes, particularly in these items: "Do you try to eat less sweets and pastries?" (12%), "Do you try to eat less meat?" (11.1%), and "Do you try to reduce your fat intake?" (10%). Being female (odds ratio [OR] = 1.19, 95% confidence interval [CI], 1.02-1.39), being 35-50 or ≥ 50 years old (OR = 1.24, 95% CI, 1.07-1.44 and OR = 1.74, 95% CI, 1.38-2.18, respectively), a high level of physical activity (OR for third vs first tertile = 1.20, 95% CI, 1.02-1.41), and a higher Mediterranean diet score (OR for second and third tertiles = 1.18, 95% CI, 1.01-1.37 and OR = 1.26, 95% CI, 1.04-1.52, respectively) were associated with a higher probability of improving the eating attitudes score, while a low body mass index (BMI; OR = 0.71, 95% CI, 0.51-1.00) and snacking between meals (OR = 0.84, 95% CI, 0.73-0.97) were associated with a lower probability of improving their score.
The eating attitudes of the participants in the Seguimiento Universidad de Navarra (SUN) cohort became more favorable after 10 years of follow-up. Certain sociodemographic or clinical variables may predict a positive change.
Attitudes; behaviors; brief questionnaire; cohort; eating habits; food consumption
Explaining the Obesity Paradox: The Association between Body Composition and Colorectal Cancer Survival (C-SCANS Study).
Caan BJ, Meyerhardt JA, Kroenke CH, Alexeeff S, Xiao J, Weltzien E, Feliciano EC, Castillo AL, Quesenberry CP, Kwan ML, Prado CM.
Cancer Epidemiol Biomarkers Prev. 2017 May 15. doi: 10.1158/1055-9965.EPI-17-0200. [Epub ahead of print]
Background: Body composition may partially explain the U-shaped association between body mass index (BMI) and colorectal cancer survival. Methods: Muscle and adiposity at colorectal cancer diagnosis and survival were examined in a retrospective cohort using Kaplan-Meier curves, multivariable Cox regression, and restricted cubic splines in 3,262 early-stage (I-III) male (50%) and female (50%) patients. Sarcopenia was defined using optimal stratification and sex- and BMI-specific cut points. High adiposity was defined as the highest tertile of sex-specific total adipose tissue (TAT). Primary outcomes were overall mortality and colorectal cancer-specific mortality (CRCsM). Results: Slightly over 42% of patients were sarcopenic. During 5.8 years of follow-up, 788 deaths occurred, including 433 from colorectal cancer. Sarcopenic patients had a 27% [HR, 1.27; 95% confidence interval (CI), 1.09-1.48] higher risk of overall mortality than those who were not sarcopenic. Females with both low muscle and high adiposity had a 64% higher risk of overall mortality (HR, 1.64; 95% CI, 1.05-2.57) than females with adequate muscle and lower adiposity. The lowest risk of overall mortality was seen in patients with a BMI between 25 and <30 kg/m2, a range associated with the greatest number of patients (58.6%) who were not at increased risk of overall mortality due to either low muscle or high adiposity. Conclusions: Sarcopenia is prevalent among patients with non-metastatic colorectal cancer, and should, along with adiposity, be a standard oncological marker. Impact: Our findings suggest a biologic explanation for the obesity paradox in colorectal cancer and refute the notion that the association between overweight and lower mortality is due solely to methodologic biases.
Nutritional determinants of frailty in older adults: A systematic review.
Lorenzo-López L, Maseda A, de Labra C, Regueiro-Folgueira L, Rodríguez-Villamil JL, Millán-Calenti JC.
BMC Geriatr. 2017 May 15;17(1):108. doi: 10.1186/s12877-017-0496-2.
Frailty is a geriatric syndrome that affects multiple domains of human functioning. A variety of problems contributes to the development of this syndrome; poor nutritional status is an important determinant of this condition. The purpose of this systematic review was to examine recent evidence regarding the association between nutritional status and frailty syndrome in older adults.
PubMed, Web of Science, and Scopus electronic databases were searched using specific key words, for observational papers that were published during the period from 2005 to February 2017 and that studied the association or relationship between nutritional status and frailty in older adults. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement was followed to assess the quality of the included articles.
Of the 2042 studies found, nineteen met the inclusion criteria. Of these studies, five provided data on micronutrients and frailty, and reported that frailty syndrome is associated with low intakes of specific micronutrients. Five studies provided data on macronutrients and frailty, and among those studies, four revealed that a higher protein intake was associated with a lower risk of frailty. Three studies examined the relationship between diet quality and frailty, and showed that the quality of the diet is inversely associated with the risk of being frail. Two studies provided data on the antioxidant capacity of the diet and frailty, and reported that a high dietary antioxidant capacity is associated with a lower risk of developing frailty. Finally, seven studies evaluated the relationship between scores on both the Mini Nutritional Assessment (MNA) and the MNA-SF (Short Form) and frailty, and revealed an association between malnutrition and/or the risk of malnutrition and frailty.
This systematic review confirms the importance of both quantitative (energy intake) and qualitative (nutrient quality) factors of nutrition in the development of frailty syndrome in older adults. However, more longitudinal studies on this topic are required to further understand the potential role of nutrition in the prevention, postponement, or even reversion of frailty syndrome.
Frail elderly; Macronutrients; Micronutrients; Nutritional status; Protein
Cardiovascular Mortality Differences-Place Matters.
Mensah GA, Goff DC, Gibbons GH.
JAMA. 2017 May 16;317(19):1955-1957. doi: 10.1001/jama.2017.4168. No abstract available.
http://sci-hub.cc/10.1001/jama.2017.4168
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Trends and Patterns of Geographic Variation in Cardiovascular Mortality Among US Counties, 1980-2014.
Roth GA, Dwyer-Lindgren L, Bertozzi-Villa A, Stubbs RW, Morozoff C, Naghavi M, Mokdad AH, Murray CJL.
JAMA. 2017 May 16;317(19):1976-1992. doi: 10.1001/jama.2017.4150.
IMPORTANCE:
In the United States, regional variation in cardiovascular mortality is well-known but county-level estimates for all major cardiovascular conditions have not been produced.
OBJECTIVE:
To estimate age-standardized mortality rates from cardiovascular diseases by county.
DESIGN AND SETTING:
Deidentified death records from the National Center for Health Statistics and population counts from the US Census Bureau, the National Center for Health Statistics, and the Human Mortality Database from 1980 through 2014 were used. Validated small area estimation models were used to estimate county-level mortality rates from all cardiovascular diseases, including ischemic heart disease, cerebrovascular disease, ischemic stroke, hemorrhagic stroke, hypertensive heart disease, cardiomyopathy, atrial fibrillation and flutter, rheumatic heart disease, aortic aneurysm, peripheral arterial disease, endocarditis, and all other cardiovascular diseases combined.
EXPOSURES:
The 3110 counties of residence.
MAIN OUTCOMES AND MEASURES:
Age-standardized cardiovascular disease mortality rates by county, year, sex, and cause.
RESULTS:
From 1980 to 2014, cardiovascular diseases were the leading cause of death in the United States, although the mortality rate declined from 507.4 deaths per 100 000 persons in 1980 to 252.7 deaths per 100 000 persons in 2014, a relative decline of 50.2% (95% uncertainty interval [UI], 49.5%-50.8%). In 2014, cardiovascular diseases accounted for more than 846 000 deaths (95% UI, 827-865 thousand deaths) and 11.7 million years of life lost (95% UI, 11.6-11.9 million years of life lost). The gap in age-standardized cardiovascular disease mortality rates between counties at the 10th and 90th percentile declined 14.6% from 172.1 deaths per 100 000 persons in 1980 to 147.0 deaths per 100 000 persons in 2014 (posterior probability of decline >99.9%). In 2014, the ratio between counties at the 90th and 10th percentile was 2.0 for ischemic heart disease (119.1 vs 235.7 deaths per 100 000 persons) and 1.7 for cerebrovascular disease (40.3 vs 68.1 deaths per 100 000 persons). For other cardiovascular disease causes, the ratio ranged from 1.4 (aortic aneurysm: 3.5 vs 5.1 deaths per 100 000 persons) to 4.2 (hypertensive heart disease: 4.3 vs 17.9 deaths per 100 000 persons). The largest concentration of counties with high cardiovascular disease mortality extended from southeastern Oklahoma along the Mississippi River Valley to eastern Kentucky. Several cardiovascular disease conditions were clustered substantially outside the South, including atrial fibrillation (Northwest), aortic aneurysm (Midwest), and endocarditis (Mountain West and Alaska). The lowest cardiovascular mortality rates were found in the counties surrounding San Francisco, California, central Colorado, northern Nebraska, central Minnesota, northeastern Virginia, and southern Florida.
CONCLUSIONS AND RELEVANCE:
Substantial differences exist between county ischemic heart disease and stroke mortality rates. Smaller differences exist for diseases of the myocardium, atrial fibrillation, aortic and peripheral arterial disease, rheumatic heart disease, and endocarditis.
Bystander Efforts and 1-Year Outcomes in Out-of-Hospital Cardiac Arrest.
Kragholm K, Wissenberg M, Mortensen RN, Hansen SM, Malta Hansen C, Thorsteinsson K, Rajan S, Lippert F, Folke F, Gislason G, Køber L, Fonager K, Jensen SE, Gerds TA, Torp-Pedersen C, Rasmussen BS.
N Engl J Med. 2017 May 4;376(18):1737-1747. doi: 10.1056/NEJMoa1601891.
The effect of bystander interventions on long-term functional outcomes among survivors of out-of-hospital cardiac arrest has not been extensively studied.
We linked nationwide data on out-of-hospital cardiac arrests in Denmark to functional outcome data and reported the 1-year risks of anoxic brain damage or nursing home admission and of death from any cause among patients who survived to day 30 after an out-of-hospital cardiac arrest. We analyzed risks according to whether bystander cardiopulmonary resuscitation (CPR) or defibrillation was performed and evaluated temporal changes in bystander interventions and outcomes.
Among the 2855 patients who were 30-day survivors of an out-of-hospital cardiac arrest during the period from 2001 through 2012, a total of 10.5% had brain damage or were admitted to a nursing home and 9.7% died during the 1-year follow-up period. During the study period, among the 2084 patients who had cardiac arrests that were not witnessed by emergency medical services (EMS) personnel, the rate of bystander CPR increased from 66.7% to 80.6% (P<0.001), the rate of bystander defibrillation increased from 2.1% to 16.8% (P<0.001), the rate of brain damage or nursing home admission decreased from 10.0% to 7.6% (P<0.001), and all-cause mortality decreased from 18.0% to 7.9% (P=0.002). In adjusted analyses, bystander CPR was associated with a risk of brain damage or nursing home admission that was significantly lower than that associated with no bystander resuscitation (hazard ratio, 0.62; 95% confidence interval [CI], 0.47 to 0.82), as well as a lower risk of death from any cause (hazard ratio, 0.70; 95% CI, 0.50 to 0.99) and a lower risk of the composite end point of brain damage, nursing home admission, or death (hazard ratio, 0.67; 95% CI, 0.53 to 0.84). The risks of these outcomes were even lower among patients who received bystander defibrillation as compared with no bystander resuscitation.
In our study, we found that bystander CPR and defibrillation were associated with risks of brain damage or nursing home admission and of death from any cause that were significantly lower than those associated with no bystander resuscitation.
The US Preventive Services Task Force 2017 Draft Recommendation Statement on Screening for Prostate Cancer: An Invitation to Review and Comment.
Bibbins-Domingo K, Grossman DC, Curry SJ.
Effect of moderate weight loss on ovarian function assessed by salivary progesterone measurements.
Lager C, Ellison PT.
Am J Hum Biol. 1990;2(3):303-312. doi: 10.1002/ajhb.1310020312.
The effects of moderate, voluntary weight loss on ovarian function are studied by monitoring the daily levels of salivary progesterone in 8 dieting women (18 cycles) and 9 age-matched controls (19 cycles). Both groups of women were within normal standards of weight for height, though the dieters were significantly heavier than the controls. Dieters lost weight at an average rate of 1.9 ± 0.3 kg/mo during the study. Dieters' cycles during periods of weight loss (weight loss cycles) have significantly lower peak levels of luteal progesterone (controls 655 ± 46 pmol/L, weight loss 461 ± 67 pmol/L; P < 0.005) and lower average levels of luteal progesterone (controls 287 ± 30 pmol/L, weight loss 214 ± 23 pmol/L; P < 0.005) than do controls. All control cycles were classified as ovulatory by virtue of at least one salivary progesterone reading ≥ 300 pmol/L. Only 62% of the weight loss cycles were classified as ovulatory by this criterion. Where longitudinal weight data are available both the magnitude and duration of progesterone elevation correlates significantly with net weight change during the preceding cycle and show no significant correlation with net weight change during the current cycle. Examination of individual profiles confirms that the most profound suppression of luteal activity usually occurs during post-loss rather than weight loss cycles, even if weight is stable or increasing during the post-loss cycle itself. These results, together with field studies of African horticultural populations, suggest that human ovarian function may be adapted to modulate waiting time to conception in response to trends in energetic balance.
Associations of visceral fat area and physical activity levels with the risk of metabolic syndrome in postmenopausal women.
Zając-Gawlak I, Kłapcińska B, Kroemeke A, Pośpiech D, Pelclová J, Přidalová M.
Biogerontology. 2017 Jun;18(3):357-366. doi: 10.1007/s10522-017-9693-9. Epub 2017 Mar 18.
https://link.springer.com/article/10.1007/s10522-017-9693-9/fulltext.html?wt_mc=alerts.TOCjournals
This study was aimed at the evaluation of relationship between visceral fat area (VFA) and physical activity (PA) with the metabolic syndrome (MetS) risk in the physically active postmenopausal women. A total of 85 attendants of the University of the Third Age (U3A) aged 62.8 ± 5.9 years (median time since menopause 11.8 y), participated in this study. VFA was assessed by bioimpedance method using InBody 720 analyzer. PA was assessed using the ActiGraph GT1 M accelerometer. Fasting levels of serum lipids (TG, HDL), serum glucose, waist circumference (WC) and blood pressure were measured to diagnose MetS according to NCEP-ATP III criteria. In 73 out of 85 participants the VFA exceeded the upper normal level of 100 cm2, however, in almost a half of this group (n = 36) with elevated VFA (139.5 ± 26.1 cm2 on average), only 2 out of 5 criteria for MetS diagnosis were met. Participants were physically active, making on average 10,919 ± 3435 steps/day. The risk of MetS occurrence in women with VFA > 100 cm2 was twelve times higher (OR 12.33; CI 95% [1.5; 99.8]) than in the group with VFA < 100 cm2. The participants from the group with the highest PA level (≥12,500 steps/day) were at almost 4 times lower risk for MetS, than their less active counterparts (OR 3.84; CI 95% [1.27;11.64]). Increased level of VFA is a strong risk factor for the MetS in postmenopausal women, however high level of regular PA above the threshold of 12,500 steps/day may substantially reduce it.
Metabolic syndrome; Physical activity; Visceral fat; Women
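A note on the statistics quoted above: odds ratios such as OR 12.33 (95% CI [1.5; 99.8]) are computed from a 2x2 exposure-by-outcome table, with a Wald confidence interval taken on the log scale. A minimal sketch in Python, using hypothetical counts rather than the study's raw data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) sums the reciprocals of all four cells.
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only (not the study's data):
print(odds_ratio_ci(30, 43, 2, 10))
```

Very wide intervals like [1.5; 99.8] are typical when one cell of the table (here, MetS cases with normal VFA) is very small, since that cell dominates the standard error.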
Protective Effect of Dietary Calcium Intake on Esophageal Cancer Risk: A Meta-Analysis of Observational Studies.
Li Q, Cui L, Tian Y, Cui H, Li L, Dou W, Li H, Wang L.
Nutrients. 2017 May 18;9(5). pii: E510. doi: 10.3390/nu9050510.
http://www.mdpi.com/2072-6643/9/5/510/htm
Although several epidemiological studies have investigated the association between dietary calcium intake and the risk of esophageal cancer, the results are inconsistent. This study aimed to make a comprehensive evaluation of the association between calcium intake and risk of esophageal cancer through a meta-analysis approach. We searched for all relevant articles from inception to April 2017 using PUBMED, EMBASE, and Web of Knowledge. The pooled odds ratio (OR) with the 95% confidence interval (95% CI) for the highest versus the lowest categories of calcium intake was calculated using a Mantel-Haenszel fixed-effect model. In total, 15 articles reporting 17 studies including 3396 esophageal cancer cases and 346,815 controls were selected for the meta-analysis. Comparing the highest vs. the lowest levels of dietary calcium intake, we found that dietary calcium intake was inversely associated with the risk of esophageal cancer (OR = 0.80, 95% CI: 0.71-0.91, I² = 33.6%). The subgroup analysis indicated that the protective effect of dietary calcium intake was observed for esophageal squamous cell cancer but not for esophageal adenocarcinoma, and in the studies conducted in Asia but not in those from Europe and America. In conclusion, our results suggest that higher dietary calcium intake is associated with a lower risk of esophageal cancer, especially esophageal squamous cell cancer, in Asian populations, though more data from prospective cohort studies are needed.
dietary calcium; esophageal cancer; meta-analysis
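The Mantel-Haenszel fixed-effect model used to pool the study-level ORs above has a simple closed form: each table contributes a*d/n to the numerator and b*c/n to the denominator. A minimal sketch in Python with made-up 2x2 tables (a, b, c, d = exposed cases, exposed controls, unexposed cases, unexposed controls), not the meta-analysis data:

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio across 2x2 tables (a, b, c, d)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Two made-up studies, each with an OR below 1 (a protective exposure);
# the pooled estimate lands between the two study ORs:
print(mantel_haenszel_or([(40, 60, 55, 45), (25, 75, 35, 65)]))
```

With a single table the formula reduces to the ordinary odds ratio (a*d)/(b*c), which is a quick sanity check on any implementation.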
Orange juice allied to a reduced-calorie diet results in weight loss and ameliorates obesity-related biomarkers: A randomized controlled trial.
Ribeiro C, Dourado G, Cesar T.
Nutrition. 2017 Jun;38:13-19. doi: 10.1016/j.nut.2016.12.020. Epub 2017 Jan 7.
http://sci-hub.cc/10.1016/j.nut.2016.12.020
Assumptions have linked orange juice (OJ) consumption with weight gain and adverse effects on health due to its sugar content; however, epidemiologic studies have not shown increased risk for overweight or obesity with the consumption of 100% OJ. The aim of this study was to verify whether the combination of a reduced-calorie diet (RCD) and 100% OJ contribute to weight loss, promote changes in glucose and lipid metabolism, and improve diet quality in obese individuals.
A randomized controlled trial enrolled 78 obese patients (age 36 ± 1 y, body mass index [BMI] 33 ± 3 kg/m2) in two groups: individuals in the OJ group followed an RCD that included OJ (500 mL/d), and individuals in the control group followed an RCD without OJ. Body composition, biochemical biomarkers, and dietary intake were analyzed over a 12-wk period.
Both treatments had similar outcomes regarding body weight (-6.5 kg; P = 0.363), BMI (-2.5 kg/m2; P = 0.34), lean mass (-1 kg; P = 0.29), fat mass (-5 kg; P = 0.58), body fat (-3%; P = 0.15), and waist-to-hip ratio (-0.1; P = 0.79). Insulin levels in the OJ group decreased by 18% (P = 0.05), homeostasis model assessment-insulin resistance by 33% (P = 0.04), total cholesterol by 24% (P = 0.004), low-density lipoprotein cholesterol by 24% (P ≤ 0.001), and high-sensitivity C-reactive protein levels by 33% (P = 0.001) compared with the control group. Consumption of energy and nutrients was similar between the two groups, but vitamin C and folate increased by 62% (P ≤ 0.015) and 39% (P = 0.033), respectively, after OJ intervention.
When consumed concomitantly with an RCD, OJ does not inhibit weight loss; it ameliorates insulin sensitivity, the lipid profile, and inflammatory status, and contributes nutritionally to the quality of the diet.
Biochemical biomarkers; Body composition; Obese; Orange juice; Randomized-controlled trial; Reduced-calorie diet
CitrusBr has funded this work. The authors thank the financial support of "Programa de Apoio ao Desenvolvimento Científico da Faculdade de Ciências Farmacêuticas, UNESP (PADC/FCFAr)" and Citrosuco S.A.
The effects of folic acid and pyridoxine supplementation on characteristics of migraine attacks in migraine patients with aura: A double-blind, randomized placebo-controlled, clinical trial.
Askari G, Nasiri M, Mozaffari-Khosravi H, Rezaie M, Bagheri-Bidakhavidi M, Sadeghi O.
Nutrition. 2017 Jun;38:74-79. doi: 10.1016/j.nut.2017.01.007. Epub 2017 Feb 2.
The aim of this study was to assess the effects of folic acid alone and in combination with pyridoxine on characteristics of migraine attacks in adult migraine patients with aura.
This double-blind, randomized placebo-controlled, clinical trial was conducted on 95 migraine patients with aura (age range 18-65 y) in Isfahan, Islamic Republic of Iran, in 2014. Patients were randomly allocated to receive folic acid (5 mg/d) plus pyridoxine (80 mg/d) or folic acid alone (5 mg/d) or placebo (lactose) for 3 mo. Characteristics of migraine attacks including headache severity, attacks frequency, duration, and headache diary results (HDRs) were obtained for each patient at baseline and at the end of the study.
Folic acid plus pyridoxine intake resulted in a significant decrease compared with placebo in headache severity (-2.71 ± 0.08 versus -2.19 ± 0.05; P < 0.001), attack frequency (-3.35 ± 0.09 versus -2.73 ± 0.05; P < 0.001), duration (-7.25 ± 0.17 versus -6.5 ± 0.07; P < 0.001), and HDR (-74.15 ± 0.2 versus -72.73 ± 0.1; P < 0.001). Additionally, the reduction in these characteristics of migraine attacks in the folic acid plus pyridoxine group was significant compared with the group given folic acid alone (P < 0.001). However, these beneficial effects of the combined supplement became nonsignificant for attack duration compared with the folic acid-only and placebo groups after controlling for confounders. Folic acid intake without pyridoxine did not lead to a significant decrease in characteristics of migraine attacks compared with placebo group.
Supplementation of folic acid with pyridoxine could decrease the characteristics of migraine attacks including headache severity, attack frequency, and HDR; however, further studies are needed to shed light on the findings of the present study.
Folic acid; Headache; Migraine; Pyridoxine
Ketogenic diet in migraine: rationale, findings and perspectives.
Barbanti P, Fofi L, Aurilia C, Egeo G, Caprio M.
Neurol Sci. 2017 May;38(Suppl 1):111-115. doi: 10.1007/s10072-017-2889-6.
Ketogenic diet (KD) is an established treatment for refractory pediatric epilepsy and a promising therapy for diverse neurological diseases. Clinical data on KD in migraine-obtained from 150 patients investigated in case reports and prospective studies-suggest that KD may be a rapid onset effective prophylaxis for episodic and chronic migraine. KD would contribute to restore brain excitability and metabolism and to counteract neuroinflammation in migraine, although its precise mechanism is still unclear. Randomized controlled studies are needed to confirm the usefulness of KD in migraine and to investigate its optimal duration, repeatability, feasibility in normal weight subjects, efficacy in pediatric population and association to conventional migraine prophylaxis.
Disability; Ketogenic diet; Migraine; Prevention; Treatment
Association Between Teaching Status and Mortality in US Hospitals
Laura G. Burke, MD, MPH; Austin B. Frakt, PhD; Dhruv Khullar, MD, MPP; et al.
JAMA. 2017;317(20):2105-2113. doi:10.1001/jama.2017.5702
This study uses national Medicare data to compare 30-day mortality among patients hospitalized or undergoing surgical procedures in teaching vs nonteaching hospitals between 2012 and 2014.
Question Is there a difference in mortality rates at US teaching hospitals compared with other hospitals?
Findings In an observational study of approximately 21 million hospitalizations of Medicare beneficiaries, adjusted 30-day mortality rates were significantly lower at 250 major teaching hospitals compared with 894 minor teaching and 3339 nonteaching hospitals overall (8.3% vs 9.2% and 9.5%) as well as for several individual common medical and surgical conditions.
Meaning Major teaching hospital status was associated with lower mortality rates for common conditions.
Importance Few studies have analyzed contemporary data on outcomes at US teaching hospitals vs nonteaching hospitals.
Objective To examine risk-adjusted outcomes for patients admitted to teaching vs nonteaching hospitals across a broad range of medical and surgical conditions.
Design, Setting, and Participants Use of national Medicare data to compare mortality rates in US teaching and nonteaching hospitals for all hospitalizations and for common medical and surgical conditions among Medicare beneficiaries 65 years and older.
Exposures Hospital teaching status: major teaching hospitals (members of the Council of Teaching Hospitals), minor teaching hospitals (other hospitals with medical school affiliation), and nonteaching hospitals (remaining hospitals).
Main Outcomes and Measures Primary outcome was 30-day mortality rate for all hospitalizations and for 15 common medical and 6 surgical conditions. Secondary outcomes included 30-day mortality stratified by hospital size and 7-day mortality and 90-day mortality for all hospitalizations as well as for individual medical and surgical conditions.
Results The sample consisted of 21 451 824 total hospitalizations at 4483 hospitals, of which 250 (5.6%) were major teaching, 894 (19.9%) were minor teaching, and 3339 (74.3%) were nonteaching hospitals. Unadjusted 30-day mortality was 8.1% at major teaching hospitals, 9.2% at minor teaching hospitals, and 9.6% at nonteaching hospitals, with a 1.5% (95% CI, 1.3%-1.7%; P < .001) mortality difference between major teaching hospitals and nonteaching hospitals. After adjusting for patient and hospital characteristics, the same pattern persisted (8.3% mortality at major teaching vs 9.2% at minor teaching and 9.5% at nonteaching), but the difference in mortality between major and nonteaching hospitals was smaller (1.2% [95% CI, 1.0%-1.4%]; P < .001). After stratifying by hospital size, 187 large (≥400 beds) major teaching hospitals had lower adjusted overall 30-day mortality relative to 76 large nonteaching hospitals (8.1% vs 9.4%; 1.2% difference [95% CI, 0.9%-1.5%]; P < .001). This same pattern of lower overall 30-day mortality at teaching hospitals was observed for medium-sized (100-399 beds) hospitals (8.6% vs 9.3% and 9.4%; 0.8% difference between 61 major and 1207 nonteaching hospitals [95% CI, 0.4%-1.3%]; P = .003). Among small (≤99 beds) hospitals, 187 minor teaching hospitals had lower overall 30-day mortality relative to 2056 nonteaching hospitals (9.5% vs 9.9%; 0.4% difference [95% CI, 0.1%-0.7%]; P = .01).
Conclusions and Relevance Among hospitalizations for US Medicare beneficiaries, major teaching hospital status was associated with lower mortality rates for common conditions compared with nonteaching hospitals. Further study is needed to understand the reasons for these differences.
Clinical Trials Update
May 23/30, 2017
Vitamin E and Selenium Fail to Prevent Dementia in Men
Anita Slomski, MA
JAMA. 2017;317(20):2054. doi:10.1001/jama.2017.6078
Antioxidant supplementation with vitamin E and selenium, taken alone or in combination, was not associated with a decreased incidence of dementia in asymptomatic older men, according to a study published by JAMA Neurology. Oxidative stress has been implicated as an important mechanism in Alzheimer disease, spurring interest in the use of antioxidants to modify risk of cognitive decline and dementia.
The Prevention of Alzheimer Disease by Vitamin E and Selenium (PREADViSE) trial began as ancillary to a randomized controlled trial for prostate cancer prevention, which ended prematurely due to lack of efficacy. PREADViSE initially enrolled 7540 older men who were randomized to receive selenium (200 µg daily), vitamin E (400 IU daily), vitamin E and selenium, or placebo for an average of 5.4 years. A subset of 3786 men were observed and evaluated with at least 1 memory screen for an additional 6 years without taking the supplements in a cohort study. The incidence of dementia (4.4%) did not differ among the four study groups at the end of the observational period.
The authors cautioned that the study had significant limitations, such as the loss of about half of the participants to long-term follow-up during the transition from a randomized clinical trial to a cohort study, and the refusal of many participants to see clinicians for definitive testing for dementia. The relatively young age (mean 67.5 years) and the high level of education of participants at baseline likely contributed to the low incidence of dementia, which may have made it difficult to detect any positive effect of the interventions.
Association of Antioxidant Supplement Use and Dementia in the Prevention of Alzheimer's Disease by Vitamin E and Selenium Trial (PREADViSE).
Kryscio RJ, Abner EL, Caban-Holt A, Lovell M, Goodman P, Darke AK, Yee M, Crowley J, Schmitt FA.
JAMA Neurol. 2017 May 1;74(5):567-573. doi: 10.1001/jamaneurol.2016.5778.
http://sci-hub.cc/10.1001/jamaneurol.2016.5778
Oxidative stress is an established dementia pathway, but it is unknown if the use of antioxidant supplements can prevent dementia.
To determine if antioxidant supplements (vitamin E or selenium) used alone or in combination can prevent dementia in asymptomatic older men.
DESIGN, SETTING, AND PARTICIPANTS:
The Prevention of Alzheimer's Disease by Vitamin E and Selenium (PREADViSE) trial began as a double-blind randomized clinical trial in May 2002, which transformed into a cohort study from September 2009 to May 2015. The PREADViSE trial was ancillary to the Selenium and Vitamin E Cancer Prevention Trial (SELECT), a randomized clinical trial of the same antioxidant supplements for preventing prostate cancer, which closed in 2009 owing to findings from a futility analysis. The PREADViSE trial recruited 7540 men, of whom 3786 continued into the cohort study. Participants were at least 60 years old at study entry and were enrolled at 130 SELECT sites, and Cox proportional hazards models were used in a modified intent-to-treat analysis to compare hazard rates among the study arms.
INTERVENTIONS:
Participants were randomized to vitamin E, selenium, vitamin E and selenium, or placebo. While taking study supplements, enrolled men visited their SELECT site and were evaluated for dementia using a 2-stage screen. During the cohort study, men were contacted by telephone and assessed using an enhanced 2-stage cognitive screen. In both phases, men were encouraged to visit their physician if the screen results indicated possible cognitive impairment.
Dementia case ascertainment relied on a consensus review of the cognitive screens and medical records for men with suspected dementia who visited their physician for an evaluation or by review of all available information, including a functional assessment screen.
The mean (SD) baseline age of the 7540 participants was 67.5 (5.3) years, with 3936 (52.2%) reporting a college education or better, 754 (10.0%) reporting black race, and 505 (6.7%) reporting Hispanic ethnicity. Dementia incidence (325 of 7338 men [4.4%]) was not different among the 4 study arms. A Cox model, which adjusted incidence for participant demographic information and baseline self-reported comorbidities, yielded hazard ratios of 0.88 (95% CI, 0.64-1.20) for vitamin E, 0.83 (0.60-1.13) for selenium, and 1.00 (0.75-1.35) for the combination compared with placebo.
Neither supplement prevented dementia. To our knowledge, this is the first study to investigate the long-term association of antioxidant supplement use and dementia incidence among asymptomatic men.
Health Agencies Update
Increase in Diabetes Cases Among Young People
Jennifer Abbasi
A new report finds diabetes is increasing among young people.
The incidence of both type 1 and type 2 diabetes in US youth younger than 20 years increased between 2002 and 2012, according to a new analysis from the ongoing SEARCH for Diabetes in Youth study, which is funded by the Centers for Disease Control and Prevention and the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) at the National Institutes of Health.
Based on data from patients at 5 clinical centers in California, Colorado, Ohio, South Carolina, and Washington, the SEARCH investigators reported that during the study period, the relative annual increase in the incidence of type 1 diabetes among youth was 1.8% and that of type 2 diabetes was 4.8%, after adjusting for age, sex, and race or ethnic group.
The estimated annual incidence of type 1 diabetes in youths 0 to 19 years old increased from 15 900 cases in the 2002-2003 period to 17 900 cases in the 2011-2012 period. At the same time, the estimated annual incidence of type 2 diabetes in 10- to 19-year-olds increased from 3800 to 5300 cases.
Hispanic youth had the greatest increase in annual incidence of type 1 diabetes (4.2%), while the rate of type 2 diabetes increased most in Native Americans (8.9%) and Asian Americans/Pacific Islanders (8.5%). For both types of diabetes, whites had the smallest increases in annual incidence—1.2% for type 1 diabetes and 0.6% for type 2 diabetes—over the study period. New cases of type 2 diabetes increased more in females (6.2%) than in males (3.7%).
"Physicians need to especially think about type 2 diabetes, which can be asymptomatic," Barbara Linder, MD, PhD, senior advisor for childhood diabetes research at the NIDDK, told JAMA. She encouraged screening young people who are obese and have a family history of type 2 diabetes and emphasized the importance of screening at-risk youth from racial or ethnic minority groups. "It is important to make the diagnosis of type 2 diabetes as early as possible, and to promote lifestyle changes and good glycemic control, as these can prevent long-term vascular complications from occurring," she said.
Cancer Death Rates Decrease in Men, Women, and Children
Overall death rates from cancer are decreasing in the United States, according to the "Annual Report to the Nation on the Status of Cancer" issued by the Centers for Disease Control and Prevention, the National Cancer Institute, the American Cancer Society, and the North American Association of Central Cancer Registries.
The 2017 report found that cancer mortality decreased 1.8% per year in men, 1.4% per year in women, and 1.6% per year in children between 2010 and 2014. During this time frame, death rates decreased for lung, colorectal, female breast, and prostate cancers, among others, but increased for liver cancer in men and women, pancreas and brain cancers in men, and uterine cancer.
The report also included a special section on survival. For 18 of 20 types of cancer, patients diagnosed in 2006-2012 had increased 5-year survival rates compared with those diagnosed in 1975-1977, with the greatest increases in absolute survival rates reported for prostate cancer, leukemia, non-Hodgkin lymphoma, myeloma, and kidney cancer. Survival rates did not increase for cancers of the cervix and the uterus. Cancers of the brain, stomach, esophagus, lung, liver, and pancreas diagnosed in 2006-2012 had the lowest 5-year relative survival rates.
Racial differences in survival also were reported: Compared with white people, the adjusted relative risk of death for all cancers was 51% higher among American Indians and Alaska Natives and 33% higher in black people.
Overall, new cancer cases decreased 2.3% per year for men between 2009 and 2013 but were stable for women. However, the report noted a 0.4% annual increase during this 5-year time period in the incidence of breast cancer, the most common cancer among women.
Annual Report to the Nation on the Status of Cancer, 1975-2014, Featuring Survival.
Jemal A, Ward EM, Johnson CJ, Cronin KA, Ma J, Ryerson B, Mariotto A, Lake AJ, Wilson R, Sherman RL, Anderson RN, Henley SJ, Kohler BA, Penberthy L, Feuer EJ, Weir HK.
J Natl Cancer Inst. 2017 Sep 1;109(9). doi: 10.1093/jnci/djx030.
PMID: 28376154 Free PMC Article
The American Cancer Society (ACS), the Centers for Disease Control and Prevention (CDC), the National Cancer Institute (NCI), and the North American Association of Central Cancer Registries (NAACCR) collaborate to provide annual updates on cancer occurrence and trends in the United States. This Annual Report highlights survival rates. Data were from the CDC- and NCI-funded population-based cancer registry programs and compiled by NAACCR. Trends in age-standardized incidence and death rates for all cancers combined and for the leading cancer types by sex were estimated by joinpoint analysis and expressed as annual percent change. We used relative survival ratios and the adjusted relative risk of death after a diagnosis of cancer (hazard ratios [HRs]) from a Cox regression model to examine changes or differences in survival over time and by sociodemographic factors.
Overall cancer death rates from 2010 to 2014 decreased by 1.8% (95% confidence interval [CI] = -1.8 to -1.8) per year in men, by 1.4% (95% CI = -1.4 to -1.3) per year in women, and by 1.6% (95% CI = -2.0 to -1.3) per year in children. Death rates decreased for 11 of the 16 most common cancer types in men and for 13 of the 18 most common cancer types in women, including lung, colorectal, female breast, and prostate, whereas death rates increased for liver (men and women), pancreas (men), brain (men), and uterine cancers. In contrast, overall incidence rates from 2009 to 2013 decreased by 2.3% (95% CI = -3.1 to -1.4) per year in men but stabilized in women. For several but not all cancer types, survival statistically significantly improved over time for both early and late-stage diseases. Between 1975-1977 and 2006-2012, for example, five-year relative survival for distant-stage disease statistically significantly increased from 18.7% (95% CI = 16.9% to 20.6%) to 33.6% (95% CI = 32.2% to 35.0%) for female breast cancer but not for liver cancer (from 1.1%, 95% CI = 0.3% to 2.9%, to 2.3%, 95% CI = 1.6% to 3.2%). Survival varied by race/ethnicity and state. For example, the adjusted relative risk of death for all cancers combined was 33% (HR = 1.33, 95% CI = 1.32 to 1.34) higher in non-Hispanic blacks and 51% (HR = 1.51, 95% CI = 1.46 to 1.56) higher in non-Hispanic American Indians/Alaska Natives compared with non-Hispanic whites.
Cancer death rates continue to decrease in the United States. However, progress in reducing death rates and improving survival is limited for several cancer types, underscoring the need for intensified efforts to discover new strategies for prevention, early detection, and treatment and to apply proven preventive measures broadly and equitably.
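The "annual percent change" figures reported throughout this report come from log-linear trend models (joinpoint analysis fits such models piecewise over calendar time). The basic calculation can be sketched in Python; the rate series below is invented to follow an exact -1.8%/year trend, not taken from the registry data:

```python
import math

def annual_percent_change(years, rates):
    """APC from a log-linear trend log(rate) = a + b*year: APC = 100*(e^b - 1)."""
    n = len(years)
    log_rates = [math.log(r) for r in rates]
    xbar = sum(years) / n
    ybar = sum(log_rates) / n
    # Ordinary least-squares slope of log(rate) on calendar year.
    b = sum((x - xbar) * (y - ybar) for x, y in zip(years, log_rates)) \
        / sum((x - xbar) ** 2 for x in years)
    return 100 * (math.exp(b) - 1)

# A rate series declining by exactly 1.8% per year recovers APC = -1.8:
rates = [200 * 0.982 ** t for t in range(5)]
print(round(annual_percent_change(list(range(5)), rates), 1))  # -1.8
```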
A cluster-randomized trial to reduce caesarean delivery rates in Quebec: cost-effectiveness analysis.
Johri M, Ng ESW, Bermudez-Tamayo C, Hoch JS, Ducruet T, Chaillet N.
BMC Med. 2017 May 22;15(1):96. doi: 10.1186/s12916-017-0859-8.
Widespread increases in caesarean section (CS) rates have sparked concerns about risks to mothers and infants and rising healthcare costs. A multicentre, two-arm, cluster-randomized trial in Quebec, Canada assessed whether an audit and feedback intervention targeting health professionals would reduce CS rates for pregnant women compared to usual care, and concluded that it reduced CS rates without adverse effects on maternal or neonatal health. The effect was statistically significant but clinically small. We assessed cost-effectiveness to inform scale-up decisions.
A prospective economic evaluation was undertaken using individual patient data from the Quality of Care, Obstetrics Risk Management, and Mode of Delivery (QUARISMA) trial (April 2008 to October 2011). Analyses took a healthcare payer perspective. The time horizon captured hospital-based costs and clinical events for mothers and neonates from labour onset to 3 months postpartum. Resource use was identified and measured from patient charts and valued using standardized government sources. We estimated the changes in CS rates and costs for the intervention group (versus controls) between the baseline and post-intervention periods. We examined heterogeneity between clinical subgroups of high-risk versus low-risk pregnancies and estimated the joint uncertainty in cost-effectiveness over 20,000 trial simulations. We decomposed costs to identify drivers of change.
The intervention group experienced per-patient reductions of 0.005 CS (95% confidence interval (CI): -0.015 to 0.004, P = 0.09) and $180 (95% CI: -$277 to -$83, P < 0.001). Women with low-risk pregnancies experienced statistically significant reductions in CS rates and costs; changes for the high-risk subgroup were not significant. The intervention was "dominant" (effective in reducing CS and less costly than usual care) in 86.08% of simulations. It reduced costs in 99.99% of simulations. Cost reductions were driven by lower rates of neonatal complications in the intervention group (-$190, 95% CI: -$255 to -$125, P < 0.001). Given 88,000 annual provincial births, a similar intervention could save $15.8 million (range: $7.3 to $24.4 million) in Quebec annually.
From a healthcare payer perspective, a multifaceted intervention involving audits and feedback resulted in a small reduction in caesarean deliveries and important cost savings. Cost reductions are consistent with improved quality of care in intervention group hospitals.
Adolescent; Adult; Caesarean section/utilization; Cost-benefit analysis; Female; Guideline adherence; Infant; Medical audit; Multilevel analysis; Newborn; Pregnancy outcomes; Randomized controlled trial
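Statements like "dominant in 86.08% of simulations" come from probabilistic sensitivity analysis: the cost and effect differences are re-drawn many times and the share of draws in which the intervention is both cheaper and more effective is counted. A rough sketch in Python, approximating each difference with a normal distribution whose SE is back-derived from the reported 95% CI (half-width / 1.96); this is an illustration, not the trial's actual bootstrap:

```python
import random

random.seed(1)

def prob_dominant(n_sims, d_cost_mean, d_cost_se, d_cs_mean, d_cs_se):
    """Share of simulated draws in which the intervention both saves money
    (cost difference < 0) and reduces caesareans (CS difference < 0)."""
    wins = 0
    for _ in range(n_sims):
        d_cost = random.gauss(d_cost_mean, d_cost_se)
        d_cs = random.gauss(d_cs_mean, d_cs_se)
        if d_cost < 0 and d_cs < 0:
            wins += 1
    return wins / n_sims

# Point estimates from the abstract; SEs approximated from the CIs:
print(prob_dominant(20_000, -180, 49.5, -0.005, 0.00485))
```

With these inputs the cost difference is almost always negative, so the dominance probability is driven mainly by the chance that the CS reduction is real, which lands close to the reported 86%.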
Risk factors for cervical intraepithelial neoplasia and cervical cancer in Chinese women: large study in Jiexiu, Shanxi Province, China.
Wang Z, Wang J, Fan J, Zhao W, Yang X, Wu L, Li D, Ding L, Wang W, Xu J, Stram M, Zhao C, Hao M.
J Cancer. 2017 Mar 12;8(6):924-932. doi: 10.7150/jca.17416. eCollection 2017.
We aimed to investigate the risk factors for cervical intraepithelial neoplasia (CIN) in Jiexiu, Shanxi Province, China. Twenty thousand eligible married women (age: 18-65 years) were administered a questionnaire on potential risk factors for CIN and underwent a liquid-based Pap test. All women with abnormal cytological results underwent colposcopy with biopsy. Based on the biopsy pathology results, women were then assigned to either the study group (with CIN) or the control group (negative histological results and volunteered to participate in the follow-up study). The women in both the study group and the control group underwent vaginal microflora detection and a dietary survey. The potential risk factors were analyzed using ordinal logistic regression. Among the 20,000 women, 1,438 (7.19%) had cytologic abnormalities and 410 (2.05%) were diagnosed histologically with CIN lesions, including 317 (1.58%) with CIN1, 93 (0.50%) with CIN2/3, and 11 (55/100,000) with squamous cell carcinoma (SCC). The average daily dietary folate intake was significantly lower in the study group (344.61±153.07μg) than in the control group (371.50±166.58μg; P<0.001). Multivariate analysis demonstrated that age of 56-65 years, farming as the husband's occupation, not washing the vulva after sexual intercourse, and low self-reported folate intake were positively associated with CIN development and might contribute to the increased CIN incidence in this population. These findings may help develop strategies to reduce the risk of cervical cancer in China.
CIN; China; cervical cancer; folate; risk factors
Effect of citrus-based products on urine profile: A systematic review and meta-analysis.
Rahman F, Birowo P, Widyahening IS, Rasyid N.
F1000Res. 2017 Mar 6;6:220. doi: 10.12688/f1000research.10976.1. eCollection 2017.
Background. Urolithiasis is a disease with a high recurrence rate, 30-50% within 5 years. The aim of the present study was to examine the effects of citrus-based products on the urine profile in healthy persons and people with urolithiasis, compared to a control diet and potassium citrate. Methods. A systematic review was performed, which included interventional, prospective observational and retrospective studies comparing citrus-based therapy with standard diet therapy, mineral water, or potassium citrate. A literature search was conducted using PUBMED, COCHRANE, and Google Scholar with "citrus or lemonade or orange or grapefruit or lime or juice" and "urolithiasis" as search terms. For statistical analysis, a fixed-effects model was used when p > 0.05, and a random-effects model when p < 0.05. Results. In total, 135 citations were found through database searching, with 10 studies consistent with our selection criteria. However, only 8 studies were included in the quantitative analysis, due to data availability. The present study showed a greater increase in urine pH for citrus-based products (mean difference, 0.16; 95% CI 0.01-0.32) and urinary citrate (mean difference, 124.49; 95% CI 80.24-168.74) compared with the control group. However, no differences were found in urine volume, urinary calcium, urinary oxalate, or urinary uric acid. From subgroup analysis, we found that citrus-based products consistently increased urinary citrate levels more than controls in both healthy and urolithiasis populations. Furthermore, there was a lower urinary calcium level among people with urolithiasis. Conclusions. Citrus-based products increase the urinary citrate level significantly more than controls. These results should encourage further research to explore citrus-based products as a urolithiasis treatment.
Citrus; citrate; potassium citrate; urine profile; urolithiasis
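The pooled mean differences above (e.g., 0.16 for urine pH) are inverse-variance weighted averages of the individual study estimates under the fixed-effects model. A minimal sketch in Python; the (mean difference, standard error) pairs below are invented for illustration, not taken from the review:

```python
def pooled_mean_difference(estimates):
    """Fixed-effect (inverse-variance) pooling of (mean_difference, SE) pairs."""
    weights = [1 / se ** 2 for _, se in estimates]
    pooled = sum(w * md for (md, _), w in zip(estimates, weights)) / sum(weights)
    se = (1 / sum(weights)) ** 0.5
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Invented study estimates; the more precise study (smaller SE) pulls the
# pooled value toward its own estimate:
print(pooled_mean_difference([(0.20, 0.08), (0.10, 0.06)]))
```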
The effects of lutein on respiratory health across the life course: A systematic review.
Melo van Lent D, Leermakers ETM, Darweesh SKL, Moreira EM, Tielemans MJ, Muka T, Vitezova A, Chowdhury R, Bramer WM, Brusselle GG, Felix JF, Kiefte-de Jong JC, Franco OH.
Clin Nutr ESPEN. 2016 Jun;13:e1-e7. doi: 10.1016/j.clnesp.2016.02.096. Epub 2016 Mar 26. Review.
Lutein, a fat-soluble carotenoid present in green leafy vegetables and eggs, has strong antioxidant properties and could therefore be important for respiratory health.
We systematically reviewed the literature for articles that evaluated associations of lutein (intake, supplements or blood levels) with respiratory outcomes, published in Medline, Embase, Cochrane Central, PubMed, Web of Science and Google Scholar, up to August 2014.
We identified one Randomized Control Trial (RCT), two longitudinal, four prospective and six cross-sectional studies. The individual studies obtained a Quality Score ranging between 3 and 9. Six studies were performed in children, which examined bronchopulmonary dysplasia (BPD), asthma and wheezing. In adults, 7 studies investigated asthma, respiratory function and respiratory mortality. The RCT found a borderline significant effect of lutein/zeaxanthin supplementation in neonates on the risk of BPD (OR 0.43, 95% CI 0.15-1.17). No association was found between lutein intake or levels and respiratory outcomes in children. A case-control study in adults showed lower lutein levels in asthma cases. Three studies with a prospective or longitudinal design in adults found a small but significant positive association between lutein intake or levels and respiratory function; no association was found in the other two studies. In relation to respiratory mortality, one longitudinal study showed that higher lutein blood levels were associated with decreased mortality (HR 0.77, 95% CI 0.60-0.99, per SD increase in lutein).
The published literature suggests a possible positive association between lutein and respiratory health. However, the literature is scarce and most studies are of observational nature.
Antioxidant; Asthma; Carotenoid; Life course; Lung function; Lutein; Systematic review
Adherence to Mediterranean diet has a mediating effect on inflammation as regards cardiovascular disease risk: The 10-year (2002-12) follow-up of ATTICA study.
Georgousopoulou EN, Panagiotakos DB, Pitsavos C, Kalogeropoulou A, Ntertimani M, Pitaraki E, Chrysohoou C, Skoumas I, Tousoulis D, Stefanadis C.
Clin Nutr ESPEN. 2016 Jun;13:e67. doi: 10.1016/j.clnesp.2016.03.051. Epub 2016 May 20. No abstract available.
http://sci-hub.cc/10.1016/j.clnesp.2016.03.051
Introduction: Mediterranean diet has been associated with lower all-cause and cardiovascular disease (CVD) morbidity and mortality, but the clinical pathway has not been well understood and appreciated.
Aim: The aim of this work was to explore the path between adherence to a Mediterranean-type diet, lifestyle behaviors, clinical status and 10-year incidence of CVD.
Materials and methods: The ATTICA study was carried out in the Athens area during 2001-2002 and included 3042 participants free of CVD at baseline (49.8% men, aged 18-89). Adherence to Mediterranean diet was assessed using the MedDietScore (range 0-55). During 2011-2012, 2583 out of the 3042 baseline participants attended the 10-year follow-up of the ATTICA study (15% lost to follow-up).
Results: Adherence to Mediterranean diet decreased CVD risk (Relative Risk (RR) per 1/55 unit = 0.96, 95% CI: 0.93-1.00), independently of various socio-demographic, lifestyle and clinical factors. Path analysis revealed that adherence to Mediterranean diet decreases C-reactive protein levels and interleukin-6 levels, but also has an independent protective role on CVD risk per se (total effect of the MedDietScore on CVD = -0.003, 95% CI: -0.005 to 0.000).
Conclusions: Adherence to Mediterranean diet confers a considerable reduction in CVD risk, independently of various factors. Therefore, even subjects with unhealthy lifestyle behaviors may benefit from adherence to this diet, suggesting another dimension to prevention strategies.
The effects of Mediterranean Diet on cognitive function and dementia: Systematic review of the evidence.
Petersson S, Philippou E.
Introduction: There is a growing body of evidence suggesting that adherence to the Mediterranean Diet (MD) may protect against cognitive decline and dementia, although the evidence is still inconsistent.
Aim: The aim of this systematic review is to update the current knowledge on the effects of MD on cognitive function and/or cognitive impairment (CI) and/or Alzheimer's disease (AD) and/or all-type dementia.
Materials and methods: Five databases were searched: PubMed, Embase, CINAHL, CENTRAL and PsycINFO (1806 to 25th May, 2015), using pre-specified criteria. Human studies, published in English, without any restriction on study type, population assessed, intervention period, follow-up time, or publication date, examining the association between adherence to the MD and cognitive function or dementia symptoms (as measured by cognitive function tests) were included. Only primary publication types were included.
Results: 32 studies, including 5 Randomized Controlled Trials (RCTs) and 27 observational studies, met the inclusion criteria. The majority of studies showed that MD improved cognitive function and/or decreased risk of CI and/or decreased risk of dementia/AD. Three studies found no correlation between MD and AD, 3 found no association between MD and CI, and 5 found no association between MD and cognitive function. There was large heterogeneity, and studies differed with regard to quality.
Conclusion: Overall, the existing evidence, stemming mostly from epidemiological studies, suggests that MD improves cognitive function and delays the onset of dementia. However, more RCTs are required to establish a causal relationship.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Mediterranean Diet, Cognitive Function, and Dementia: A Systematic Review of the Evidence.
Petersson SD, Philippou E.
Adv Nutr. 2016 Sep 15;7(5):889-904. doi: 10.3945/an.116.012138. Print 2016 Sep. Review.
http://sci-hub.cc/10.3945/an.116.012138
A growing body of evidence suggests that adherence to the Mediterranean diet (MD) may protect against cognitive decline and dementia. Many epidemiologic studies and several randomized controlled trials (RCTs) have found positive effects of the MD on cognitive function, but findings remain inconsistent. The aim of this systematic review was to provide an update on the current knowledge of the effects of the MD on cognitive function, cognitive impairment, Alzheimer disease (AD), and all-type dementia. Five databases were searched-PubMed, Embase, CINAHL, CENTRAL, and PsycINFO (1806 to 25 May 2015)-with the use of prespecified criteria. Human studies that were published in English without any restriction on study type, population assessed, intervention period, follow-up time, or publication date, and that examined the association between adherence to the MD and cognitive function or dementia symptoms (as measured by cognitive function tests), were included. Only primary publication types were included. Thirty-two studies from 25 unique cohorts, including 5 RCTs and 27 observational studies, met the inclusion criteria. The majority of studies showed that the MD was associated with improved cognitive function, a decreased risk of cognitive impairment or decreased risk of dementia, or AD. Three studies found no correlation between the MD and AD, 3 further studies found no association between the MD and cognitive impairment, and 5 studies found no association between the MD and cognitive function. There was large heterogeneity, and studies differed with regard to quality. Based on the findings and the limitations in study design, we conclude that adherence to the MD is associated with better cognitive performance. However, it should be noted that the majority of findings come from epidemiologic studies that provide evidence for a correlation between the MD and cognition but not for a cause-and-effect relation. 
More controlled trials are required to establish a causal relation.
Alzheimer disease; Mediterranean diet; cognitive function; cognitive impairment; dementia; dietary patterns; systematic review
Health and Functional Status of Adults Aged 90 Years in the United States.
Odden MC, Koh WJH, Arnold AM, Psaty BM, Newman AB.
JAMA Intern Med. 2017 May 1;177(5):732-734. doi: 10.1001/jamainternmed.2017.0242. No abstract available.
Leucine-nicotinic acid synergy stimulates AMPK/Sirt1 signaling and regulates lipid metabolism and lifespan in Caenorhabditis elegans, and hyperlipidemia and atherosclerosis in mice.
Bruckbauer A, Banerjee J, Cao Q, Cui X, Jing J, Zha L, Li F, Xue B, Shi H, Zemel MB.
Am J Cardiovasc Dis. 2017 Apr 15;7(2):33-47. eCollection 2017.
BACKGROUND/AIMS:
Nicotinic acid (NA), a lipid-lowering drug, serves as a source of NAD+, the cofactor for Sirt1. Leucine (Leu) stimulates the AMPK/Sirt1 axis and amplifies the effects of other AMPK/Sirt1 activating compounds. Therefore, we tested the interactive effects of leucine and low dose NA on AMPK/Sirt1 signaling and downstream effects of lipid metabolism in cell culture, C. elegans and mice.
METHODS:
LDL-receptor knockout mice were fed an atherogenic Western diet supplemented with leucine (24 g/kg diet) and sub-therapeutic NA combinations (50 mg/kg diet and 250 mg/kg diet) or low therapeutic NA (1000 mg/kg diet) for 8 weeks to evaluate markers of hyperlipidemia and atherosclerosis.
RESULTS:
NA-Leu increased P-AMPK and Sirt1 in adipocytes and myotubes. In C. elegans, NA-Leu increased P-AMPK and DAF-16 (FOXO), reduced lipid accumulation and increased median survival under mild oxidative stress conditions. In the mice, NA-Leu reduced total cholesterol, cholesterol esters, plasma triglycerides, atherosclerotic lesion size, lipid area, and aortic macrophage infiltration, similar to the therapeutic NA dose.
CONCLUSIONS:
Leu amplifies the effects of NA on lipid metabolism, hyperlipidemia and atherosclerosis in mice, at least in part by activation of the AMPK/Sirt1 axis. This combination may be a potential therapeutic alternative for hyperlipidemia and atherosclerosis.
AMPK; C. elegans; Sirt1; atherosclerosis; leucine; lipid metabolism; nicotinic acid
Impact of legumes and plant proteins consumption on cognitive performances in the elderly.
Mazza E, Fava A, Ferro Y, Moraca M, Rotundo S, Colica C, Provenzano F, Terracciano R, Greco M, Foti D, Gulletta E, Russo D, Bosco D, Pujia A, Montalcini T.
J Transl Med. 2017 May 22;15(1):109. doi: 10.1186/s12967-017-1209-5.
Numerous studies have investigated the role of dietary factors in the prevention of cognitive decline, but the short-term effects of food choice on cognitive performance in the elderly are poorly explored. Our aim was to investigate the choice of foods among elderly Italian individuals and the association with cognitive function.
In this longitudinal study, the participants were 214 individuals aged ≥65 years with a score >20 on the Mini Mental State Examination. The cognitive subtest of the ADAS (ADAS-cog) was used to detect cognitive decline progression over 12 months. Food choice was measured by a combination of a 24-h recall, a seven-day diet record and Principal Components Analysis.
The Principal Components Analysis identified four food and four nutrient patterns. MMSE and ADAS-cog score after 1 year were found to be associated with legumes pattern (B = 0.25, p = 0.007; 95% CI 0.07/0.44; and B = -0.10, p = 0.006; CI -0.79/-0.30, respectively). A dietary pattern including plant proteins was independently associated with an improved ADAS-cog after 1 year (B = 0.584, p = 0.04; OR 1.79, CI 0.04-0.42).
The Principal Components Analysis is useful for investigating the choice of foods and nutrients in the elderly. We demonstrated an association of the legumes pattern with cognitive performance.
Cognitive decline; Elderly; Legumes; Mediterranean diet; Plant protein; Principal Components Analysis
Prior weight loss exacerbates the biological drive to gain weight after the loss of ovarian function.
Sherk VD, Jackman MR, Giles ED, Higgins JA, Foright RM, Presby DM, Johnson GC, Houck JA, Houser JL, Oljira R, MacLean PS.
Physiol Rep. 2017 May;5(10). pii: e13272. doi: 10.14814/phy2.13272.
Both the history of obesity and weight loss may change how menopause affects metabolic health. The purpose was to determine whether obesity and/or weight loss status alters energy balance (EB) and subsequent weight gain after the loss of ovarian function. Female lean and obese Wistar rats were randomized to 15% weight loss (WL) or ad libitum fed controls (CON). After the weight loss period, WL rats were kept in EB at the reduced weight for 8 weeks prior to ovariectomy (OVX). After OVX, all rats were allowed to eat ad libitum until weight plateaued. Energy intake (EI), spontaneous physical activity, and total energy expenditure (TEE) were measured with indirect calorimetry before OVX, immediately after OVX, and after weight plateau. Changes in EI, TEE, and weight gain immediately after OVX were similar between lean and obese rats. However, obese rats gained more total weight and fat mass than lean rats over the full regain period. Post-OVX, EI increased more (P ≤ 0.03) in WL rats (58.9 ± 3.5 kcal/d) than CON rats (8.5 ± 5.2 kcal/d), and EI partially normalized (change from pre-OVX: 20.5 ± 4.2 vs. 1.5 ± 4.9 kcal/day) by the end of the study. As a result, WL rats gained weight more rapidly (week 1: 44 ± 20 vs. 7 ± 25 g; P < 0.001) than CON. Prior obesity did not affect changes in EB or weight regain following OVX, whereas a history of weight loss prior to OVX augmented disruptions in EB after OVX, resulting in more rapid weight regain.
OVX; energy balance; weight regain
Meat, dietary heme iron and risk of type 2 diabetes: The Singapore Chinese Health Study.
Talaei M, Wang YL, Yuan JM, Pan A, Koh WP.
Am J Epidemiol. 2017 May 23. doi: 10.1093/aje/kwx156. [Epub ahead of print]
We evaluated the relations of red meat, poultry, fish and shellfish, as well as heme iron intake, with risk of type 2 diabetes (T2D). The Singapore Chinese Health Study is a population-based cohort that recruited 63,257 Chinese adults aged 45-74 years from 1993 to 1998. Usual diet was evaluated by a validated 165-item semi-quantitative food-frequency questionnaire at recruitment. Physician-diagnosed T2D was self-reported during two follow-up interviews in 1999-2004 and 2006-2010. During a mean follow-up of 10.9 years, 5207 incident cases of T2D were reported. The multivariate-adjusted HR (95% CI) for T2D comparing the highest versus the lowest quartile was 1.23 (1.14, 1.33) for red meat (P for trend < 0.001), 1.15 (1.06, 1.24) for poultry (P for trend = 0.004), and 1.07 (0.99, 1.16) for fish/shellfish (P for trend = 0.12). After additional adjustment for heme iron, only red meat intake remained significantly associated with T2D risk (1.13; 1.01, 1.25; P for trend = 0.02). Heme iron was associated with increased T2D risk even after additional adjustment for red meat (1.14; 1.02, 1.28; P for trend = 0.03). In conclusion, red meat and poultry intake was associated with an increased risk of T2D. These associations appeared to be mediated by heme iron: completely for poultry, but only partially for red meat.
epidemiology; fish; heme iron; poultry; prospective studies; red meat; type 2 diabetes
Chocolate intake and risk of clinically apparent atrial fibrillation: the Danish Diet, Cancer, and Health Study.
Mostofsky E, Berg Johansen M, Tjønneland A, Chahal HS, Mittleman MA, Overvad K.
Heart. 2017 May 23. pii: heartjnl-2016-310357. doi: 10.1136/heartjnl-2016-310357. [Epub ahead of print]
To evaluate the association between chocolate intake and incident clinically apparent atrial fibrillation or flutter (AF).
The Danish Diet, Cancer, and Health Study is a large population-based prospective cohort study. The present study is based on 55 502 participants (26 400 men and 29 102 women) aged 50-64 years who had provided information on chocolate intake at baseline. Incident cases of AF were ascertained by linkage with nationwide registries.
During a median of 13.5 years there were 3346 cases of AF. Compared with chocolate intake less than once per month, the rate of AF was lower for people consuming 1-3 servings/month (hazard ratio (HR) 0.90, 95% confidence interval (CI) 0.82 to 0.98), 1 serving/week (HR 0.83, 95% CI 0.74 to 0.92), 2-6 servings/week (HR 0.80, 95% CI 0.71 to 0.91) and ≥1 servings/day (HR 0.84, 95% CI 0.65 to 1.09; p-linear trend <0.0001), with similar results for men and women.
Accumulating evidence indicates that moderate chocolate intake may be inversely associated with AF risk, although residual confounding cannot be ruled out.
Dietary intake of fibre and risk of knee osteoarthritis in two US prospective cohorts.
Dai Z, Niu J, Zhang Y, Jacques P, Felson DT.
Ann Rheum Dis. 2017 May 23. pii: annrheumdis-2016-210810. doi: 10.1136/annrheumdis-2016-210810. [Epub ahead of print]
Dietary fibre reduces body weight and inflammation both of which are linked with knee osteoarthritis (OA). We examined the association between fibre intake and risk of knee OA.
We used data from the Osteoarthritis Initiative (OAI) of 4796 participants and Framingham Offspring Osteoarthritis Study (Framingham) of 1268 persons. Dietary intake of fibre was estimated at baseline, and incident radiographic OA (ROA) and symptomatic OA (SxOA) were followed annually until 48 months in OAI and assessed 9 years later in Framingham. Knee pain worsening was also examined in OAI. Generalised estimating equations were applied in multivariable regression models.
In OAI, we identified 861 knees with SxOA, 152 knees with ROA and 1964 knees with pain worsening among 4051 subjects with valid dietary intake (baseline mean age: 61.2 years; mean body mass index (BMI): 28.6). In Framingham, 143 knees with SxOA and 175 knees with ROA among 971 such subjects (baseline mean age: 53.9 years; mean BMI: 27.0) were identified. In both cohorts, dietary total fibre was inversely associated with risk of SxOA (p trend <0.03) with significantly lower risk at the highest versus lowest quartile (OR (95% CI): 0.70 (0.52, 0.94) for OAI and 0.39 (0.17, 0.88) for Framingham). Furthermore, dietary total and cereal fibre were significantly inversely associated with knee pain worsening in OAI (p trend <0.02). No apparent association was found with ROA.
Findings from two longitudinal studies consistently showed that higher total fibre intake was related to a lower risk of SxOA, while the relation to ROA was unclear.
Epidemiology; Knee Osteoarthritis; Treatment
Relation of total sugars, fructose and sucrose with incident type 2 diabetes: a systematic review and meta-analysis of prospective cohort studies.
Tsilas CS, de Souza RJ, Mejia SB, Mirrahimi A, Cozma AI, Jayalath VH, Ha V, Tawfik R, Di Buono M, Jenkins AL, Leiter LA, Wolever TMS, Beyene J, Khan T, Kendall CWC, Jenkins DJA, Sievenpiper JL.
CMAJ. 2017 May 23;189(20):E711-E720. doi: 10.1503/cmaj.160706.
Sugar-sweetened beverages are associated with type 2 diabetes. To assess whether this association holds for the fructose-containing sugars they contain, we conducted a systematic review and meta-analysis of prospective cohort studies.
We searched MEDLINE, Embase, CINAHL and the Cochrane Library (through June 2016). We included prospective cohort studies that assessed the relation of fructose-containing sugars with incident type 2 diabetes. Two independent reviewers extracted relevant data and assessed risk of bias. We pooled risk ratios (RRs) using random effects meta-analyses. The overall quality of the evidence was assessed using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system.
Fifteen prospective cohort studies (251 261 unique participants, 16 416 cases) met the eligibility criteria, comparing the highest intake (median 137, 35.2 and 78 g/d) with the lowest intake (median 65, 9.7 and 25.8 g/d) of total sugars, fructose and sucrose, respectively. Although there was no association of total sugars (RR 0.91, 95% confidence interval [CI] 0.76-1.09) or fructose (RR 1.04, 95% CI 0.84-1.29) with type 2 diabetes, sucrose was associated with a decreased risk of type 2 diabetes (RR 0.89, 95% CI 0.80-0.98). Our confidence in the estimates was limited by evidence of serious inconsistency between studies for total sugars and fructose, and serious imprecision in the pooled estimates for all 3 sugar categories.
Current evidence does not allow us to conclude that fructose-containing sugars independent of food form are associated with increased risk of type 2 diabetes. Further research is likely to affect our estimates.
Caffeine ingestion acutely enhances muscular strength and power but not muscular endurance in resistance-trained men.
Grgic J, Mikulic P.
Eur J Sport Sci. 2017 May 24:1-8. doi: 10.1080/17461391.2017.1330362. [Epub ahead of print]
The goal of this randomized, double-blind, cross-over study was to assess the acute effects of caffeine ingestion on muscular strength and power, muscular endurance, rate of perceived exertion (RPE), and pain perception (PP) in resistance-trained men. Seventeen volunteers (mean ± SD: age = 26 ± 6 years, stature = 182 ± 9 cm, body mass = 84 ± 9 kg, resistance training experience = 7 ± 3 years) consumed placebo or 6 mg kg-1 of anhydrous caffeine 1 h before testing. Muscular power was assessed with seated medicine ball throw and vertical jump exercises, muscular strength with one-repetition maximum (1RM) barbell back squat and bench press exercises, and muscular endurance with repetitions of back squat and bench press exercises (load corresponding to 60% of 1RM) to momentary muscular failure. RPE and PP were assessed immediately after the completion of the back squat and bench press exercises. Compared to placebo, caffeine intake enhanced 1RM back squat performance (+2.8%; effect size [ES] = 0.19; p = .016), which was accompanied by a reduced RPE (+7%; ES = 0.53; p = .037), and seated medicine ball throw performance (+4.3%, ES = 0.32; p = .009). Improvements in 1RM bench press were not noted although there were significant (p = .029) decreases in PP related to this exercise when participants ingested caffeine. The results point to an acute benefit of caffeine intake in enhancing lower-body strength, likely due to a decrease in RPE; upper-, but not lower-body power; and no effects on muscular endurance, in resistance-trained men. Individuals competing in events in which strength and power are important performance-related factors may consider taking 6 mg kg-1 of caffeine pre-training/competition for performance enhancement.
Fatigue; metabolism; nutrition; performance
Association of Protein Intake with Bone Mineral Density and Bone Mineral Content among Elderly Women: The OSTPRE Fracture Prevention Study.
Isanejad M, Sirola J, Mursu J, Kröger H, Tuppurainen M, Erkkilä AT.
J Nutr Health Aging. 2017;21(6):622-630. doi: 10.1007/s12603-016-0800-4.
It has been hypothesized that high protein intakes are associated with lower bone mineral content (BMC). Previous studies have yielded conflicting results, and thus far no studies have examined the interaction of body mass index (BMI) and physical activity with protein intake in relation to BMC and bone mineral density (BMD).
To evaluate the associations of dietary total protein (TP), animal protein (AP) and plant protein (PP) intakes with BMC and BMD and their changes. We also tested the interactions of protein intake with obesity (BMI ≤30 vs. >30 kg/m2) and physical activity level (passive vs. active). Design/Setting: Prospective cohort study (Osteoporosis Risk-Factor and Fracture-Prevention Study). Participants/measures: At baseline, 554 women aged 65-72 years filled out a 3-day food record and a questionnaire covering data on lifestyle, physical activity, diseases, and medications. The intervention group received calcium 1000 mg/d and cholecalciferol 800 IU for 3 years. The control group received neither supplementation nor placebo. Bone density was measured at baseline and year 3, using dual-energy x-ray absorptiometry. Multivariable regression analyses were conducted to examine the associations between protein intake and BMD and BMC.
In cross-sectional analyses energy-adjusted TP (P≤0·029) and AP (P≤0·045) but not PP (g/d) were negatively associated with femoral neck (FN) BMD and BMC. Women with TP≥1·2 g/kg/body weight (BW) (Ptrend≤0·009) had lower FN, lumbar spine (LS) and total BMD and BMC. In follow-up analysis, TP (g/kg/BW) was inversely associated with LS BMD and LS BMC. The detrimental associations were stronger in women with BMI<30 kg/m2. In active women, TP (g/kg/BW) was positively associated with LS BMD and FN BMC changes.
This study suggests detrimental associations between protein intake and bone health. However, these negative associations may be counteracted by BMI >30 kg/m2 and physical activity.
Dietary protein intake; body mass index; bone mineral density; physical activity; source of protein intake
A NAD+/PARP1/SIRT1 axis in Aging.
Mendelsohn AR, Larrick J.
Rejuvenation Res. 2017 May 24. doi: 10.1089/rej.2017.1980. [Epub ahead of print]
NAD+ levels decline with age in diverse animals from C. elegans to mice. Raising NAD+ levels by dietary supplementation with the NAD+ precursors NR or NMN improves mitochondrial function and muscle, neural and melanocyte stem cell function in mice, as well as increasing murine lifespan. Decreased NAD+ levels with age reduce SIRT1 function and the mitochondrial unfolded protein response, which can be overcome by NR supplementation. Decreased NAD+ levels also cause the NAD+-binding protein DBC1 to form a complex with PARP1, inhibiting PARP catalytic activity. Old mice have increased amounts of DBC1-PARP1 complexes, lower PARP activity, increased DNA damage, and reduced non-homologous end joining (NHEJ) and homologous recombination (HR) repair. DBC1-PARP1 complexes in old mice can be broken by increasing NAD+ levels through treatment with NMN, reducing DNA damage and restoring PARP activity to youthful levels. The mechanism of declining NAD+ levels and its fundamental importance to aging are yet to be elucidated. There is a correlation of PARP activity with mammalian lifespan, which suggests that a NAD+/SIRT1/PARP1 axis may be more significant than the modest effects on lifespan observed for NR supplementation in old mice. A NAD+/PARP1/SIRT1 axis may link NAD+ levels and DNA damage with the apparent epigenomic DNA methylation "clocks" that have been described.
Oral health in relation to all-cause mortality: the IPC cohort study.
Adolph M, Darnaud C, Thomas F, Pannier B, Danchin N, Batty GD, Bouchard P.
Sci Rep. 2017 Mar 15;7:44604. doi: 10.1038/srep44604.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5353629/pdf/srep44604.pdf
We evaluated the association between oral health and mortality. The study population comprised 76,188 subjects aged 16-89 years at recruitment. The mean follow-up time was 3.4 ± 2.4 years. Subjects with a personal medical history of cancer or cardiovascular disease, and deaths by casualty, were excluded from the analysis. A full-mouth clinical examination was performed in order to assess dental plaque, dental calculus and gingival inflammation. The number of teeth and functional masticatory units <5 were recorded. Causes of death were ascertained from death certificates. Mortality risk was evaluated using a Cox regression model with a propensity score calibrated for each oral exposure. All-cause mortality risk was raised with dental plaque, gingival inflammation, >10 missing teeth and functional masticatory units <5. All-cancer mortality was positively associated with dental plaque and gingival inflammation. Non-cardiovascular and non-cancer mortality were also positively associated with high dental plaque (HR = 3.30, [95% CI: 1.76-6.17]), high gingival inflammation (HR = 2.86, [95% CI: 1.71-4.79]), >10 missing teeth (HR = 2.31, [95% CI: 1.40-3.82]) and functional masticatory units <5 (HR = 2.40 [95% CI 1.55-3.73]). Moreover, when ≥3 oral diseases were cumulated in the model, the risk increased for all-cause mortality (HR = 3.39, [95% CI: 2.51-5.42]), all-cancer mortality (HR = 3.59, [95% CI: 1.23-10.05]) and non-cardiovascular and non-cancer mortality (HR = 4.71, [95% CI: 1.74-12.7]). The present study indicates a positive linear association between oral health and mortality.
Association of Grip Strength With Risk of All-Cause Mortality, Cardiovascular Diseases, and Cancer in Community-Dwelling Populations: A Meta-analysis of Prospective Cohort Studies.
Wu Y, Wang W, Liu T, Zhang D.
J Am Med Dir Assoc. 2017 Jun 1;18(6):551.e17-551.e35. doi: 10.1016/j.jamda.2017.03.011.
http://jech.bmj.com/content/jech/early/2016/06/08/jech-2015-206776.full.pdf
Grip strength has been linked to risk of adverse health outcomes. This study aimed to quantitatively assess the associations between grip strength and risk of all-cause mortality, cardiovascular diseases, and cancer in community-dwelling populations.
A meta-analysis of prospective cohort studies was conducted.
Embase, Medline, and PubMed were searched from inception to September 14, 2016. Study-specific most adjusted hazard ratios (HRs) and 95% confidence intervals (CIs) were combined with a random effects model. Dose-response relation was assessed by restricted cubic splines.
Data were obtained from 42 studies including 3,002,203 participants. For the lowest versus highest category of grip strength, the HRs (95% CIs) were 1.41 (1.30-1.52) for all-cause mortality, 1.63 (1.36-1.96) for cardiovascular diseases and 0.89 (0.66-1.20) for cancer. The HRs (95% CIs) per 5-kg decrease in grip strength were 1.16 (1.12-1.20) for all-cause mortality, 1.21 (1.14-1.29) for cardiovascular diseases, 1.09 (1.05-1.14) for stroke, 1.07 (1.03-1.11) for coronary heart disease, and 1.01 (0.98-1.05) for cancer. The observed associations did not differ by sex, and remained after excluding participants with cardiovascular diseases or cancer at baseline. Adjustment for other covariates could not fully explain the observed associations. Linear relationships were found between grip strength and risk of all-cause mortality and cardiovascular diseases within grip strength of 56 kg.
Grip strength was an independent predictor of all-cause mortality and cardiovascular diseases in community-dwelling populations.
Grip strength; all-cause mortality; cancer; cardiovascular diseases; meta-analysis
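The per-5-kg hazard ratios above, together with the reported linearity on the log scale, imply a simple rescaling to other decrements. A minimal sketch (the rescale_hr helper is ours, for illustration only; it assumes log-linearity holds across the measured range):

```python
import math

def rescale_hr(hr, from_units, to_units):
    """Rescale a hazard ratio reported per `from_units` of exposure to
    `to_units`, assuming a log-linear dose-response relationship."""
    return math.exp(math.log(hr) * to_units / from_units)

# All-cause mortality HR of 1.16 per 5-kg decrease in grip strength (abstract):
hr_per_1kg = rescale_hr(1.16, 5, 1)    # ≈ 1.03 per 1-kg decrease
hr_per_10kg = rescale_hr(1.16, 5, 10)  # ≈ 1.35 per 10-kg decrease
```

The 10-kg rescaling is simply squaring the per-5-kg ratio, since hazard ratios multiply on the log scale.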
Extreme changes in dietary sodium affect daily variability and level of blood pressure in borderline hypertensive patients.
James GD, Pecker MS, Pickering TG, Jackson S, Difabio B, Carroll L, Laragh JH.
This study examined the effect of large changes in dietary sodium on the average ambulatory blood pressure and its variability in 19 patients with uncomplicated borderline hypertension. Each patient participated in a 16-week protocol that consisted of four 4-week periods of different sodium intake (medium (120-160 mEq/day) during periods 1 and 3, and low (< 40 mEq/day) or high (> 225 mEq/day) during either period 2 or 4). The 24-hour urine sodium during the low and high periods averaged 18 and 327 mEq/day, respectively. Ambulatory blood pressure monitoring was done at the end of the fourth week of the low and high diet periods. During monitoring, pressures were recorded every 15 minutes while awake; in addition, patients kept diaries noting activities, posture, and situation at each measurement. The results show that there was a decline of 16/7 mmHg in the average ambulatory awake systolic and diastolic pressures from the high sodium to low sodium diets. Corresponding casual pressures decreased an average of 15 and 8 mmHg, respectively. In examining the factors associated with ambulatory pressure variability, systolic pressure showed greater variation by activity on a low sodium diet than on the high. The findings suggest that sodium restriction has a variable, but in some cases marked, effect on lowering the ambulatory blood pressure in borderline mildly hypertensive patients and that sodium balance may be important to consider when examining ambulatory blood pressure variability.
How telling patients of a possible side-effect may make it more likely
Study offers insight into how an expectation of side-effects may make patients more likely to perceive them
Thomson Reuters Posted: May 27, 2017
http://www.cbc.ca/news/health/placebo-effect-opposite-1.4133814
>>>>>>>>>>>>>>>>>>>>>>>>>
Adverse events associated with unblinded, but not with blinded, statin therapy in the Anglo-Scandinavian Cardiac Outcomes Trial-Lipid-Lowering Arm (ASCOT-LLA): a randomised double-blind placebo-controlled trial and its non-randomised non-blind extension phase.
Gupta A, Thompson D, Whitehouse A, Collier T, Dahlof B, Poulter N, Collins R, Sever P; ASCOT Investigators.
Lancet. 2017 May 2. pii: S0140-6736(17)31075-9. doi: 10.1016/S0140-6736(17)31075-9. [Epub ahead of print]
In blinded randomised controlled trials, statin therapy has been associated with few adverse events (AEs). By contrast, in observational studies, larger increases in many different AEs have been reported than in blinded trials.
In the Lipid-Lowering Arm of the Anglo-Scandinavian Cardiac Outcomes Trial, patients aged 40–79 years with hypertension, at least three other cardiovascular risk factors, and fasting total cholesterol concentrations of 6·5 mmol/L or lower, and who were not taking a statin or fibrate, had no history of myocardial infarction, and were not being treated for angina were randomly assigned to atorvastatin 10 mg daily or matching placebo in a randomised double-blind placebo-controlled phase. In a subsequent non-randomised non-blind extension phase (initiated because of early termination of the trial because efficacy of atorvastatin was shown), all patients were offered atorvastatin 10 mg daily open label. We classified AEs using the Medical Dictionary for Regulatory Activities. We blindly adjudicated all reports of four prespecified AEs of interest—muscle-related, erectile dysfunction, sleep disturbance, and cognitive impairment—and analysed all remaining AEs grouped by system organ class. Rates of AEs are given as percentages per annum.
The blinded randomised phase was done between February, 1998, and December, 2002; we included 10 180 patients in this analysis (5101 [50%] in the atorvastatin group and 5079 [50%] in the placebo group), with a median follow-up of 3·3 years (IQR 2·7–3·7). The non-blinded non-randomised phase was done between December, 2002, and June, 2005; we included 9899 patients in this analysis (6409 [65%] atorvastatin users and 3490 [35%] non-users), with a median follow-up of 2·3 years (2·2–2·4). During the blinded phase, muscle-related AEs (298 [2·03% per annum] vs 283 [2·00% per annum]; hazard ratio 1·03 [95% CI 0·88–1·21]; p=0·72) and erectile dysfunction (272 [1·86% per annum] vs 302 [2·14% per annum]; 0·88 [0·75–1·04]; p=0·13) were reported at a similar rate by participants randomly assigned to atorvastatin or placebo. The rate of reports of sleep disturbance was significantly lower among participants assigned atorvastatin than assigned placebo (149 [1·00% per annum] vs 210 [1·46% per annum]; 0·69 [0·56–0·85]; p=0·0005). Too few cases of cognitive impairment were reported for a statistically reliable analysis (31 [0·20% per annum] vs 32 [0·22% per annum]; 0·94 [0·57–1·54]; p=0·81). We observed no significant differences in the rates of all other reported AEs, with the exception of an excess of renal and urinary AEs among patients assigned atorvastatin (481 [1·87%] per annum vs 392 [1·51%] per annum; 1·23 [1·08–1·41]; p=0·002). By contrast, during the non-blinded non-randomised phase, muscle-related AEs were reported at a significantly higher rate by participants taking statins than by those who were not (161 [1·26% per annum] vs 124 [1·00% per annum]; 1·41 [1·10–1·79]; p=0·006).
We noted no significant differences between statin users and non-users in the rates of other AEs, with the exception of musculoskeletal and connective tissue disorders (992 [8·69% per annum] vs 831 [7·45% per annum]; 1·17 [1·06–1·29]; p=0·001) and blood and lymphatic system disorders (114 [0·88% per annum] vs 80 [0·64% per annum]; 1·40 [1·04–1·88]; p=0·03), which were reported more commonly by statin users than by non-users.
These analyses illustrate the so-called nocebo effect, with an excess rate of muscle-related AE reports only when patients and their doctors were aware that statin therapy was being used and not when its use was blinded. These results will help assure both physicians and patients that most AEs associated with statins are not causally related to use of the drug and should help counter the adverse effect on public health of exaggerated claims about statin-related side-effects.
Pfizer, Servier Research Group, and Leo Laboratories.
>>>>>>>>>>>>>>>>>>>>>>
Statin-associated muscle symptoms: beware of the nocebo effect.
Pedro-Botet J, Rubiés-Prat J.
Lancet. 2017 May 2. pii: S0140-6736(17)31163-7. doi: 10.1016/S0140-6736(17)31163-7. [Epub ahead of print] No abstract available.
Patient-reported statin intolerance, predominantly due to statin-associated muscle symptoms (SAMS), is a common and difficult-to-manage condition affecting millions of patients worldwide.1 Different expert panels have proposed various definitions and classifications for statin intolerance.2,3 However, the development of SAMS does not necessarily signify statin intolerance since statin therapy might not always be pharmacologically involved. Moreover, some patients with SAMS might be able to tolerate a lower dose than the dose that leads to SAMS, longer dose intervals, or an alternative statin.
Resting heart rate and the risk of cardiovascular disease, total cancer, and all-cause mortality - A systematic review and dose-response meta-analysis of prospective studies.
Aune D, Sen A, ó'Hartaigh B, Janszky I, Romundstad PR, Tonstad S, Vatten LJ.
Nutr Metab Cardiovasc Dis. 2017 Apr 21. pii: S0939-4753(17)30085-6. doi: 10.1016/j.numecd.2017.04.004. [Epub ahead of print]
http://sci-hub.cc/10.1016/j.numecd.2017.04.004
BACKGROUND AND AIM:
Epidemiological studies have reported increased risk of cardiovascular disease, cancer and all-cause mortality with greater resting heart rate; however, the evidence is not consistent. Differences by gender, adjustment for confounding factors, and the potential impact of subclinical disease are not clear. A previous meta-analysis missed a large number of studies, and data for atrial fibrillation have not been summarized before. We therefore aimed to clarify these associations in a systematic review and meta-analysis of prospective studies.
METHODS AND RESULTS:
PubMed and Embase were searched up to 29 March 2017. Summary RRs and 95% confidence intervals (CIs) were calculated using random effects models. Eighty-seven studies were included. The summary RR per 10 beats per minute increase in resting heart rate was 1.07 (95% CI: 1.05-1.10, I2 = 61.9%, n = 31) for coronary heart disease, 1.09 (95% CI: 1.00-1.18, I2 = 62.3%, n = 5) for sudden cardiac death, 1.18 (95% CI: 1.10-1.27, I2 = 74.5%, n = 8) for heart failure, 0.97 (95% CI: 0.92-1.02, I2 = 91.4%, n = 9) for atrial fibrillation, 1.06 (95% CI: 1.02-1.10, I2 = 59.5%, n = 16) for total stroke, 1.15 (95% CI: 1.11-1.18, I2 = 84.3%, n = 35) for cardiovascular disease, 1.14 (95% CI: 1.06-1.23, I2 = 90.2%, n = 12) for total cancer, and 1.17 (95% CI: 1.14-1.19, I2 = 94.0%, n = 48) for all-cause mortality. There was a positive dose-response relationship for all outcomes except for atrial fibrillation, for which there was a J-shaped association.
This meta-analysis found an increased risk of coronary heart disease, sudden cardiac death, heart failure, atrial fibrillation, stroke, cardiovascular disease, total cancer and all-cause mortality with greater resting heart rate.
All-cause mortality; Atrial fibrillation; Cancer; Cardiovascular disease; Coronary heart disease; Heart failure; Stroke; Sudden cardiac death
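The summary RRs above were pooled with random-effects models. As an illustration of how such pooling works, here is a minimal DerSimonian-Laird sketch in Python; the study inputs (log-RRs and standard errors) are hypothetical, not the 87 studies analysed in the review.

```python
import math

def pool_random_effects(log_rrs, ses):
    """Pool per-study log relative risks with DerSimonian-Laird random effects."""
    w = [1.0 / se ** 2 for se in ses]                 # inverse-variance weights
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))  # Cochran's Q
    df = len(log_rrs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_re = [1.0 / (se ** 2 + tau2) for se in ses]     # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_rrs)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0  # I2 heterogeneity, %
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled),
            i2)

# Three hypothetical studies reporting RR per 10 bpm (illustrative inputs only)
rr, lo, hi, i2 = pool_random_effects(
    [math.log(1.05), math.log(1.10), math.log(1.07)],
    [0.02, 0.03, 0.025],
)
```

When Cochran's Q does not exceed its degrees of freedom, the between-study variance estimate is truncated at zero and the result collapses to the fixed-effect pooled RR, which is why I2 is also floored at 0%.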
Does Time of Sampling or Food Intake Alter Thyroid Function Test?
Mahadevan S, Sadacharan D, Kannan S, Suryanarayanan A.
Indian J Endocrinol Metab. 2017 May-Jun;21(3):369-372. doi: 10.4103/ijem.IJEM_15_17.
A common question from patients and laboratories is whether the blood sample for thyroid-stimulating hormone (TSH) and free T4 (fT4) needs to be collected in a fasting state, and whether the time of day at which the sample is collected matters.
The aim of the study was to study the impact of the time of day and food intake on levels of TSH and fT4.
SETTINGS AND DESIGN:
Cross-sectional prospective data collection.
SUBJECTS AND METHODS:
We prospectively collected data from 52 volunteers who were not known to have any thyroid disorder and were not on any thyroid-related medication. Blood samples for TSH and fT4 were collected on day 1 at 8 am and 10 am with the patient remaining in the fasting state till the collection of the second sample at 10 am. On day 2, samples were collected at 8 am (fasting state) and at 10 am (2 h postprandial state). In 22 volunteers from the group, the tests were performed in three common assay techniques including chemiluminescent assays (chemiluminescent immunoassay [CLIA] and chemiluminescent microparticle immunoassay [CMIA]) and enzyme-linked fluorescence assay.
The mean (standard deviation) and median (interquartile range) TSH during the extended fast on day 1 were 2.26 ± 1.23 and 2.19 (1.21-3.18), which were significantly lower than the fasting TSH performed on day 1 (P < 0.001). Similarly, the values of TSH 2 h postmeal on day 2 of the testing (mean 1.93 ± 1.12; median 1.64 [1.06-2.86]) were significantly lower than TSH performed in the fasting state on day 2 (P < 0.001). The mean fT4 value was 1.01 ± 0.15 with a median of 0.99 (0.91-1.11) in the fasting state, and there was no significant difference between the fT4 values performed during the fasting, extended fasting, and postmeal states. Among the volunteers in whom the test was performed in the three different assay techniques, the TSH was not statistically different in the fasting (P = 0.801), extended fasting (P = 0.955), or postprandial samples (P = 0.989). The fT4 values did not vary significantly when done by the same assay method. However, the fT4 levels varied significantly (P < 0.001) when done by another assay method.
We conclude that the timing of the test affects TSH values, and that this should be factored into decisions on the diagnosis of subclinical hypothyroidism.
Fasting; postprandial; thyroid-stimulating hormone; timing of test
Mastery and Depressive Symptoms: How Does Mastery Influence the Impact of Stressors From Midlife to Old Age?
Nicolaisen M, Moum T, Thorsen K.
J Aging Health. 2017 Apr 1:898264317705782. doi: 10.1177/0898264317705782. [Epub ahead of print]
The objective of this research is to study depressive symptoms (DS) among adults aged 40 to 79 years and examine how mastery influences the impact of sociodemographic, socioeconomic, and health factors on DS.
We used a sample from the Norwegian Life Course, Generation, and Gender (LOGG) study (N = 6,879) and analyzed, via regression analyses, how mastery influences the effects of the independent variables on DS.
Mastery affected DS directly and influenced the effects of sociodemographic, socioeconomic, and health factors on DS. There was a stronger relationship between stressors and DS among respondents with low than high mastery. DS were most prevalent among people aged 70 to 79 years. When mastery was also controlled for, the oldest group (70-79 years) had significantly fewer DS than those aged 40 to 49 years.
The influence of mastery and stressors on DS seems to vary along the life span. The result that mastery was a relatively stronger buffer against DS in midlife than in old age is discussed.
age groups; life course; mental health; psychosocial factors
Remote tissue conditioning - an emerging approach for inducing body-wide protection against diseases of ageing.
Kim B, Brandli A, Mitrofanis J, Stone J, Purushothuman S, Johnstone DM.
Ageing Res Rev. 2017 May 24. pii: S1568-1637(17)30005-3. doi: 10.1016/j.arr.2017.05.005. [Epub ahead of print] Review.
We have long accepted that exercise is 'good for us'; that - put more rigorously - moderate exercise is associated with not just aerobic fitness but also reduced morbidity and reduced mortality from cardiovascular disease and even malignancies. Caloric restriction (moderate hunger) and our exposure to dietary phytochemicals are also emerging as stresses which are 'good for us' in the same sense. This review focuses on an important extension of this concept: that stress localized within the body (e.g. in a limb) can induce resilience in tissues throughout the body. We describe evidence for the efficacy of two 'remote' protective interventions - remote ischemic conditioning and remote photobiomodulation - and discuss the mechanisms underlying their protective actions. While the biological phenomenon of remote tissue conditioning is only partially understood, it holds promise for protecting critical-to-life tissues while mitigating risks and practical barriers to direct conditioning of these tissues.
hormesis; ischemic conditioning; photobiomodulation; remote; resilience; stress response
A 1-Hour Walk, 3 Times a Week, Has Benefits for Dementia
By GRETCHEN REYNOLDS MAY 24, 2017
https://www.nytimes.com/2017/05/24/well/move/a-1-hour-walk-3-times-a-week-has-benefits-for-dementia.html?rref=collection%2Fsectioncollection%2Fhealth&action=click&contentCollection=health&region=rank&module=package&version=highlights&contentPlacement=8&pgtype=sectionfront
>>>>>>>>>>>>>>>>>
Aerobic exercise promotes executive functions and impacts functional neural activity among older adults with vascular cognitive impairment.
Hsu CL, Best JR, Davis JC, Nagamatsu LS, Wang S, Boyd LA, Hsiung GR, Voss MW, Eng JJ, Liu-Ambrose T.
Br J Sports Med. 2017 Apr 21. pii: bjsports-2016-096846. doi: 10.1136/bjsports-2016-096846. [Epub ahead of print]
Vascular cognitive impairment (VCI) results from cerebrovascular disease, and worldwide, it is the second most common type of cognitive dysfunction. While targeted aerobic training is a promising approach to delay the progression of VCI by reducing cardiometabolic risk factors, few randomised controlled trials to date have specifically assessed the efficacy of aerobic training on cognitive and brain outcomes in this group at risk for functional decline.
To examine the effect of moderate-intensity aerobic training on executive functions and functional neural activity among older adults with mild subcortical ischaemic VCI (SIVCI).
Older adults with mild SIVCI were randomly assigned to: (1) 6-month, 3×/week aerobic training (n=10) or (2) usual care (control; n=11). Participants completed functional MRI (fMRI) at baseline and trial completion. During the fMRI sessions, behavioural performance on the Eriksen flanker task and task-evoked neural activity were assessed.
At trial completion, after adjusting for baseline general cognition, total white matter lesion volume and flanker performance, compared with the control group, the aerobic training group significantly improved flanker task reaction time. Moreover, compared with the controls, the aerobic training group demonstrated reduced activation in the left lateral occipital cortex and right superior temporal gyrus. Reduced activity in these brain regions was significantly associated with improved (ie, faster) flanker task performance at trial completion.
Aerobic training among older adults with mild SIVCI can improve executive functions and neural efficiency of associated brain areas. Future studies with greater sample size should be completed to replicate and extend these findings.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Aerobic exercise and vascular cognitive impairment: A randomized controlled trial.
Liu-Ambrose T, Best JR, Davis JC, Eng JJ, Lee PE, Jacova C, Boyd LA, Brasher PM, Munkacsy M, Cheung W, Hsiung GR.
Neurology. 2016 Nov 15;87(20):2082-2090. Epub 2016 Oct 19.
To assess the efficacy of a progressive aerobic exercise training program on cognitive and everyday function among adults with mild subcortical ischemic vascular cognitive impairment (SIVCI).
This was a proof-of-concept single-blind randomized controlled trial comparing a 6-month, thrice-weekly, progressive aerobic exercise training program (AT) with usual care plus education on cognitive and everyday function with a follow-up assessment 6 months after the formal cessation of aerobic exercise training. Primary outcomes assessed were general cognitive function (Alzheimer's Disease Assessment Scale-Cognitive subscale [ADAS-Cog]), executive functions (Executive Interview [EXIT-25]), and activities of daily living (Alzheimer's Disease Cooperative Study-Activities of Daily Living [ADCS-ADL]).
Seventy adults randomized to aerobic exercise training or usual care were included in intention-to-treat analyses (mean age 74 years, 51% female, n = 35 per group). At the end of the intervention, the aerobic exercise training group had significantly improved ADAS-Cog performance compared with the usual care plus education group (-1.71 point difference, 95% confidence interval [CI] -3.15 to -0.26, p = 0.02); however, this difference was not significant at the 6-month follow-up (-0.63 point difference, 95% CI -2.34 to 1.07, p = 0.46). There were no significant between-group differences at intervention completion and at the 6-month follow-up in EXIT-25 or ADCS-ADL performance. Examination of secondary measures showed between-group differences at intervention completion favoring the AT group in 6-minute walk distance (30.35 meter difference, 95% CI 5.82 to 54.86, p = 0.02) and in diastolic blood pressure (-6.89 mm Hg difference, 95% CI -12.52 to -1.26, p = 0.02).
This study provides preliminary evidence for the efficacy of 6 months of thrice-weekly progressive aerobic training in community-dwelling adults with mild SIVCI, relative to usual care plus education.
Anti-Inflammatory Effects of the Mediterranean Diet in the Early and Late Stages of Atheroma Plaque Development.
Casas R, Urpi-Sardà M, Sacanella E, Arranz S, Corella D, Castañer O, Lamuela-Raventós RM, Salas-Salvadó J, Lapetra J, Portillo MP, Estruch R.
Mediators Inflamm. 2017;2017:3674390. doi: 10.1155/2017/3674390. Epub 2017 Apr 18.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5412172/pdf/MI2017-3674390.pdf
Objective. To evaluate the long-term effects of a Mediterranean diet (MeDiet) intervention on the plasma concentrations of inflammatory and plaque stability-related molecules in elderly people at high risk for cardiovascular disease. Design and Setting. 66 participants from primary care centers affiliated with the Hospital Clinic of Barcelona were randomized into 3 groups: MeDiet plus extra virgin olive oil (EVOO) or nuts and a low-fat diet (LFD). At baseline and at 3 and 5 years, we evaluated the changes in the plasma concentrations of 24 inflammatory biomarkers related to the different stages of the atherosclerotic process by Luminex®. Results. At 3 and 5 years, both MeDiet groups showed a significant reduction of IL-6, IL-8, MCP-1, and MIP-1β (P < 0.05; all) compared to LFD. IL-1β, IL-5, IL-7, IL-12p70, IL-18, TNF-α, IFN-γ, GCSF, GMCSF, and ENA78 (P < 0.05; all) only decreased in the MeDiet+EVOO group and E-selectin and sVCAM-1 (P < 0.05; both) in the MeDiet+nuts group. Conclusions. Long-term adherence to MeDiet decreases the plasma concentrations of inflammatory biomarkers related to different steps of atheroma plaque development in elderly persons at high cardiovascular risk.
Moderate-to-high normal levels of thyrotropin is a risk factor for urinary incontinence and an unsuitable quality of life in women over 65 years.
Cuevas-Romero E, Sánchez-Cardiel A, Zamora-Gallegos AM, Cruz-Lumbreras R, Quintanilla DL, Castelán F, Martínez-Gómez M.
Clin Exp Pharmacol Physiol. 2017 May 28. doi: 10.1111/1440-1681.12788. [Epub ahead of print]
The present study aimed to investigate the relationship between normal serum concentrations of thyrotropin (TSH) and urinary incontinence (UI), urinary infections, and quality of life in old women. Euthyroid post-menopausal women without sarcopenia, estrogen replacement, emotional illness, and/or cancer were enrolled as participants. Anthropometric indicators, serum glucose and estradiol, and thyroid profile were measured. Sociodemographic, clinical, physical activity, and quality of life (SF-36) surveys were applied. The one-hour pad test and the International Consultation on Incontinence Questionnaire Short Form (ICIQ-SF) were used to determine UI. Urinalysis was also done. In agreement with results from the pad test (cut-off point ≥ 1.4 g), the ICIQ-SF revealed that ~50% of the women were incontinent. A high percentage of women had moderate-to-high bacteriuria and urinary infections. Logistic regression analysis showed that age is a risk factor for both UI and urinary infection. Neither diabetes, number of pregnancies or childbirths, urinary infections, nor bacteriuria influenced the presence of UI. Allocating women into four groups according to their age (<65 or ≥65 years old) and TSH concentrations (0.3-1.9 or 2-10 μIU/mL), we found that moderate-to-high normal levels of TSH are a risk factor for UI and a worse quality of life in the oldest women. Our results highlight the benefit of measuring TSH concentrations in post-menopausal women.
Ageing; ICIQ-SF; pad test; quality of life; thyroid hormones; thyrotropin
Evidence of the Anti-Inflammatory Effects of Probiotics and Synbiotics in Intestinal Chronic Diseases.
Plaza-Díaz J, Ruiz-Ojeda FJ, Vilchez-Padial LM, Gil A.
Nutrients. 2017 May 28;9(6). pii: E555. doi: 10.3390/nu9060555. Review.
Probiotics and synbiotics are used to treat chronic diseases, principally due to their role in immune system modulation and the anti-inflammatory response. The present study reviewed the effects of probiotics and synbiotics on intestinal chronic diseases in in vitro, animal, and human studies, particularly in randomized clinical trials. The selected probiotics exhibit in vitro anti-inflammatory properties. Probiotic strains and cell-free supernatants reduced the expression of pro-inflammatory cytokines via action that is principally mediated by toll-like receptors. Probiotic administration improved the clinical symptoms, histological alterations, and mucus production in most of the evaluated animal studies, but some results suggest that caution should be taken when administering these agents in the relapse stages of inflammatory bowel disease (IBD). In addition, no effects on chronic enteropathies were reported. Probiotic supplementation appears to be well tolerated, effective, and safe in patients with IBD, in both Crohn's disease (CD) and ulcerative colitis (UC). Indeed, probiotics such as Bifidobacterium longum 536 improved the clinical symptoms in patients with mild to moderate active UC. Although it has been proposed that probiotics can provide benefits in certain conditions, the risks and benefits should be carefully assessed before initiating any therapy in patients with IBD. For this reason, further studies are required to understand the precise mechanism by which probiotics and synbiotics affect these diseases.
anti-inflammatory effects; inflammatory bowel diseases; intestinal diseases; probiotics
Is There a Dose-Response Relationship between Tea Consumption and All-Cause, CVD, and Cancer Mortality?
Yan Y, Sui X, Yao B, Lavie CJ, Blair SN.
J Am Coll Nutr. 2017 May-Jun;36(4):281-286. doi: 10.1080/07315724.2016.1261054.
A small change in tea consumption at the population level could have a large impact on public health. However, evidence on the health benefits of tea intake among Americans is inconclusive.
To evaluate the association between tea consumption and all-causes, cardiovascular disease (CVD) and cancer mortality in the Aerobics Center Longitudinal study (ACLS).
11808 participants (aged 20-82 years), initially free of CVD and cancer, were enrolled in the ACLS and followed for mortality. Participants provided baseline self-reports of tea consumption (cups/day). During a median follow-up of 16 years, 842 participants died; of these, 250 died from CVD and 345 from cancer. A Cox proportional hazards model was used to produce hazard ratios (HRs) and 95% confidence intervals (CIs).
Compared with participants consuming no tea, tea drinkers had a survival advantage (log-rank χ² = 10.2, df = 3, P = 0.017); however, the multivariate hazard ratios (HRs) of all-cause mortality for those drinking 1-7, 8-14, and >14 cups/week were 0.95 (95% CI, 0.81-1.12), 1.00 (95% CI, 0.82-1.22), and 0.98 (95% CI, 0.76-1.25), respectively (P for linear trend = 0.83). The multivariate HRs were 1.16 (95% CI, 0.86-1.56), 1.22 (95% CI, 0.85-1.76), and 0.94 (95% CI, 0.56-1.54) for CVD mortality (P for linear trend = 0.47), and 0.97 (95% CI, 0.75-1.25), 0.85 (95% CI, 0.60-1.16), and 0.94 (95% CI, 0.64-1.38) for cancer mortality (P for trend = 0.62).
Weak or null relationships between tea consumption and all-cause, CVD, or cancer mortality were observed in the ACLS.
Tea consumption; all-cause mortality; cancer mortality; cardiovascular disease mortality; survival probability
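The Cox-model HRs above are reported with 95% CIs on the ratio scale; because inference is done on the log scale, the standard error of log(HR) can be recovered from the CI bounds, which makes it easy to sanity-check significance. A small sketch using the abstract's all-cause figure for 1-7 cups/week (HR 0.95, 95% CI 0.81-1.12):

```python
import math

def se_log_hr(lower, upper, z=1.96):
    """Standard error of log(HR), recovered from a 95% CI on the ratio scale."""
    return (math.log(upper) - math.log(lower)) / (2.0 * z)

def wald_z(hr, lower, upper):
    """Wald z statistic for H0: HR = 1, from a reported HR and its 95% CI."""
    return math.log(hr) / se_log_hr(lower, upper)

# All-cause mortality, 1-7 cups/week vs none: HR 0.95 (95% CI, 0.81-1.12)
z = wald_z(0.95, 0.81, 1.12)
```

Here |z| falls below 1.96, matching the abstract's null finding; the same check applies to any of the reported HR/CI pairs, assuming the CIs were computed symmetrically on the log scale.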
Website Flags Wrongly Paywalled Papers
By Dalmeet Singh Chawla
http://www.the-scientist.com/?articles.view/articleNo/49544/title/Website-Flags-Wrongly-Paywalled-Papers/&utm_campaign=NEWSLETTER_TS_The-Scientist-Daily_2016&utm_source=hs_email&utm_medium=email&utm_content=52560558&_hsenc=p2ANqtz-86oKijzDpXC78E2YmS1T6zRMQM6HsF0Xh_0vHT8yyQtRsCDgbLIqqlmW__DbzKhTWmaymYs5i1w5Oc25iDs3aGRPF4PA&_hsmi=52560558
Thousands of open access papers have mistakenly asked readers to pay access fees, but publishers are correcting the errors.
[There otta be a law.]
Opioid epidemic fuelled by 1 paragraph in journal, doctors say
CBC News Posted: May 31, 2017
http://www.cbc.ca/news/health/opioid-letter-nejm-1.4140182
http://www.nejm.org/doi/full/10.1056/NEJMc1700150?query=featured_home
http://www.nejm.org/doi/10.1056/NEJM198001103020221
Gender differences and similarities in effects of nonpharmacologic approaches to the treatment of mild hypertension.
Wassertheil-Smoller S, Davis BR, Oberman A, Blaufox MD, Kirchner K.
TAIM, the Trial of Antihypertensive Interventions and Management, studied the effects of dietary sodium restriction or weight reduction, alone and in combination with low-dose diuretic or beta blocker, on blood pressure after 6 months. The responses of men compared with women to these interventions are presented for those persons randomized to placebo drug. Men undergoing a weight-reduction intervention were able to lose more weight (5.9 kg) than women (3.1 kg), P ≤ .05. Men also had a greater percentage of weight loss and a greater reduction in body mass index (BMI), although not significantly so. Weight loss was correlated with a decrease in triglycerides (r = 0.37), but not in cholesterol. The weight-reduction intervention lowered triglycerides more in men (-81 mg/dl) than in women (-21 mg/dl; P = .008). There were no sex differences in ability to reduce sodium or increase potassium for those in the sodium restriction group. Men and women decreased their sodium by 36 mmol/day and 25 mmol/day, respectively, and increased their potassium by 13 mmol/day and 11 mmol/day, respectively. Blood pressure response at 6 months was greater in men than in women on weight reduction (a drop in diastolic pressure of 11 mmHg in men vs 7 mmHg in women, P = .04). Sodium restriction had a similar effect on blood pressure in both sexes, and among men resulted in a significantly smaller reduction in blood pressure than did weight reduction.
Prevalence, Correlates, and Prognosis of Healthy Vascular Aging in a Western Community-Dwelling Cohort: The Framingham Heart Study.
Niiranen TJ, Lyass A, Larson MG, Hamburg NM, Benjamin EJ, Mitchell GF, Vasan RS.
Hypertension. 2017 May 30. pii: HYPERTENSIONAHA.117.09026. doi: 10.1161/HYPERTENSIONAHA.117.09026. [Epub ahead of print]
Hypertension and increased vascular stiffness are viewed as inevitable parts of aging. To elucidate whether the age-related decrease in vascular function is avoidable, we assessed the prevalence, correlates, and prognosis of healthy vascular aging (HVA) in 3196 Framingham Study participants aged ≥50 years. We defined HVA as absence of hypertension and pulse wave velocity <7.6 m/s (mean+2 SD of a reference sample aged <30 years). Overall, 566 (17.7%) individuals had HVA, with prevalence decreasing from 30.3% in people aged 50 to 59 to 1% in those aged ≥70 years. In regression models adjusted for physical activity, caloric intake, and traditional cardiovascular disease (CVD) risk factors, we observed that lower age, female sex, lower body mass index, use of lipid-lowering drugs, and absence of diabetes mellitus were cross-sectionally associated with HVA (P<0.001 for all). A unit increase in a cardiovascular health score (Life's Simple 7) was associated with 1.55-fold (95% confidence interval, 1.38-1.74) age- and sex-adjusted odds of HVA. During a follow-up of 9.6 years, 391 CVD events occurred. In Cox regression models adjusted for traditional CVD risk factors, including blood pressure, HVA was associated with a hazard ratio of 0.45 (95% confidence interval, 0.26-0.77) for CVD relative to absence of HVA. Although HVA is achievable in individuals acculturated to a Western lifestyle, maintaining normal vascular function beyond 70 years of age is challenging. Although our data are observational, our findings support prevention strategies targeting modifiable factors and behaviors and obesity, in particular, to prevent or delay vascular aging and the associated risk of CVD.
aging; blood pressure; epidemiology; hypertension; vascular stiffness
Association Between Endometriosis and Hypercholesterolemia or Hypertension.
Mu F, Rich-Edwards J, Rimm EB, Spiegelman D, Forman JP, Missmer SA.
An altered hormonal or chronic systemic inflammatory milieu characterizing endometriosis may result in a higher risk of hypercholesterolemia and hypertension. Conversely, elevated low-density lipoprotein in hypercholesterolemia and chronic systemic inflammation resulting from hypertension may increase the risk of endometriosis. We assessed the association of laparoscopically confirmed endometriosis with hypercholesterolemia and hypertension in a large prospective cohort study. In 1989, 116 430 registered female nurses aged 25 to 42 completed the baseline questionnaire and were followed for 20 years. Multivariable Cox proportional hazards models were applied. In 1989, there were 4244 women with laparoscopically confirmed endometriosis and 91 554 women without. After adjusting for demographic, anthropometric, family history, reproductive, dietary, and lifestyle risk factors prospectively, comparing women with laparoscopically confirmed endometriosis to women without, the relative risks were 1.25 (95% confidence interval, 1.21-1.30) for development of hypercholesterolemia and 1.14 (95% confidence interval, 1.09-1.18) for hypertension. Conversely, the relative risks of developing laparoscopically confirmed endometriosis were 1.22 (95% confidence interval, 1.15-1.31) comparing women with hypercholesterolemia to women without and 1.29 (95% confidence interval, 1.18-1.41) comparing women with hypertension to women without. The strength of associations of laparoscopically confirmed endometriosis with hypercholesterolemia or hypertension was strongest among women aged ≤40 and weakened as age increased (P values for interaction <0.001). We observed that ≈45% of the associations between endometriosis and hypercholesterolemia and hypertension could be accounted for by treatment factors after endometriosis diagnosis, including greater frequency of hysterectomy/oophorectomy and earlier age for this surgery. 
In this large cohort study, laparoscopically confirmed endometriosis was prospectively associated with increased risk of hypercholesterolemia and hypertension. Conversely, hypercholesterolemia and hypertension were prospectively associated with higher risk of laparoscopically confirmed endometriosis.
endometriosis; epidemiology; hypertension; inflammation
Systolic Blood Pressure Reduction and Risk of Cardiovascular Disease and Mortality: A Systematic Review and Network Meta-analysis.
Bundy JD, Li C, Stuchlik P, Bu X, Kelly TN, Mills KT, He H, Chen J, Whelton PK, He J.
JAMA Cardiol. 2017 May 31. doi: 10.1001/jamacardio.2017.1421. [Epub ahead of print]
http://sci-hub.cc/10.1001/jamacardio.2017.1421
Clinical trials have documented that lowering blood pressure reduces cardiovascular disease and premature deaths. However, the optimal target for reduction of systolic blood pressure (SBP) is uncertain.
To assess the association of mean achieved SBP levels with the risk of cardiovascular disease and all-cause mortality in adults with hypertension treated with antihypertensive therapy.
MEDLINE and EMBASE were searched from inception to December 15, 2015, supplemented by manual searches of the bibliographies of retrieved articles.
STUDY SELECTION:
Studies included were clinical trials with random allocation to an antihypertensive medication, control, or treatment target. Studies had to have reported a difference in mean achieved SBP of 5 mm Hg or more between comparison groups.
DATA EXTRACTION AND SYNTHESIS:
Data were extracted from each study independently and in duplicate by at least 2 investigators according to a standardized protocol. Network meta-analysis was used to obtain pooled randomized results comparing the association of each 5-mm Hg SBP category with clinical outcomes after adjusting for baseline risk.
Cardiovascular disease and all-cause mortality.
Forty-two trials, including 144 220 patients, met the eligibility criteria. In general, there were linear associations between mean achieved SBP and risk of cardiovascular disease and mortality, with the lowest risk at 120 to 124 mm Hg. Randomized groups with a mean achieved SBP of 120 to 124 mm Hg had a hazard ratio (HR) for major cardiovascular disease of 0.71 (95% CI, 0.60-0.83) compared with randomized groups with a mean achieved SBP of 130 to 134 mm Hg, an HR of 0.58 (95% CI, 0.48-0.72) compared with those with a mean achieved SBP of 140 to 144 mm Hg, an HR of 0.46 (95% CI, 0.34-0.63) compared with those with a mean achieved SBP of 150 to 154 mm Hg, and an HR of 0.36 (95% CI, 0.26-0.51) compared with those with a mean achieved SBP of 160 mm Hg or more. Likewise, randomized groups with a mean achieved SBP of 120 to 124 mm Hg had an HR for all-cause mortality of 0.73 (95% CI, 0.58-0.93) compared with randomized groups with a mean achieved SBP of 130 to 134 mm Hg, an HR of 0.59 (95% CI, 0.45-0.77) compared with those with a mean achieved SBP of 140 to 144 mm Hg, an HR of 0.51 (95% CI, 0.36-0.71) compared with those with a mean achieved SBP of 150 to 154 mm Hg, and an HR of 0.47 (95% CI, 0.32-0.67) compared with those with a mean achieved SBP of 160 mm Hg or more.
CONCLUSIONS AND RELEVANCE:
This study suggests that reducing SBP to levels below currently recommended targets significantly reduces the risk of cardiovascular disease and all-cause mortality. These findings support more intensive control of SBP among adults with hypertension.
Baseline dietary intake and physical activity of Japanese American men in relation to glucose tolerance at 5-year follow-up.
Leonetti DL, Tsunehara CH, Wahl PW, Fujimoto WY.
Am J Hum Biol. 1996;8(1):55-67. doi: 10.1002/(SICI)1520-6300(1996)8:1<55::AID-AJHB5>3.0.CO;2-P.
Japanese American men (n = 124), with normal glucose tolerance (NGT, n = 69) or impaired glucose tolerance (IGT, n = 55) at baseline, were studied for effects of baseline dietary intake and physical activity on glucose tolerance at baseline and at 5-year follow-up. At baseline, both NGT and IGT men with positive family history of diabetes (FH) showed high intakes of animal fat and protein, but only the NGT men countered this with high levels of energy expenditure. In the total sample at 5-year follow-up, 2-hour plasma glucose was significantly related to intake of animal fat (AF), partial correlation r = 0.32, P < 0.001, adjusted for total energy intake, age, self-reported health, body mass index, FH, and baseline glucose tolerance category. Energy expenditure (EE) was not related to 5-year 2-hour plasma glucose in the total sample, but displayed a relationship with 5-year 2-hour plasma glucose in those IGT (r = -0.27, P < 0.05), but not in those NGT at baseline, and in those with positive FH (r = -0.33, P < 0.05), but not in those with negative FH. Additionally, AF showed a relationship to 5-year 2-hour plasma glucose only for those in the lowest (r = 0.37, P < 0.05) and middle (r = 0.33, P < 0.05) tertiles, but not in the highest tertile of EE. For baseline IGT men, 5-year 2-hour plasma glucose was related to "high vs. low risk" categories of AF intake and EE, but only in men with a positive FH (AF ≥ 25 vs. < 25 g/day: 180.1 ± 38.6 vs. 143.6 ± 39.7 mg/dl, P = 0.048; EE ≤ 2,000 kcal/week vs. > 2,000 kcal/week, 189.9 ± 39.2 vs. 150.8 ± 37.4 mg/dl, P = 0.028; with risk categories combined, i.e., both high, mixed, both low: 192.0 ± 41.3, 165.4 ± 28.4, 139.4 ± 40.9 mg/dl, P = 0.045, linear trend, P = 0.014). Thus, high AF intake and low EE may have long-range detrimental effects on glucose tolerance, especially for those with IGT and positive FH.
Improving risk estimates for metabolically healthy obesity and mortality using a refined healthy reference group.
Hamer M, Johnson W, Bell JA.
Eur J Endocrinol. 2017 May 31. pii: EJE-17-0217. doi: 10.1530/EJE-17-0217. [Epub ahead of print]
We aimed to re-examine mortality risk estimates for metabolically healthy obesity by using a 'stable' healthy non-obese referent group.
Design: Prospective cohort study.
Methods: Participants were 5,427 men and women (aged 65.9 ± 9.4 years, 45.9% men) from the English Longitudinal Study of Ageing. Obesity was defined as body mass index ≥ 30 kg/m2 (vs. non-obese as below this threshold). Based on blood pressure, HDL-cholesterol, triglycerides, glycated haemoglobin, and C-reactive protein, participants were classified as 'healthy' (0 or 1 metabolic abnormality) or 'unhealthy' (≥ 2 metabolic abnormalities).
Results: 671 deaths were observed over an average follow-up of 8 years. When defining the referent group based on 1 clinical assessment, the unhealthy non-obese (hazard ratio = 1.22; 95% CI, 1.01, 1.45) and unhealthy obese (1.29; 1.05, 1.60) were at greater risk of all-cause mortality compared to the healthy non-obese, yet no excess risk was seen in the healthy obese (1.14; 0.83, 1.52). When we re-defined the referent group based on 2 clinical assessments, effect estimates were accentuated and the healthy obese were at increased risk of mortality (2.67; 1.64, 4.34).
Conclusions: An unstable healthy referent group may make 'healthy obesity' appear less harmful by obscuring the benefits of remaining never obese without metabolic dysfunction.
Common acne medicine reduces risk of multiple sclerosis, Calgary researchers find
'It's a big discovery because it's a cheap, generic, oral medication,' says University of Calgary neurologist
By Robson Fletcher, CBC News Posted: May 31, 2017
http://www.cbc.ca/news/canada/calgary/minocycline-ms-multiple-sclerosis-calgary-research-1.4139877
Minocycline in Multiple Sclerosis - Compelling Results but Too Early to Tell.
Xia Z, Friedlander RM.
N Engl J Med. 2017 Jun 1;376(22):2191-2193. doi: 10.1056/NEJMe1703230. No abstract available.
This article has no abstract; the first 100 words appear below.
Care for persons with multiple sclerosis has evolved as options for disease-modifying treatments have expanded. The high cost of current treatments1 has stimulated interest in repurposing existing, lower-cost drugs as new therapies for multiple sclerosis. One appealing candidate is minocycline, a relatively safe and inexpensive synthetic tetracycline that crosses the blood–brain barrier.2 Minocycline has antiinflammatory and antiapoptotic properties beyond its antibiotic activity. As an antiinflammatory agent, minocycline inhibits reactive microgliosis, production of interleukin-1β, up-regulation of inducible nitric oxide synthase, and activation of CD4+ T cells.3–5 As an antiapoptotic agent, the drug impedes cellular stress–mediated release of caspase-dependent and caspase-independent . . .
Endocrine responses and acute mTOR pathway phosphorylation to resistance exercise with leucine and whey.
Lane MT, Herda TJ, Fry AC, Cooper MA, Andre MJ, Gallagher PM.
Biol Sport. 2017 Jun;34(2):197-203. doi: 10.5114/biolsport.2017.65339. Epub 2017 Jan 20.
Leucine ingestion reportedly activates the mTOR pathway in skeletal muscle, contributing to a hypertrophy response. The purpose of the study was to compare the post-resistance exercise effects of leucine and whey protein supplementation on endocrine responses and muscle mTOR pathway phosphorylation. On visit 1, subjects (X±SD; n=20; age=27.8±2.8yrs) provided baseline blood samples for analysis of cortisol, glucose and insulin; a muscle biopsy of the vastus lateralis muscle to assess mTOR signaling pathway phosphorylation; and were tested for maximum strength on the leg press and leg extension exercises. For visits 2 and 3, subjects were randomized in a double-blind crossover design to ingest either leucine and whey protein (10g+10g; supplement) or a non-caloric placebo. During these visits, 5 sets of 10 repetitions were performed on both exercises, immediately followed by ingestion of the supplement or placebo. Blood was sampled 30 min post-, and a muscle biopsy 45 min post-exercise. Western blots quantified total and phosphorylated proteins. Insulin increased (α<.05) with supplementation with no change in glucose compared to placebo. Relative phosphorylation of AKT and rpS6 were greater with leucine and whey supplementation compared to placebo. Supplementation of leucine and whey protein immediately after heavy resistance exercise increases anabolic signaling in human skeletal muscle.
AKT; Hypertrophy; Leucine; Resistance training; mTOR
Vitamin B6 Intake and the Risk of Colorectal Cancer: A Meta-Analysis of Prospective Cohort Studies.
Jia K, Wang R, Tian J.
Nutr Cancer. 2017 Jun 1:1-9. doi: 10.1080/01635581.2017.1324633. [Epub ahead of print]
We performed this meta-analysis to estimate the association between vitamin B6 intake and colorectal cancer risk.
Prospective cohort studies on vitamin B6 intake and colorectal cancer risk were identified by searching databases from the period of 1960 to 2016. Results from individual studies were synthetically combined using Stata 13.0 software.
A total of 10 prospective cohort studies including 13 data sets were included in our meta-analysis, containing 7,817 cases and 784,550 subjects. The combined relative risks (RR) of colorectal cancer for the highest vitamin B6 intake compared with the lowest vitamin B6 intake was 0.88 [95% confidence intervals (CIs): 0.77-1.02]. Dose-response meta-analysis based on five eligible studies showed that for each additional 3 and 5 mg of vitamin B6 intake, the risk would decrease by 11% (RR: 0.89, 95%CI: 0.81-0.98) and 17% (RR: 0.83, 95%CI: 0.71-0.97), respectively. Little evidence of publication bias was found.
This meta-analysis provides evidence of a nonsignificant decrease in colorectal cancer risk associated with the high level of vitamin B6 intake, but the risk in dose-response analysis is significant. However, the latter finding is based on a limited number of studies, which should be interpreted with caution.
Association between blood pressure and Alzheimer disease measured up to 27 years prior to diagnosis: the HUNT Study.
Gabin JM, Tambs K, Saltvedt I, Sund E, Holmen J.
Alzheimers Res Ther. 2017 May 31;9(1):37. doi: 10.1186/s13195-017-0262-x.
http://alzres.biomedcentral.com.sci-hub.cc/articles/10.1186/s13195-017-0262-x
A lot of attention has been paid to the relationship of blood pressure and dementia because epidemiological research has reported conflicting evidence. Observational data have shown that midlife hypertension is a risk factor for cognitive decline and dementia later in life, whereas there is evidence that low blood pressure is predictive in later life. The aim of the present study was to examine the association between dementia and blood pressure measured up to 27 years (mean 17.6 years) prior to ascertainment.
In Nord-Trøndelag County, Norway, incident dementia data were collected during 1995-2011, and the diagnoses were validated by a panel of experts in the field. By using the subjects' personal identification numbers, the dementia data were linked to data from the Nord-Trøndelag Health Study (the HUNT Study), a large, population-based health study performed in 1984-1986 (HUNT 1) and 1995-1997 (HUNT 2). A total of 24,638 participants of the HUNT Study were included in the present study, 579 of whom were diagnosed with Alzheimer disease, mixed Alzheimer/vascular dementia, or vascular dementia. Multiple logistic regression analyses were conducted to analyze the association between dementia and blood pressure data from HUNT 1 and HUNT 2.
Over the age of 60 years, consistent inverse associations were observed between systolic blood pressure and all-cause dementia, mixed Alzheimer/vascular dementia, and Alzheimer disease, but not with vascular dementia, when adjusting for age, sex, education, and other relevant covariates. This was observed for systolic blood pressure in both HUNT 1 and HUNT 2, regardless of antihypertensive medication use. There was an adverse association between systolic blood pressure, pulse pressure, and Alzheimer disease in individuals treated with antihypertensive medication under the age of 60 years.
Our data are in line with those in previous studies demonstrating an inverse association between dementia and systolic blood pressure in individuals over the age of 60 years. We cannot exclude a survival effect, however. Among middle-aged subjects (<60 years), elevated systolic blood pressure and pulse pressure were associated with eventual Alzheimer disease in individuals who reported using antihypertensive medication.
Alzheimer disease; Blood pressure; Epidemiology; Prospective case cohort; Risk factors; Vascular dementia
Blocking FSH induces thermogenic adipose tissue and reduces body fat.
Liu P, Ji Y, Yuen T, Rendina-Ruedy E, DeMambro VE, Dhawan S, Abu-Amer W, Izadmehr S, Zhou B, Shin AC, Latif R, Thangeswaran P, Gupta A, Li J, Shnayder V, Robinson ST, Yu YE, Zhang X, Yang F, Lu P, Zhou Y, Zhu LL, Oberlin DJ, Davies TF, Reagan MR, Brown A, Kumar TR, Epstein S, Iqbal J, Avadhani NG, New MI, Molina H, van Klinken JB, Guo EX, Buettner C, Haider S, Bian Z, Sun L, Rosen CJ, Zaidi M.
Nature. 2017 Jun 1;546(7656):107-112. doi: 10.1038/nature22342. Epub 2017 May 24.
Menopause is associated with bone loss and enhanced visceral adiposity. A polyclonal antibody that targets the β-subunit of the pituitary hormone follicle-stimulating hormone (Fsh) increases bone mass in mice. Here, we report that this antibody sharply reduces adipose tissue in wild-type mice, phenocopying genetic haploinsufficiency for the Fsh receptor gene Fshr. The antibody also causes profound beiging, increases cellular mitochondrial density, activates brown adipose tissue and enhances thermogenesis. These actions result from the specific binding of the antibody to the β-subunit of Fsh to block its action. Our studies uncover opportunities for simultaneously treating obesity and osteoporosis.
Pregnancy increases stroke risk up to 1 year postpartum and reduces long-term risk.
Cheng CA, Lee JT, Lin HC, Lin HC, Chung CH, Lin FH, Tsao CH, Wu YF, Chien WC, Chiu HW.
QJM. 2017 Jun 1;110(6):355-360. doi: 10.1093/qjmed/hcw222.
http://sci-hub.cc/10.1093/qjmed/hcw222
The incidence of stroke in pregnant women is low but trending upward. There are few studies of the topic in women of Asian ethnicity.
We aim to evaluate stroke risk in Asian women during and after pregnancy.
Using the Taiwan National Health Insurance database, we designed a retrospective study that included 18-45-year-old pregnant women between the years 2000 and 2010. We selected a 1:1 age-matched control group of non-pregnant women. The endpoint was any type of stroke during pregnancy or the postpartum period; otherwise, the patients were tracked until 31 December 2010.
The risk factors for stroke were identified using Cox proportional regression to calculate the hazard ratio (HR) with a 95% CI compared with the control group.
The incidence of stroke within 1 year postpartum was 71/100,000. The risk of postpartum stroke within 1 year was an HR of 1.208 (95% CI: 1.001-5.129). The occurrence of stroke was associated with hypertension, diabetes mellitus, coagulation disorders, migraine, obesity, cerebrovascular malformation and parity. Women with third and fourth parity carried increased risks of 13.3% and 2.5%, respectively, compared with first parity women. In long-term follow-ups, stroke risk was significantly lower, with an adjusted HR of 0.362 (95% CI: 0.269-0.489).
The risk of stroke was elevated during the first year postpartum, but lower in subsequent years. Stroke risk increased in multiparous (≥3) women. Physicians should be on alert for pregnancy complications and ensure appropriate management to prevent postpartum stroke.
Pre-diagnostic copper and zinc biomarkers and colorectal cancer risk in the European Prospective Investigation into Cancer and Nutrition cohort.
Stepien M, Jenab M, Freisling H, Becker NP, Czuban M, Tjønneland A, Olsen A, Overvad K, Boutron-Ruault MC, Mancini FR, Savoye I, Katzke V, Kühn T, Boeing H, Iqbal K, Trichopoulou A, Bamia C, Orfanos P, Palli D, Sieri S, Tumino R, Naccarati A, Panico S, Bueno-de-Mesquita HBA, Peeters PH, Weiderpass E, Merino S, Jakszyn P, Sanchez MJ, Dorronsoro M, Huerta JM, Barricarte A, Boden S, van Guelpen B, Wareham N, Khaw KT, Bradbury KE, Cross AJ, Schomburg L, Hughes DJ.
Carcinogenesis. 2017 Jun 1. doi: 10.1093/carcin/bgx051. [Epub ahead of print]
Adequate intake of copper and zinc, two essential micronutrients, is important for antioxidant functions. Their imbalance may have implications for the development of diseases like colorectal cancer (CRC), where oxidative stress is thought to be etiologically involved. As evidence from prospective epidemiologic studies is lacking, we conducted a case-control study nested within the European Prospective Investigation into Cancer and Nutrition (EPIC) cohort to investigate the association between circulating levels of copper and zinc, and their calculated ratio, with risk of CRC development. Copper and zinc levels were measured by reflection X-ray fluorescence spectrometer in 966 cases and 966 matched controls. Multivariable adjusted odds ratios (OR) and 95% confidence intervals (CI) were calculated using conditional logistic regression and are presented for the 5th vs. 1st quintile. Higher circulating concentration of copper was associated with a raised CRC risk (OR=1.50; 95%CI: 1.06, 2.13; p-trend=0.02), while an inverse association with cancer risk was observed for higher zinc levels (OR=0.65; 95%CI: 0.43, 0.97; p-trend=0.07). Consequently, the ratio of copper/zinc was positively associated with CRC (OR=1.70; 95%CI: 1.20, 2.40; p-trend=0.0005). In subgroup analyses by follow-up time, the associations remained statistically significant only in those diagnosed within two years of blood collection. In conclusion, these data suggest that copper levels, or copper levels in relation to zinc (copper to zinc ratio), become imbalanced in the process of CRC development. Mechanistic studies into the underlying mechanisms of regulation and action are required to further examine a possible role for higher copper and copper/zinc ratio levels in CRC development and progression.
Efficacy of maternal influenza vaccination against all-cause lower respiratory tract infection hospitalizations in young infants: Results from a randomized controlled trial.
Nunes MC, Cutland CL, Jones S, Downs S, Weinberg A, Ortiz JR, Neuzil KM, Simões EAF, Klugman KP, Madhi SA.
Clin Infect Dis. 2017 May 29. doi: 10.1093/cid/cix497. [Epub ahead of print]
Influenza immunization of pregnant women protects their young infants against laboratory-confirmed influenza infection. Influenza infection might predispose to subsequent bacterial infections that cause severe pneumonia. In a secondary analysis of a randomized clinical trial (RCT), we evaluated the effect of maternal vaccination on infant hospitalizations for all-cause acute lower respiratory tract infection (ALRI).
Infants born to women who participated in a double-blind placebo-controlled RCT in 2011 and 2012 on the efficacy of trivalent inactivated influenza vaccine (IIV) during pregnancy were followed during the first 6 months of life.
The study included 1026 infants born to IIV-recipients and 1023 born to placebo-recipients. There were 52 ALRI hospitalizations (median age 72 days). The incidence (per 1,000 infant-months) of ALRI hospitalizations was lower in infants born to IIV-recipients (3.4 [95%CI: 2.2, 5.4], 19 cases) compared to placebo-recipients (6.0 [95%CI: 4.3, 8.5], 33 cases), with a vaccine efficacy of 43.1% (p=0.050). Thirty of the ALRI hospitalizations occurred during the first 90 days of life, 9 in the IIV-group (3.0 [95%CI: 1.6, 5.7]) and 21 in the placebo-group (7.0 [95%CI: 4.6, 10.8]; incidence rate ratio: 0.43 [95%CI: 0.19, 0.93]), for a vaccine efficacy of 57.5% (p=0.032). The incidence of ALRI hospitalizations was similar in the IIV- and placebo-groups for infants older than 3 months. Forty-four of the hospitalized infants were tested for influenza virus infection and one tested positive.
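A side note on the arithmetic (my own back-of-envelope check, not from the paper): vaccine efficacy for a rate outcome is typically one minus the incidence rate ratio, so the reported group incidences approximately reproduce the stated efficacies:

```python
# Back-of-envelope check: vaccine efficacy = 1 - incidence rate ratio.
# Incidences are per 1,000 infant-months, as reported in the abstract.

def vaccine_efficacy(rate_vaccinated: float, rate_placebo: float) -> float:
    """Return vaccine efficacy (%) from two group incidence rates."""
    return (1.0 - rate_vaccinated / rate_placebo) * 100.0

# First 6 months of life: 3.4 vs 6.0 per 1,000 infant-months
ve_6mo = vaccine_efficacy(3.4, 6.0)   # ~43%; the paper reports 43.1%

# First 90 days of life: 3.0 vs 7.0 per 1,000 infant-months
ve_90d = vaccine_efficacy(3.0, 7.0)   # ~57%; the paper reports 57.5%

print(round(ve_6mo, 1), round(ve_90d, 1))
```

The few-tenths-of-a-percent gaps are expected, since the published estimates use exact case counts and person-time rather than the rounded incidences quoted above.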
Using a RCT as a vaccine-probe, influenza vaccination during pregnancy decreased all-cause ALRI hospitalization during the first 3 months of life, suggesting possible protection against subsequent bacterial infections that influenza infection might predispose to.
efficacy; hospitalizations; influenza vaccine; lower respiratory tract infections; phase III trial
Mortality rates are lower in SIAD than in hypervolaemic or hypovolaemic hyponatraemia: results of a prospective observational study.
Cuesta M, Garrahy A, Slattery D, Gupta S, Hannon AM, McGurren K, Sherlock M, Tormey W, Thompson CJ.
Clin Endocrinol (Oxf). 2017 Jun 2. doi: 10.1111/cen.13388. [Epub ahead of print]
Hyponatremia is associated with increased mortality, but the mortality associated specifically with SIAD is not known. We hypothesised that mortality in SIAD was elevated, but that it was less than in hypervolemic (HEN) or hypovolemic (HON) hyponatremia.
Mortality rates are presented as risk ratios (RR), with 95% confidence intervals (CI), and compared to normonatremic controls (NN).
Prospective, single-center, non-interventional study of all patients with hyponatremia (≤130 mmol/l) admitted to hospital.
1323 admissions with hyponatremia and 1136 contemporaneous NN controls were prospectively evaluated. 431 (32.6%) hyponatraemic patients had HON, 573 (43.3%) had SIAD, and 275 (20.8%) had HEN. In-patient mortality was higher in hyponatremia than in NN (9.1% vs 3.3%, p<0.0001). The RRs for in-hospital mortality compared to NN were: SIAD, 1.76 (95% CI 1.08-2.8, p=0.02); HON, 2.77 (95% CI 1.8-4.3, p<0.0001); and HEN, 4.9 (95% CI 3.2-7.4, p<0.0001). The mortality rate was higher in HEN (RR 2.85; 95% CI 1.86-4.37, p<0.0001) and in HON (RR 1.6; 95% CI 1.04-2.52; p=0.03) when compared to SIAD. The Charlson Comorbidity Index was lower in SIAD than in eunatraemic patients (p<0.0001). 9/121 (7.4%) patients died with plasma sodium <125 mmol/l and 4 (3.3%) with plasma sodium <120 mmol/l. However, 69/121 (57%) patients died with a plasma sodium above 133 mmol/l.
We confirmed higher all-cause mortality in hyponatremia than in NN. Mortality was higher in SIAD than in normonatraemia, and was not explained on the basis of co-morbidities. Mortality was higher in HON and HEN than in SIAD. Mortality rates reported for all-cause hyponatremia in the medical literature are not applicable to SIAD.
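As a rough consistency check (mine, not the authors'): the risk ratios versus normonatremic controls imply the head-to-head comparisons, up to adjustment and rounding, since RR(HEN vs SIAD) ≈ RR(HEN vs NN) / RR(SIAD vs NN):

```python
# Rough consistency check of the reported risk ratios (RRs).
# Each RR below is versus normonatremic (NN) controls, from the abstract.
rr_vs_nn = {"SIAD": 1.76, "HON": 2.77, "HEN": 4.9}

# Implied head-to-head RRs versus SIAD; the paper reports HEN 2.85 and HON 1.6.
implied_hen_vs_siad = rr_vs_nn["HEN"] / rr_vs_nn["SIAD"]  # ~2.78
implied_hon_vs_siad = rr_vs_nn["HON"] / rr_vs_nn["SIAD"]  # ~1.57

print(round(implied_hen_vs_siad, 2), round(implied_hon_vs_siad, 2))
```

The small gaps (2.78 vs 2.85; 1.57 vs 1.6) are expected, because the published head-to-head estimates are modelled directly rather than obtained by dividing the NN-referenced ratios.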
SIAD ; SIADH ; Hyponatremia; Mortality
mTORC1 activity repression by late endosomal phosphatidylinositol 3,4-bisphosphate.
Marat AL, Wallroth A, Lo WT, Müller R, Norata GD, Falasca M, Schultz C, Haucke V.
Science. 2017 Jun 2;356(6341):968-972. doi: 10.1126/science.aaf8310.
Nutrient sensing by mechanistic target of rapamycin complex 1 (mTORC1) on lysosomes and late endosomes (LyLEs) regulates cell growth. Many factors stimulate mTORC1 activity, including the production of phosphatidylinositol 3,4,5-trisphosphate [PI(3,4,5)P3] by class I phosphatidylinositol 3-kinases (PI3Ks) at the plasma membrane. We investigated mechanisms that repress mTORC1 under conditions of growth factor deprivation. We identified phosphatidylinositol 3,4-bisphosphate [PI(3,4)P2], synthesized by class II PI3K β (PI3KC2β) at LyLEs, as a negative regulator of mTORC1, whereas loss of PI3KC2β hyperactivated mTORC1. Growth factor deprivation induced the association of PI3KC2β with the Raptor subunit of mTORC1. Local PI(3,4)P2 synthesis triggered repression of mTORC1 activity through association of Raptor with inhibitory 14-3-3 proteins. These results unravel an unexpected function for local PI(3,4)P2 production in shutting off mTORC1.
Achieved blood pressure and cardiovascular outcomes in high-risk patients: results from ONTARGET and TRANSCEND trials.
Böhm M, Schumacher H, Teo KK, Lonn EM, Mahfoud F, Mann JF, Mancia G, Redon J, Schmieder RE, Sliwa K, Weber MA, Williams B, Yusuf S.
Lancet. 2017 Apr 5. pii: S0140-6736(17)30754-7. doi: 10.1016/S0140-6736(17)30754-7. [Epub ahead of print]
Studies have challenged the appropriateness of accepted blood pressure targets. We hypothesised that different levels of low blood pressure are associated with benefit for some, but harm for other outcomes.
In this analysis, we assessed the previously reported outcome data from high-risk patients aged 55 years or older with a history of cardiovascular disease, 70% of whom had hypertension, from the ONTARGET and TRANSCEND trials investigating ramipril, telmisartan, and their combination, with a median follow-up of 56 months. Detailed descriptions of randomisation and intervention have already been reported. We analysed the associations between mean blood pressure achieved on treatment; prerandomisation baseline blood pressure; or time-updated blood pressure (last on treatment value before an event) on the composite outcome of cardiovascular death, myocardial infarction, stroke, and hospital admission for heart failure; the components of the composite outcome; and all-cause death. Analysis was done by Cox regression analysis, ANOVA, and χ2.
Recruitment for ONTARGET took place between Dec 1, 2001, and July 31, 2008. TRANSCEND took place between Nov 1, 2001, and May 30, 2004. 30 937 patients were recruited from 733 centres in 40 countries and followed up for a median of 56 months. In ONTARGET, 25 127 patients known to be tolerant to angiotensin-converting-enzyme (ACE) inhibitors were randomly assigned after a run-in period to oral ramipril 10 mg/day (n=8407), telmisartan 80 mg/day (n=8386), or the combination of both (n=8334). In TRANSCEND, 5810 patients who were intolerant to ACE inhibitors were randomly assigned to oral telmisartan 80 mg/day (n=2903) or placebo (n=2907). Baseline systolic blood pressure (SBP) of 140 mm Hg or higher was associated with greater incidence of all outcomes compared with 120 mm Hg to less than 140 mm Hg. By contrast, a baseline diastolic blood pressure (DBP) less than 70 mm Hg was associated with the highest risk for most outcomes compared with all DBP categories of 70 mm Hg or more. In 4052 patients with SBP less than 120 mm Hg on treatment, the risks of the composite cardiovascular outcome (adjusted hazard ratio [HR] 1·14, 95% CI 1·03-1·26), cardiovascular death (1·29, 1·12-1·49), and all deaths (1·28, 1·15-1·42) were increased compared with those in whom SBP was 120-140 mm Hg during treatment (HR 1 for all outcomes, n=16 099). No harm or benefit was observed for myocardial infarction, stroke, or hospital admission for heart failure. Mean achieved SBP more accurately predicted outcomes than baseline or time-updated SBP, and was associated with the lowest risk at approximately 130 mm Hg; at 110-120 mm Hg, risk increased for the combined outcome, cardiovascular death, and all-cause death, but not for stroke.
A mean DBP of less than 70 mm Hg (n=5352) during treatment was associated with greater risk of the composite primary outcome (HR 1·31, 95% CI 1·20-1·42), myocardial infarction (1·55, 1·33-1·80), hospital admission for heart failure (1·59, 1·36-1·86), and all-cause death (1·16, 1·06-1·28) than a DBP of 70-80 mm Hg (n=14 305). A pretreatment and mean on-treatment DBP of about 75 mm Hg was associated with the lowest risk.
Mean achieved SBP less than 120 mm Hg during treatment was associated with increased risk of cardiovascular outcomes except for myocardial infarction and stroke. Similar patterns were observed for DBP less than 70 mm Hg, plus increased risk for myocardial infarction and hospital admission for heart failure. Very low blood pressure achieved on treatment was associated with increased risks of several cardiovascular disease events. These data suggest that the lowest blood pressure possible is not necessarily the optimal target for high-risk patients, although it is not possible to rule out some effect of reverse causality.
Boehringer Ingelheim.
A pilot study into a possible relationship between diet and stuttering.
Hum J, Rietveld T, Wiedijk P, van Lieshout P.
J Fluency Disord. 2017 Jun;52:25-36. doi: 10.1016/j.jfludis.2017.02.004. Epub 2017 Mar 2.
http://sci-hub.cc/10.1016/j.jfludis.2017.02.004
There are theoretical and empirical reasons to consider a potential role for copper metabolism in the brain in how it could influence stuttering. However, a link between stuttering and dietary intake has never been researched in a systematic way. This pilot study therefore aimed to explore a possible association between ingested amounts of copper and thiamine (vitamin B1) with stuttering frequency using a double blind cross-over longitudinal paradigm.
19 adults who stutter, aged between 20 and 51 years, filled out an online survey for 9 consecutive weeks. The survey consisted of self-assessed fluency and mood state scales, as well as food journals. After 4 weeks, the participants consumed either copper or thiamine supplements for 2 weeks, followed by a 1-week washout period and another 2-week period taking the other supplement. Formal speech assessments were done pre/post baseline and at the end of each supplement intake. Participants were not informed about the nature of the supplements during the experiment, and the investigators were blinded to the order of the supplements.
The results demonstrated that copper and thiamine had no measurable effect on the amount of stuttering (self and formal assessments) but there was a moderate, significant correlation between mood state and fluency.
The findings do not support notions of dietary influences of ingested copper or thiamine on stuttering but do provide modest support for a relationship between variations in stuttering and self-perceived anxiety.
Anxiety; Copper; Diet; Stuttering; Thiamine
Low Bone Density and Bisphosphonate Use and the Risk of Kidney Stones.
Prochaska M, Taylor E, Vaidya A, Curhan G.
Clin J Am Soc Nephrol. 2017 Jun 2. pii: CJN.01420217. doi: 10.2215/CJN.01420217. [Epub ahead of print]
BACKGROUND AND OBJECTIVES:
Previous studies have demonstrated lower bone density in patients with kidney stones, but no longitudinal studies have evaluated kidney stone risk in individuals with low bone density. Small studies with short follow-up reported reduced 24-hour urine calcium excretion with bisphosphonate use. We examined history of low bone density and bisphosphonate use and the risk of incident kidney stone as well as the association with 24-hour calcium excretion.
DESIGN, SETTING, PARTICIPANTS, & MEASUREMENTS:
We conducted a prospective analysis of 96,092 women in the Nurses' Health Study II. We used Cox proportional hazards models to adjust for age, body mass index, thiazide use, fluid intake, supplemental calcium use, and dietary factors. We also conducted a cross-sectional analysis of 2294 participants using multivariable linear regression to compare 24-hour urinary calcium excretion between participants with and without a history of low bone density, and among 458 participants with low bone density, with and without bisphosphonate use.
We identified 2564 incident stones during 1,179,860 person-years of follow-up. The multivariable adjusted relative risk for an incident kidney stone for participants with history of low bone density compared with participants without was 1.39 (95% confidence interval [95% CI], 1.20 to 1.62). Among participants with low bone density, the multivariable adjusted relative risk for an incident kidney stone for bisphosphonate users was 0.68 (95% CI, 0.48 to 0.98). In the cross-sectional analysis of 24-hour urine calcium excretion, the multivariable adjusted mean difference in 24-hour calcium was 10 mg/d (95% CI, 1 to 19) higher for participants with history of low bone density. However, among participants with history of low bone density, there was no association between bisphosphonate use and 24-hour calcium with multivariable adjusted mean difference in 24-hour calcium of -2 mg/d (95% CI, -25 to 20).
Low bone density is an independent risk factor for incident kidney stone and is associated with higher 24-hour urine calcium excretion. Among participants with low bone density, bisphosphonate use was associated with lower risk of incident kidney stone but was not independently associated with 24-hour urine calcium excretion.
Body Mass Index; Bone Density; Calcium, Dietary; Cross-Sectional Studies; Diphosphonates; Epidemiologic Studies; Female; Follow-Up Studies; Humans; Kidney Calculi; Linear Models; Proportional Hazards Models; Prospective Studies; Risk Assessment; Thiazides; risk factors
Impact of breakfast skipping compared with dinner skipping on regulation of energy balance and metabolic risk.
Nas A, Mirza N, Hägele F, Kahlhöfer J, Keller J, Rising R, Kufer TA, Bosy-Westphal A.
Am J Clin Nutr. 2017 Jun;105(6):1351-1361. doi: 10.3945/ajcn.116.151332. Epub 2017 May 10.
Background: Meal skipping has become an increasing trend of the modern lifestyle that may lead to obesity and type 2 diabetes. Objective: We investigated whether the timing of meal skipping impacts these risks by affecting circadian regulation of energy balance, glucose metabolism, and postprandial inflammatory responses. Design: In a randomized controlled crossover trial, 17 participants [body mass index (in kg/m2): 23.7 ± 4.6] underwent 3 isocaloric 24-h interventions (55%, 30%, and 15% carbohydrate, fat, and protein, respectively): a breakfast skipping day (BSD) and a dinner skipping day (DSD) separated by a conventional 3-meal-structure day (control). Energy and macronutrient balance was measured in a respiration chamber. Postprandial glucose, insulin, and inflammatory responses in leukocytes as well as 24-h glycemia and insulin secretion were analyzed. Results: When compared with the 3-meal control, 24-h energy expenditure was higher on both skipping days (BSD: +41 kcal/d; DSD: +91 kcal/d; both P < 0.01), whereas fat oxidation increased on the BSD only (+16 g/d; P < 0.001). Spontaneous physical activity, 24-h glycemia, and 24-h insulin secretion did not differ between intervention days. The postprandial homeostasis model assessment index (+54%) and glucose concentrations after lunch (+46%) were, however, higher on the BSD than on the DSD (both P < 0.05). Concomitantly, a longer fasting period with breakfast skipping also increased the inflammatory potential of peripheral blood cells after lunch. Conclusions: Compared with 3 meals/d, meal skipping increased energy expenditure. In contrast, higher postprandial insulin concentrations and increased fat oxidation with breakfast skipping suggest the development of metabolic inflexibility in response to prolonged fasting that may in the long term lead to low-grade inflammation and impaired glucose homeostasis.
energy balance; insulin sensitivity; macronutrient oxidation; meal frequency; meal skipping
Biomarker-calibrated nutrient intake and healthy diet index associations with mortality risks among older and frail women from the Women's Health Initiative.
Zaslavsky O, Zelber-Sagi S, Hebert JR, Steck SE, Shivappa N, Tabung FK, Wirth MD, Bu Y, Shikany JM, Orchard T, Wallace RB, Snetselaar L, Tinker LF.
Am J Clin Nutr. 2017 Jun;105(6):1399-1407. doi: 10.3945/ajcn.116.151530. Epub 2017 Apr 19.
Background: Although studies to date have confirmed the association between nutrition and frailty, the impact of dietary intake and dietary patterns on survivorship in those with frailty is yet to be examined in a well-powered cohort with validated frailty status. Moreover, previous studies were limited by measurement error from dietary self-reports. Objective: We derived biomarker-calibrated dietary energy and protein intakes to address dietary self-report error. Using these data, we then evaluated the association of mortality in older women with frailty and dietary intake and healthy diet indexes, such as the alternate Mediterranean Diet (aMED), the Dietary Approaches to Stop Hypertension (DASH) score, and the Dietary Inflammatory Index (DII). Design: The analytic sample included 10,034 women aged 65-84 y with frailty and complete dietary data from the Women's Health Initiative Observational Study. Frailty was assessed with modified Fried's criteria. Dietary data were collected by food-frequency questionnaire. Results: Over a mean follow-up period of 12.4 y, 3259 (31%) deaths occurred. The HRs showed progressively decreased rates of mortality in women with higher calibrated dietary energy intakes (P-trend = 0.003), higher calibrated dietary protein intakes (P-trend = 0.03), higher aMED scores (P-trend = 0.006), and higher DASH scores (P-trend = 0.02). Although the adjusted point estimates of HRs (95% CIs) for frail women scoring in the second, third, and fourth quartiles on DII measures were 1.15 (1.03, 1.27), 1.28 (1.15, 1.42), and 1.24 (1.12, 1.38), respectively, compared with women in the first quartile, no overall effect was observed across quartiles (P-trend = 0.35). Subgroup analyses by chronic morbidity or smoking status or by excluding women with early death did not substantially change these findings. Conclusions: The current study highlights the importance of nutrition in older, frail women. Diet quality and quantity should be considered in managing persons with frailty.
aging; biomarker; frailty; inflammation; mortality
Food groups and risk of all-cause mortality: a systematic review and meta-analysis of prospective studies.
Schwingshackl L, Schwedhelm C, Hoffmann G, Lampousi AM, Knüppel S, Iqbal K, Bechthold A, Schlesinger S, Boeing H.
PMID: 28446499
http://ajcn.nutrition.org/content/105/6/1462.abstract?etoc
http://ajcn.nutrition.org/content/105/6/1462.full.pdf+html
Background: Suboptimal diet is one of the most important factors in preventing early death and disability worldwide. Objective: The aim of this meta-analysis was to synthesize the knowledge about the relation between intake of 12 major food groups, including whole grains, refined grains, vegetables, fruits, nuts, legumes, eggs, dairy, fish, red meat, processed meat, and sugar-sweetened beverages, with risk of all-cause mortality. Design: We conducted a systematic search in PubMed, Embase, and Google Scholar for prospective studies investigating the association between these 12 food groups and risk of all-cause mortality. Summary RRs and 95% CIs were estimated with the use of a random effects model for high-intake compared with low-intake categories, as well as for linear and nonlinear relations. Moreover, the risk reduction potential of foods was calculated by multiplying the RR by optimal intake values (serving category with the strongest association) for risk-reducing foods or risk-increasing foods, respectively. Results: With increasing intake (for each daily serving) of whole grains (RR: 0.92; 95% CI: 0.89, 0.95), vegetables (RR: 0.96; 95% CI: 0.95, 0.98), fruits (RR: 0.94; 95% CI: 0.92, 0.97), nuts (RR: 0.76; 95% CI: 0.69, 0.84), and fish (RR: 0.93; 95% CI: 0.88, 0.98), the risk of all-cause mortality decreased; higher intake of red meat (RR: 1.10; 95% CI: 1.04, 1.18) and processed meat (RR: 1.23; 95% CI: 1.12, 1.36) was associated with an increased risk of all-cause mortality in a linear dose-response meta-analysis. A clear indication of nonlinearity was seen for the relations between vegetables, fruits, nuts, and dairy and all-cause mortality. Optimal consumption of risk-decreasing foods results in a 56% reduction of all-cause mortality, whereas consumption of risk-increasing foods is associated with a 2-fold increased risk of all-cause mortality. Conclusion: Selecting specific optimal intakes of the investigated food groups can lead to a considerable change in the risk of premature death.
diet; dose response; food groups; meta-analysis; mortality
Effects of blood triglycerides on cardiovascular and all-cause mortality: a systematic review and meta-analysis of 61 prospective studies.
Liu J, Zeng FF, Liu ZM, Zhang CX, Ling WH, Chen YM.
Lipids Health Dis. 2013 Oct 29;12:159. doi: 10.1186/1476-511X-12-159. Review.
The relationship of triglycerides (TG) to the risk of death remains uncertain. The aim of this study was to determine the associations between blood triglyceride levels and cardiovascular diseases (CVDs) mortality and all-cause mortality. Four databases were searched without language restriction for relevant studies: PubMed, ScienceDirect, EMBASE, and Google Scholar. All prospective cohort studies reporting an association between TG and CVDs or all-cause mortality published before July 2013 were included. Risk ratios (RRs) with 95% confidence intervals (CIs) were extracted and pooled according to TG categories, unit TG, and logarithm of TG using a random-effects model with inverse-variance weighting. We identified 61 eligible studies, containing 17,018 CVDs deaths in 726,030 participants and 58,419 all-cause deaths in 330,566 participants. Twelve and fourteen studies, respectively, reported the effect estimates of CVDs and total mortality by TG categories. Compared to the referent (90-149 mg/dL), the pooled RRs (95% CI) of CVDs mortality for the lowest (< 90 mg/dL), borderline-high (150-199 mg/dL), and high TG (≥ 200 mg/dL) groups were 0.83 (0.75 to 0.93), 1.15 (1.03 to 1.29), and 1.25 (1.05 to 1.50); for total mortality they were 0.94 (0.85 to 1.03), 1.09 (1.02 to 1.17), and 1.20 (1.04 to 1.38), respectively. In the twenty-two studies each that reported RRs per unit TG, the risks of CVDs and all-cause death increased by 13% and 12%, respectively, per 1-mmol/L TG increment (p < 0.001). In conclusion, elevated blood TG levels were dose-dependently associated with higher risks of CVDs and all-cause mortality.
Prospective study of dietary fat and the risk of age-related macular degeneration.
Cho E, Hung S, Willett WC, Spiegelman D, Rimm EB, Seddon JM, Colditz GA, Hankinson SE.
Am J Clin Nutr. 2001 Feb;73(2):209-18.
The relation between intakes of total fat and specific types of fat and age-related macular degeneration (AMD) remains unclear.
Our objective was to examine prospectively the association between fat intake and AMD.
We conducted a prospective follow-up study of participants in the Nurses' Health Study and the Health Professionals Follow-up Study. At baseline (1984 for women and 1986 for men), the study included 42,743 women and 29,746 men aged ≥50 y with no diagnosis of AMD who were followed until 1996. Fat intake was assessed with a food-frequency questionnaire.
We accrued 567 patients with AMD with a visual loss of 20/30 or worse. The pooled multivariate relative risk (RR) for the highest compared with the lowest quintile of total fat intake was 1.54 (95% CI: 1.17, 2.01; P for trend = 0.008). Linolenic acid was positively associated with risk of AMD (RR for top vs bottom quintile: 1.49; 95% CI: 1.15, 1.94; P for trend = 0.0009). Docosahexaenoic acid had a modest inverse relation with AMD (RR for top vs bottom quintile: 0.70; 95% CI: 0.52, 0.93; P for trend = 0.05), and >4 servings of fish/wk was associated with a 35% lower risk of AMD compared with ≤3 servings/mo (RR: 0.65; 95% CI: 0.46, 0.91; P for trend = 0.009).
Total fat intake was positively associated with risk of AMD, which may have been due to intakes of individual fatty acids, such as linolenic acid, rather than to total fat intakes per se. A high intake of fish may reduce the risk of AMD.
Dietary intake of α-linolenic acid and risk of age-related macular degeneration.
Wu J, Cho E, Giovannucci EL, Rosner BA, Sastry SM, Schaumberg DA, Willett WC.
Am J Clin Nutr. 2017 Jun;105(6):1483-1492. doi: 10.3945/ajcn.116.143453. Epub 2017 May 3.
Background: The relation between α-linolenic acid (ALA), a plant-derived omega-3 (n-3) fatty acid, and age-related macular degeneration (AMD) is unclear. European researchers reported that ≤40% of ALA can be present as trans forms. Objective: We aimed to evaluate the associations between intake of ALA and intermediate and advanced AMD. Design: Seventy-five thousand eight hundred eighty-nine women from the Nurses' Health Study and 38,961 men from Health Professionals Follow-Up Study were followed up from 1984 to 2012 and from 1986 to 2010, respectively. We assessed dietary intake by a validated food-frequency questionnaire at baseline and every 4 y thereafter. One thousand five hundred eighty-nine incident intermediate and 1356 advanced AMD cases (primarily neovascular AMD) were confirmed by medical record review. Results: The multivariable-adjusted HR for intermediate AMD comparing ALA intake at the top quintile to the bottom quintile was 1.28 (95% CI: 1.05, 1.56; P-trend = 0.01) in the analyses combining 2 cohorts. The HR in each cohort was in the positive direction but reached statistical significance only in the women. However, the positive association was apparent only in the pre-2002 era in each cohort and not afterward (P-time interaction = 0.003). ALA intake was not associated with advanced AMD in either time period. Using gas-liquid chromatography, we identified both cis ALA (mean ± SD: 0.13% ± 0.04%) and trans ALA isomers (0.05% ± 0.01%) in 395 erythrocyte samples collected in 1989-1990. In stepwise regression models, mayonnaise was the leading predictor of erythrocyte concentrations of cis ALA and one isomer of trans ALA. We also found trans ALA in mayonnaise samples. Conclusions: A high intake of ALA was associated with an increased risk of intermediate AMD before 2002 but not afterward. The period before 2002 coincides with the same time period when trans ALA was found in food and participants' blood; this finding deserves further study.
age-related macular degeneration; food-frequency questionnaire; omega-3 fatty acids; prospective cohort study; trans fat; α-linolenic acid
Dietary Methionine Restriction Regulates Liver Protein Synthesis and Gene Expression Independently of Eukaryotic Initiation Factor 2 Phosphorylation in Mice.
Pettit AP, Jonsson WO, Bargoud AR, Mirek ET, Peelor FF 3rd, Wang Y, Gettys TW, Kimball SR, Miller BF, Hamilton KL, Wek RC, Anthony TG.
J Nutr. 2017 Jun;147(6):1031-1040. doi: 10.3945/jn.116.246710. Epub 2017 Apr 26.
Background: The phosphorylation of eukaryotic initiation factor 2 (p-eIF2) during dietary amino acid insufficiency reduces protein synthesis and alters gene expression via the integrated stress response (ISR). Objective: We explored whether a Met-restricted (MR) diet activates the ISR to reduce body fat and regulate protein balance. Methods: Male and female mice aged 3-6 mo with either whole-body deletion of general control nonderepressible 2 (Gcn2) or liver-specific deletion of protein kinase R-like endoplasmic reticulum kinase (Perk) alongside wild-type or floxed control mice were fed an obesogenic diet sufficient in Met (0.86%) or an MR (0.12% Met) diet for ≤5 wk. Ala enrichment with deuterium was measured to calculate protein synthesis rates. The guanine nucleotide exchange factor activity of eIF2B was measured alongside p-eIF2 and hepatic mRNA expression levels at 2 d and 5 wk. Metabolic phenotyping was conducted at 4 wk, and body composition was measured throughout. Results were evaluated with the use of ANOVA (P < 0.05). Results: Feeding an MR diet for 2 d did not increase hepatic p-eIF2 or reduce eIF2B activity in wild-type or Gcn2-/- mice, yet many genes transcriptionally regulated by the ISR were altered in both strains in the same direction and amplitude. Feeding an MR diet for 5 wk increased p-eIF2 and reduced eIF2B activity in wild-type but not Gcn2-/- mice, yet ISR-regulated genes altered in both strains similarly. Furthermore, the MR diet reduced mixed and cytosolic but not mitochondrial protein synthesis in both the liver and skeletal muscle regardless of Gcn2 status. Despite the similarities between strains, the MR diet did not increase energy expenditure or reduce body fat in Gcn2-/- mice. Finally, feeding the MR diet to mice with Perk deleted in the liver increased hepatic p-eIF2 and altered body composition similar to floxed controls. Conclusions: Hepatic activation of the ISR resulting from an MR diet does not require p-eIF2. Gcn2 status influences body fat loss but not protein balance when Met is restricted.
ATF4; GCN2; PERK; eIF2B; integrated stress response
Dietary Patterns and Type 2 Diabetes: A Systematic Literature Review and Meta-Analysis of Prospective Studies.
Jannasch F, Kröger J, Schulze MB.
Background: Different methodologic approaches for constructing dietary patterns and differences in their composition limit conclusions on healthful patterns for diabetes prevention. Objective: We summarized evidence from prospective studies that examined associations of dietary patterns with type 2 diabetes by considering different methodologic approaches. Methods: The literature search (MEDLINE and Web of Science) identified prospective studies (cohorts or trials) that associated dietary patterns with diabetes incidence in nondiabetic and apparently healthy participants. We summarized evidence by meta-analyses and distinguished different methodologic approaches. Results: The search resulted in 48 articles comprising 16 cohorts. Adherence to the Mediterranean diet (RR for comparing extreme quantiles: 0.87; 95% CI: 0.82, 0.93), Dietary Approaches to Stop Hypertension (DASH) (RR: 0.81; 95% CI: 0.72, 0.92), and Alternative Healthy Eating Index (AHEI) (RR: 0.79; 95% CI: 0.69, 0.90) was associated with significant risk reductions of incident diabetes. Patterns from exploratory factor and principal component analyses characterized by red and processed meat, refined grains, high-fat dairy, eggs, and fried products ("mainly unhealthy") were positively associated with diabetes (RR: 1.44; 95% CI: 1.27, 1.62), whereas patterns characterized by vegetables, legumes, fruits, poultry, and fish ("mainly healthy") were inversely associated with diabetes (RR: 0.84; 95% CI: 0.77, 0.91). Reduced rank regression (RRR) used diabetes-related biomarkers to identify patterns. These patterns were characterized by high intakes of refined grains, sugar-sweetened soft drinks, and processed meat and were all significantly associated with diabetes risk. Conclusions: Our meta-analysis suggests that diets according to the Mediterranean diet, DASH, and AHEI have a strong potential for preventing diabetes, although they differ in some particular components. Exploratory dietary patterns were grouped based on concordant food groups and were significantly associated with diabetes risk despite single-component foods having limited evidence for an association. Still, they remain population-specific observations. Consistent positive associations with diabetes risk were observed for 3 RRR patterns.
dietary patterns; exploratory statistical methods; investigator-driven statistical methods; meta-analysis; systematic review; type 2 diabetes
Implications of US Nutrition Facts Label Changes on Micronutrient Density of Fortified Foods and Supplements.
McBurney MI, Hartunian-Sowa S, Matusheski NV.
J Nutr. 2017 Jun;147(6):1025-1030. doi: 10.3945/jn.117.247585. Epub 2017 May 10.
http://jn.nutrition.org/content/147/6/1025.abstract?etoc
http://jn.nutrition.org/content/147/6/1025.full.pdf+html
The US FDA published new nutrition-labeling regulations in May 2016. For the first time since the implementation of the Nutrition Labeling and Education Act of 1990, the Daily Value (DV) for most vitamins will change, as will the units of measurement used in nutrition labeling for some vitamins. For some food categories, the Reference Amounts Customarily Consumed (RACCs) will increase to reflect portions commonly consumed on a single occasion. These regulatory changes are now effective, and product label changes will be mandatory beginning 26 July 2018. This commentary considers the potential impact of these regulatory changes on the vitamin and mineral contents of foods and dietary supplements. Case studies examined potential effects on food fortification and nutrient density. The updated DVs may lead to a reduction in the nutrient density of foods and dietary supplements with respect to 8 vitamins (vitamin A, thiamin, riboflavin, niacin, vitamin B-6, vitamin B-12, biotin, and pantothenic acid) and 6 minerals (zinc, selenium, copper, chromium, molybdenum, and chloride), and have mixed effects on 2 vitamins where the amount required per serving is affected by chemical structure (i.e., form) (natural vitamin E compared with synthetic vitamin E and folic acid compared with folate). Despite an increased DV for vitamin D, regulations limit food fortification. The adoption of Dietary Folate Equivalents for folate labeling may lead to reductions in the quantity of folic acid voluntarily added per RACC. Finally, because of increased RACCs in some food categories to reflect portions that people typically eat at one time, the vitamin and mineral density of these foods may be affected adversely. In totality, the United States is entering an era in which the need to monitor dietary intake patterns and nutritional status is unprecedented.
DV; Daily Value; Nutrition Facts panel; RACC; RDI; Reference Amount Customarily Consumed; Reference Dietary Intake; fortification; nutrient density; vitamins
SIRT1 Polymorphisms and Serum-Induced SIRT1 Protein Expression in Aging and Frailty: The CHAMP Study.
Razi S, Cogger VC, Kennerson M, Benson VL, McMahon AC, Blyth FM, Handelsman DJ, Seibel MJ, Hirani V, Naganathan V, Waite L, de Cabo R, Cumming RG, Le Couteur DG.
J Gerontol A Biol Sci Med Sci. 2017 Jul 1;72(7):870-876. doi: 10.1093/gerona/glx018.
http://sci-hub.cc/10.1093/gerona/glx018
The nutrient sensing protein, SIRT1 influences aging and nutritional interventions such as caloric restriction in animals, however, the role of SIRT1 in human aging remains unclear. Here, the role of SIRT1 single-nucleotide polymorphisms (SNPs) and serum-induced SIRT1 protein expression (a novel assay that detects circulating factors that influence SIRT1 expression in vitro) were studied in the Concord Health and Ageing in Men Project (CHAMP), a prospective cohort of community dwelling men aged 70 years and older. Serum-induced SIRT1 expression was not associated with age or mortality, however participants within the lowest quintile were less likely to be frail (odds ratio (OR) 0.34, 95% confidence interval (CI) 0.17-0.69, N = 1,309). Serum-induced SIRT1 expression was associated with some markers of body composition and nutrition (height, weight, body fat and lean % mass, albumin, and cholesterol) but not disease. SIRT1 SNPs rs2273773, rs3740051, and rs3758391 showed no association with age, frailty, or mortality but were associated with weight, height, body fat and lean, and albumin levels. There were some weak associations between SIRT1 SNPs and arthritis, heart attack, deafness, and cognitive impairment. There was no association between SIRT1 SNPs and the serum-induced SIRT1 assay. SIRT1 SNPs and serum-induced SIRT1 expression in older men may be more closely associated with nutrition and body composition than aging and age-related conditions.
Body composition; Frailty; Mortality; Polymorphism; SIRT1; Sirtuin
Changes in the Lethality of Frailty Over 30 Years: Evidence From Two Cohorts of 70-Year-Olds in Gothenburg Sweden.
Bäckman K, Joas E, Falk H, Mitnitski A, Rockwood K, Skoog I.
J Gerontol A Biol Sci Med Sci. 2017 Jul 1;72(7):945-950. doi: 10.1093/gerona/glw160.
http://sci-hub.cc/10.1093/gerona/glw160
With aging, health deficits accumulate: people with few deficits for their age are fit, and those with more are frail. Despite recent reports of improved health in old age, how deficit accumulation is changing is not clear. Our objectives were to evaluate changes over 30 years in the degree of deficit accumulation and in the relationship between frailty and mortality in older adults.
We analyzed data from two population based, prospective longitudinal cohorts, assembled in 1971-1972 and 2000-2001, respectively. Residents of Gothenburg Sweden, systematically drawn from the Swedish population registry. The 1901-1902 cohort (N = 973) had a response rate of 84.8%; the 1930 cohort (N = 500) had a response rate of 65.1%. A frailty index using 36 deficits was calculated using data from physical examinations, assessments of physical activity, daily, sensory and social function, and laboratory tests. We evaluated mortality over 12.5 years in relation to the frailty index.
Mean frailty levels were the same in the 1901-1902 cohort as in the 1930 cohort (0.20 in both; p = .37). Although the frailty index was linked to the risk of death in both cohorts, the hazard ratio decreased from 1.67 per 0.1 increment in the frailty index for the first cohort to 1.32 for the second cohort (interaction term p = .005).
Although frailty was as common at age 70 as before, its lethality appears to be less. Just why this is so should be explored further.
Cohort effects; Deficit accumulation; Frail older adults; Frailty index; Mortality
Heterogeneity of Human Aging and Its Assessment.
Mitnitski A, Howlett SE, Rockwood K.
https://academic.oup.com/biomedgerontology/article/72/7/877/2629918/Heterogeneity-of-Human-Aging-and-Its-Assessment
Understanding the heterogeneity in health of older adults is a compelling question in the biology of aging. We analyzed the performance of five measures of health heterogeneity, judging them by their ability to predict mortality. Using clinical and biomarker data on 1,013 participants of the Canadian Study of Health and Aging who were followed for up to 6 years, we calculated two indices of biological age using the Klemera and Doubal method, which controversially includes using chronological age as a "biomarker," and three frailty indices (FIs) that do not include chronological age: a standard clinical FI, an FI from standard laboratory blood tests and blood pressure, and their combination (FI-combined). Predictive validity was tested using Cox proportional hazards analysis and discriminative ability by the area under the receiver-operating characteristic curves. All five measures showed moderate performance that was improved by combining measures to evaluate larger numbers of items. The greatest addition in explanatory power came from the FI-combined that showed the best mortality prediction in an age-adjusted model. More extensive comparisons across different databases are required, but these results do not support including chronological age as a biomarker.
Biological age; Biological aging; Biomarkers; Frailty indices; Health heterogeneity
Oral Disease and 3-Year Incidence of Frailty in Mexican Older Adults.
Castrejón-Pérez RC, Jiménez-Corona A, Bernabé E, Villa-Romero AR, Arrivé E, Dartigues JF, Gutiérrez-Robledo LM, Borges-Yáñez SA.
Poor oral health has been associated with some components of frailty. The objective of this study was to identify the association between clinical measures of oral health and the incidence of frailty among community-dwelling older adults aged 70 or older in Mexico City.
A 3-year cohort study with a probabilistic representative sample of home-dwelling elders of one district of Mexico City was performed. Baseline and follow-up interview and oral clinical evaluations were carried out by standardized examiners in participants' homes. Dependent variable was incident frailty defined according to the frailty phenotype. Independent variables were the utilization of dental services, the presence of xerostomia, the number of natural teeth, use of removable dental prostheses, presence of severe periodontitis, and presence of root remnants. Sociodemographic, behavioral, and health measures were included as confounders. The association between oral health conditions and incident frailty was modeled using Poisson regression models with robust variance estimators. The models were adjusted for confounders and interactions.
We identified a 14.8% cumulative incidence of frailty. Each additional tooth was associated with a 5.0% lower probability of developing frailty (risk ratio = 0.95; 95% CI 0.92-0.99). The 3-year risk of developing frailty was 2.13 times higher (95% CI 1.01-4.50) among participants with severe periodontitis.
The number of teeth and the presence of severe periodontitis are associated with the development of frailty after controlling for confounders. Further studies are needed on this topic.
Cohort; Frailty; Incidence; Oral health; Periodontitis; Tooth loss
Survival Comparison of Patients With Cystic Fibrosis in Canada and the United States: A Population-Based Cohort Study.
Stephenson AL, Sykes J, Stanojevic S, Quon BS, Marshall BC, Petren K, Ostrenga J, Fink AK, Elbert A, Goss CH.
Ann Intern Med. 2017 Apr 18;166(8):537-546. doi: 10.7326/M16-0858. Epub 2017 Mar 14.
In 2011, the median age of survival of patients with cystic fibrosis reported in the United States was 36.8 years, compared with 48.5 years in Canada. Direct comparison of survival estimates between national registries is challenging because of inherent differences in methodologies used, data processing techniques, and ascertainment bias.
To use a standardized approach to calculate cystic fibrosis survival estimates and to explore differences between Canada and the United States.
Population-based study.
42 Canadian cystic fibrosis clinics and 110 U.S. cystic fibrosis care centers.
Patients followed in the Canadian Cystic Fibrosis Registry (CCFR) and U.S. Cystic Fibrosis Foundation Patient Registry (CFFPR) between 1990 and 2013.
Cox proportional hazards models were used to compare survival between patients followed in the CCFR (n = 5941) and those in the CFFPR (n = 45 448). Multivariable models were used to adjust for factors known to be associated with survival.
Median age of survival in patients with cystic fibrosis increased in both countries between 1990 and 2013; however, in 1995 and 2005, survival in Canada increased at a faster rate than in the United States (P < 0.001). On the basis of contemporary data from 2009 to 2013, the median age of survival in Canada was 10 years greater than in the United States (50.9 vs. 40.6 years, respectively). The adjusted risk for death was 34% lower in Canada than the United States (hazard ratio, 0.66 [95% CI, 0.54 to 0.81]). A greater proportion of patients in Canada received transplants (10.3% vs. 6.5%, respectively [standardized difference, 13.7]). Differences in survival between U.S. and Canadian patients varied according to U.S. patients' insurance status.
LIMITATION:
Ascertainment bias due to missing data or nonrandom loss to follow-up might affect the results.
Differences in cystic fibrosis survival between Canada and the United States persisted after adjustment for risk factors associated with survival, except for private-insurance status among U.S. patients. Differential access to transplantation, increased posttransplant survival, and differences in health care systems may, in part, explain the Canadian survival advantage.
The Cystic Fibrosis Survival Gap: Why Do Canadians Fare Better Than Americans?
Flume PA, VanDevanter DR.
Ann Intern Med. 2017 Apr 18;166(8):599-600. doi: 10.7326/M17-0564. Epub 2017 Mar 14. No abstract available.
http://sci-hub.cc/10.7326/M17-0564
Pharmacologic Treatment of Hypertension in Adults Aged 60 Years or Older to Higher Versus Lower Blood Pressure Targets: A Clinical Practice Guideline From the American College of Physicians and the American Academy of Family Physicians.
Qaseem A, Wilt TJ, Rich R, Humphrey LL, Frost J, Forciea MA; Clinical Guidelines Committee of the American College of Physicians and the Commission on Health of the Public and Science of the American Academy of Family Physicians..
Ann Intern Med. 2017 Mar 21;166(6):430-437. doi: 10.7326/M16-1785. Epub 2017 Jan 17.
The American College of Physicians (ACP) and the American Academy of Family Physicians (AAFP) jointly developed this guideline to present the evidence and provide clinical recommendations based on the benefits and harms of higher versus lower blood pressure targets for the treatment of hypertension in adults aged 60 years or older.
This guideline is based on a systematic review of published randomized, controlled trials for primary outcomes and observational studies for harms only (identified through EMBASE, the Cochrane Database of Systematic Reviews, MEDLINE, and ClinicalTrials.gov), from database inception through January 2015. The MEDLINE search was updated through September 2016. Evaluated outcomes included all-cause mortality, morbidity and mortality related to stroke, major cardiac events (fatal and nonfatal myocardial infarction and sudden cardiac death), and harms. This guideline grades the evidence and recommendations using the GRADE (Grading of Recommendations Assessment, Development, and Evaluation) method.
TARGET AUDIENCE AND PATIENT POPULATION:
The target audience for this guideline includes all clinicians, and the target patient population includes all adults aged 60 years or older with hypertension.
RECOMMENDATION 1:
ACP and AAFP recommend that clinicians initiate treatment in adults aged 60 years or older with systolic blood pressure persistently at or above 150 mm Hg to achieve a target systolic blood pressure of less than 150 mm Hg to reduce the risk for mortality, stroke, and cardiac events. (Grade: strong recommendation, high-quality evidence). ACP and AAFP recommend that clinicians select the treatment goals for adults aged 60 years or older based on a periodic discussion of the benefits and harms of specific blood pressure targets with the patient.
ACP and AAFP recommend that clinicians consider initiating or intensifying pharmacologic treatment in adults aged 60 years or older with a history of stroke or transient ischemic attack to achieve a target systolic blood pressure of less than 140 mm Hg to reduce the risk for recurrent stroke. (Grade: weak recommendation, moderate-quality evidence). ACP and AAFP recommend that clinicians select the treatment goals for adults aged 60 years or older based on a periodic discussion of the benefits and harms of specific blood pressure targets with the patient.
ACP and AAFP recommend that clinicians consider initiating or intensifying pharmacologic treatment in some adults aged 60 years or older at high cardiovascular risk, based on individualized assessment, to achieve a target systolic blood pressure of less than 140 mm Hg to reduce the risk for stroke or cardiac events. (Grade: weak recommendation, low-quality evidence). ACP and AAFP recommend that clinicians select the treatment goals for adults aged 60 years or older based on a periodic discussion of the benefits and harms of specific blood pressure targets with the patient.
The Accuracy of Heart Rate Monitoring by Some Wrist-Worn Activity Trackers.
Cadmus-Bertram L, Gangnon R, Wirkus EJ, Thraen-Borowski KM, Gorzelitz-Liebhauser J.
Ann Intern Med. 2017 Apr 18;166(8):610-612. doi: 10.7326/L16-0353. Epub 2017 Apr 11. No abstract available.
http://sci-hub.cc/10.7326/L16-0353
Accuracy in Wrist-Worn, Sensor-Based Measurements of Heart Rate and Energy Expenditure in a Diverse Cohort.
Shcherbina A, Mattsson CM, Waggott D, Salisbury H, Christle JW, Hastie T, Wheeler MT, Ashley EA.
J Pers Med. 2017 May 24;7(2). pii: E3. doi: 10.3390/jpm7020003.
Overweight: The Body Mass Index Category With an Identity Crisis.
Després JP.
Ann Intern Med. 2017 May 2;166(9):671-672. doi: 10.7326/M17-0566. Epub 2017 Apr 4. No abstract available.
Weight History and All-Cause and Cause-Specific Mortality in Three Prospective Cohort Studies.
Yu E, Ley SH, Manson JE, Willett W, Satija A, Hu FB, Stokes A.
Ann Intern Med. 2017 May 2;166(9):613-620. doi: 10.7326/M16-1390. Epub 2017 Apr 4.
https://www.crsociety.org/topic/11801-als-papers-citations-and-possibly-links-and-excerpts-or-my-synopses/page-9?hl=28384755&do=findComment&comment=21322
BACKGROUND:
The relationship between body mass index (BMI) and mortality is controversial.
OBJECTIVE:
To investigate the relationship between maximum BMI over 16 years and subsequent mortality.
DESIGN:
3 prospective cohort studies.
SETTING:
Nurses' Health Study I and II and Health Professionals Follow-Up Study.
PARTICIPANTS:
225 072 men and women with 32 571 deaths observed over a mean of 12.3 years of follow-up.
MEASUREMENTS:
Maximum BMI over 16 years of weight history and all-cause and cause-specific mortality.
RESULTS:
Maximum BMIs in the overweight (25.0 to 29.9 kg/m2) (multivariate hazard ratio [HR], 1.06 [95% CI, 1.03 to 1.08]), obese I (30.0 to 34.9 kg/m2) (HR, 1.24 [CI, 1.20 to 1.29]), and obese II (≥35.0 kg/m2) (HR, 1.73 [CI, 1.66 to 1.80]) categories were associated with increases in risk for all-cause death. The pattern of excess risk with a maximum BMI above normal weight was maintained across strata defined by smoking status, sex, and age, but the excess was greatest among those younger than 70 years and never-smokers. In contrast, a significant inverse association between overweight and mortality (HR, 0.96 [CI, 0.94 to 0.99]) was observed when BMI was defined using a single baseline measurement. Maximum overweight was also associated with increased cause-specific mortality, including death from cardiovascular disease and coronary heart disease.
LIMITATION:
Residual confounding and misclassification.
CONCLUSION:
The paradoxical association between overweight and mortality is reversed in analyses incorporating weight history. Maximum BMI may be a useful metric to minimize reverse causation bias associated with a single baseline BMI assessment.
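For reference, the maximum-BMI cutoffs used in this study map to a simple classifier. This is an illustrative sketch only; the function name and label strings are mine, following the category boundaries stated in the abstract:

```python
def bmi_category(max_bmi):
    """Classify a maximum BMI (kg/m2) into the categories used by Yu et al."""
    if max_bmi < 18.5:
        return "underweight"
    if max_bmi < 25.0:
        return "normal"
    if max_bmi < 30.0:
        return "overweight"
    if max_bmi < 35.0:
        return "obese I"
    return "obese II"

# The study's exposure is the maximum BMI over the 16-year weight history,
# not a single baseline measurement:
category = bmi_category(max([23.4, 26.1, 28.9]))  # classified by the peak, 28.9
```

Classifying by the historical maximum rather than the baseline value is exactly the change that reverses the "paradoxical" protective association of overweight reported in single-measurement analyses.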
Microbe from Yogurt Impedes Drug-Resistant Bacteria
By Aggie Mika
http://www.the-scientist.com/?articles.view/articleNo/49590/title/Microbe-from-Yogurt-Impedes-Drug-Resistant-Bacteria/
Lactobacillus parafarraginis metabolites hindered the growth of multiple, distantly related bacterial pathogens.
An Assessment of the Accuracy of Home Blood Pressure Monitors When Used in Device Owners.
Ringrose JS, Polley G, McLean D, Thompson A, Morales F, Padwal R.
Am J Hypertens. 2017 Jul 1;30(7):683-689. doi: 10.1093/ajh/hpx041.
To examine the accuracy of home blood pressure (BP) devices, on their owners, compared to auscultatory reference standard BP measurements.
Eighty-five consecutive consenting subjects ≥18 years of age, who owned an oscillometric home BP device (wrist or upper-arm device), with BP levels between 80-220/50-120 mm Hg, and with arm circumferences between 25-43 cm were studied. Pregnancy and atrial fibrillation were exclusion criteria. Device measurements from each subject's home BP device were compared to simultaneous 2-observer auscultation using a mercury sphygmomanometer. Between-group mean comparisons were conducted using paired t-tests. The proportion of patients with device-to-auscultatory differences of ≥5, 10, and 15 mm Hg were tabulated and predictors of systolic and diastolic BP differences were identified using linear regression.
Mean age was 66.4 ± 11.0 years, mean arm circumference was 32.7 ± 3.7 cm, 54% were female and 78% had hypertension. Mean BPs were 125.7 ± 14.0/73.9 ± 10.4 mm Hg for home BP devices vs. 129.0 ± 14.7/72.9 ± 9.3 for auscultation (difference of -3.3 ± 7.3/0.9 ± 6.1; P values <0.0001 for systolic and 0.17 for diastolic). The proportion of devices with systolic or diastolic BP differences from auscultation of ≥5, 10, and 15 mm Hg was 69%, 29%, and 7%, respectively. Increasing arm circumference was a statistically significant predictor of higher systolic (parameter estimate 0.61 per cm increase; P value 0.004) and diastolic (0.38; 0.03) BP.
Although mean differences from 2-observer auscultation were acceptable, when tested on their owners, most home BP devices were not accurate to within 5 mm Hg. Ensuring acceptable accuracy of the device-owner pairing should be prioritized.
auscultatory; blood pressure; blood pressure measurement; home blood pressure; hypertension; oscillometry; validation.
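The study's central tabulation — the share of devices whose readings differ from the auscultatory reference by at least 5, 10, or 15 mm Hg — can be sketched as follows. The readings below are hypothetical and the function is mine, not the study's code:

```python
def difference_proportions(device_bp, reference_bp, thresholds=(5, 10, 15)):
    """Proportion of readings whose absolute device-to-reference
    difference meets or exceeds each threshold (mm Hg)."""
    diffs = [abs(d - r) for d, r in zip(device_bp, reference_bp)]
    n = len(diffs)
    return {t: sum(diff >= t for diff in diffs) / n for t in thresholds}

# Hypothetical systolic readings: home device vs. simultaneous
# two-observer auscultation on the same subjects
props = difference_proportions([126, 131, 118, 140], [129, 120, 119, 128])
```

Note the study's point: the mean difference can look acceptable (errors in both directions cancel) while a large fraction of individual readings still miss by ≥5 mm Hg.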
Healthy Lifestyle and Blood Pressure Variability in Young Adults.
Maseli A, Aeschbacher S, Schoen T, Fischer A, Jung M, Risch M, Risch L, Conen D.
http://sci-hub.cc/10.1093/ajh/hpx034
The aim of this study was to assess the relationships between healthy lifestyle metrics and blood pressure variability (BPV) in young and healthy adults.
A population-based sample of 1,999 individuals aged 25-41 years was investigated. A lifestyle-score from 0 (most unhealthy) to 7 (most healthy) was calculated by giving one point for each of the following components: never smoking cigarettes, adhering to a healthy diet, performing moderate or intense physical activity, having a body mass index <25 kg/m2, a total cholesterol <200 mg/dl, a glycated hemoglobin <5.7%, or a conventional BP <120/80 mm Hg. Standardized ambulatory 24-hour BP measurements were obtained in all individuals. BPV was defined as the SD of all individual ambulatory BP recordings. We constructed multivariable linear regression models to assess the relationships between the lifestyle-score and BPV. None of the results were adjusted for multiple testing.
Median age was 37 years and 46.8% were men. With increasing lifestyle-score, systolic and diastolic BPV is decreasing linearly (P for trend <0.0001), even after multivariable adjustment. Per 1-point increase in lifestyle-score, the β-coefficient (95% confidence interval) for systolic and diastolic 24-hour BPV was -0.03 (-0.03; -0.02) and -0.04 (-0.05; -0.03), respectively, both P for trend <0.0001. These relationships were attenuated but remained statistically significant after additional adjustment for mean individual BP.
In this study of young and healthy adults, adopting a healthy lifestyle was associated with a lower BPV. These associations were independent of mean BP levels.
blood pressure; blood pressure variability; healthy lifestyle; hypertension; lifestyle-score; population-based.
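The two quantities this study relates — the 0-to-7 lifestyle score and BPV as the SD of all ambulatory recordings — are concrete enough to sketch. This is illustrative only: the parameter names are mine, and whether the study used the sample or population SD is not stated (sample SD is shown):

```python
import statistics

def lifestyle_score(never_smoker, healthy_diet, active,
                    bmi, chol_mgdl, hba1c_pct, sbp, dbp):
    """One point per healthy component, 0 (most unhealthy) to 7 (most healthy)."""
    return sum([
        never_smoker,            # never smoking cigarettes
        healthy_diet,            # adhering to a healthy diet
        active,                  # moderate or intense physical activity
        bmi < 25,                # body mass index <25 kg/m2
        chol_mgdl < 200,         # total cholesterol <200 mg/dl
        hba1c_pct < 5.7,         # glycated hemoglobin <5.7%
        sbp < 120 and dbp < 80,  # conventional BP <120/80 mm Hg
    ])

def bp_variability(readings):
    """BPV as defined in the study: SD of the individual's ambulatory BP recordings."""
    return statistics.stdev(readings)
```

Usage: `lifestyle_score(True, True, True, 23.0, 185, 5.4, 115, 75)` returns the maximum score of 7, and `bp_variability` applied to a subject's 24-hour recordings gives the outcome the β-coefficients above refer to.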
Healthy obesity and risk of accelerated functional decline and disability.
Bell JA, Sabia S, Singh-Manoux A, Hamer M, Kivimäki M.
Int J Obes (Lond). 2017 Mar 14. doi: 10.1038/ijo.2017.51. [Epub ahead of print]
https://www.nature.com/ijo/journal/v41/n6/full/ijo201751a.html
BACKGROUND/OBJECTIVES:
Some obese adults have a normal metabolic profile and are considered 'healthy', but whether they experience faster ageing than healthy normal-weight adults is unknown. We compared decline in physical function, worsening of bodily pain and likelihood of future mobility limitation and disability between these groups.
SUBJECTS/METHODS:
This was a population-based observational study using repeated measures over 2 decades (Whitehall II cohort data). Normal-weight (body mass index (BMI) 18.5-24.9 kg/m2), overweight (25.0-29.9 kg/m2) and obese (⩾30.0 kg/m2) adults were considered metabolically healthy if they had 0 or 1 of 5 risk factors (hypertension, low high-density lipoprotein cholesterol, high triacylglycerol, high blood glucose and insulin resistance) in 1991/1994. Decline in physical function and worsening of bodily pain based on change in Short Form Health Survey items using eight repeated measures over 18.8 years (1991/1994-2012/2013) were compared between metabolic-BMI groups using linear mixed models. Odds of mobility limitation based on objective walking speed (slowest tertile) and of disability based on limitations in ⩾1 of 6 basic activities of daily living, each using three repeated measures over 8.3 years (2002/2004-2012/2013), were compared using logistic mixed models.
In multivariable-adjusted mixed models on up to 6635 adults (initial mean age 50 years; 70% male), healthy normal-weight adults experienced a decline in physical function of -3.68 (95% CI=-4.19, -3.16) score units per decade; healthy obese adults showed an additional -3.48 (-4.88, -2.08) units decline. Healthy normal-weight adults experienced a -0.49 (-1.11, 0.12) score unit worsening of bodily pain per decade; healthy obese adults had an additional -2.23 (-3.78, -0.69) units worsening. Healthy obesity versus healthy normal-weight conferred 3.39 (2.29, 5.02) times higher odds of mobility limitation and 3.75 (1.94, 7.24) times higher odds of disability.
Our results suggest that obesity, even if metabolically healthy, accelerates age-related declines in functional ability and poses a threat to independence in older age.
Antihypertensive agents do not prevent blood-brain barrier dysfunction and cognitive deficits in dietary-induced obese mice.
Mamo JC, Lam V, Giles C, Coulson SH, Fimognari N, Mooranian A, Al-Salami H, Takechi R.
http://sci-hub.cc/10.1038/ijo.2017.57
While vascular risk factors including Western-styled diet and obesity are reported to induce cognitive decline and increase dementia risk, recent reports consistently suggest that compromised integrity of cerebrovascular blood-brain barrier (BBB) may have an important role in neurodegeneration and cognitive deficits. A number of studies report that elevated blood pressure increases the permeability of BBB.
In this study, we investigated the effects of antihypertensive agents, candesartan or ursodeoxycholic acid (UDCA), on BBB dysfunction and cognitive decline in wild-type mice maintained on high fat and fructose (HFF) diet for 24 weeks.
In HFF-fed mice, significantly increased body weight with elevated blood pressure, plasma insulin and glucose compared with mice fed with low-fat control chow was observed. Concomitantly, significant disruption of BBB and cognitive decline were evident in the HFF-fed obese mice. Hypertension was completely prevented by the coprovision of candesartan or UDCA in mice maintained on HFF diet, while only candesartan significantly reduced the body weight compared with HFF-fed mice. Nevertheless, BBB dysfunction and cognitive decline remained unaffected by candesartan or UDCA.
These data conclusively indicate that modulation of blood pressure and/or body weight may not be directly associated with BBB dysfunction and cognitive deficits in Western diet-induced obese mice, and hence antihypertensive agents may not be effective in preventing BBB disruption and cognitive decline. The findings may provide important mechanistical insights to obesity-associated cognitive decline and its therapy.
The Scientist » Multimedia » Infographics
Infographic: A Body Without Food
Mounting evidence suggests that intermittent fasting causes significant changes to various organs and tissue types.
By Bob Grant | June 1, 2017
http://www.the-scientist.com/?articles.view/articleNo/49505/title/Infographic--A-Body-Without-Food/
The fasting signal likely starts in the liver, the body's central command for metabolism. But through changes in gene expression and alterations in complex enzymatic pathways, the effects of food deprivation spread throughout the body, from the brain and visceral fat to the muscles and more.
Fasting and time-restricted feeding increases insulin sensitivity, decreases insulin resistance, and lowers blood glucose levels. With prolonged periods of fasting, the liver's glycogen stores become depleted, and visceral fat is tapped as an energy source, which releases ketones that can be metabolized by neurons and muscle cells.
Periodic fasting reprograms T-cell populations, tamping down autoimmunity and rescuing immunosenescence. A lack of incoming calories appears to prune away autoimmune T cells, and with refeeding, hematopoietic stem cells are activated to replace T cells, lymphocytes, and other white blood cells. Several fasting studies have also pointed to a decrease in inflammatory cytokines.
Because triglycerides become mobilized for energy in the absence of incoming dietary calories, blood lipid levels tend to go down in a fasting body. Researchers have also seen decreases in blood pressure in fasting animals. In some animal studies of fasting, investigators have recorded decreases in cholesterol.
Intermittent fasting has improved memory, learning, and neurogenesis in rodents, and has been shown to repair some neurons in mouse models of ischemic stroke.
By making tumor cells more susceptible to chemotherapeutic agents while protecting healthy cells from the treatment's toxicity, intermittent fasting is showing promise in slowing the progression of breast cancers and melanoma in mice.
liver, immunology, heart, fasting, cancer and brain
The Scientist » June 2017 Issue » Features » Cover Story
Regularly taking breaks from eating—for hours or days—can trigger changes both expected, such as in metabolic dynamics and inflammation, and surprising, as in immune system function and cancer progression.
http://www.the-scientist.com/?articles.view/articleNo/49462/title/Running-on-Empty/
The cannabis paradox: when age matters
Andrés Ozaita & Ester Aso
Nature Medicine 23, 661–662 (2017) doi:10.1038/nm.4348
Published online 06 June 2017
http://sci-hub.cc/10.1038/nm.4348
New evidence in mouse models reveals that exposure to Δ9-tetrahydrocannabinol (THC), the main psychoactive component in Cannabis sativa, might improve cognitive performance in aging animals.
Subject terms: Ageing; Cellular neuroscience
A chronic low dose of Δ9-tetrahydrocannabinol (THC) restores cognitive function in old mice.
Bilkei-Gorzo A, Albayram O, Draffehn A, Michel K, Piyanova A, Oppenheimer H, Dvir-Ginzberg M, Rácz I, Ulas T, Imbeault S, Bab I, Schultze JL, Zimmer A.
Nat Med. 2017 May 8. doi: 10.1038/nm.4311. [Epub ahead of print]
The balance between detrimental, pro-aging, often stochastic processes and counteracting homeostatic mechanisms largely determines the progression of aging. There is substantial evidence suggesting that the endocannabinoid system (ECS) is part of the latter system because it modulates the physiological processes underlying aging. The activity of the ECS declines during aging, as CB1 receptor expression and coupling to G proteins are reduced in the brain tissues of older animals and the levels of the major endocannabinoid 2-arachidonoylglycerol (2-AG) are lower. However, a direct link between endocannabinoid tone and aging symptoms has not been demonstrated. Here we show that a low dose of Δ9-tetrahydrocannabinol (THC) reversed the age-related decline in cognitive performance of mice aged 12 and 18 months. This behavioral effect was accompanied by enhanced expression of synaptic marker proteins and increased hippocampal spine density. THC treatment restored hippocampal gene transcription patterns such that the expression profiles of THC-treated mice aged 12 months closely resembled those of THC-free animals aged 2 months. The transcriptional effects of THC were critically dependent on glutamatergic CB1 receptors and histone acetylation, as their inhibition blocked the beneficial effects of THC. Thus, restoration of CB1 signaling in old individuals could be an effective strategy to treat age-related cognitive impairments.
Aging Reversal and Healthy Longevity is in Reach: Dependence on Mitochondrial DNA Heteroplasmy as a Key Molecular Target.
Stefano GB, Kream RM.
Med Sci Monit. 2017 Jun 5 [revised 2017 Jun 5];23:2732-2735. doi: 10.12659/MSM.902515.
Recent trends in biomedical research have highlighted the potential for effecting significant extensions in longevity with enhanced quality of life in aging human populations. Within this context, any proposed method to achieve enhanced life extension must include therapeutic approaches that draw upon essential biochemical and molecular regulatory processes found in relatively simple single cell organisms that are evolutionarily conserved within complex organ systems of higher animals. Current critical thinking has established the primacy of mitochondrial function in maintaining good health throughout plant and animal phyla. The mitochondrion represents an existentially defined endosymbiotic model of complex organelle development driven by evolutionary modification of a permanently enslaved primordial bacterium. Cellular mitochondria are biochemically and morphologically tailored to provide exponentially enhanced ATP-dependent energy production accordingly to tissue- and organ-specific physiological demands. Thus, individual variations in longevity may then be effectively sorted according to age-dependent losses of single-cell metabolic integrity functionally linked to impaired mitochondrial bioenergetics within an aggregate presentation of compromised complex organ systems. Recent empirical studies have focused on the functional role of mitochondrial heteroplasmy in the regulation of normative cellular processes and the initiation and persistence of pathophysiological states. Accordingly, elucidation of the multifaceted functional roles of mitochondrial heteroplasmy in normal aging and enhanced longevity will provide both a compelling genetic basis and potential targets for therapeutic intervention to effect meaningful life extension in human populations.
DNA, Mitochondrial; Free Radicals; Genome, Mitochondrial; Longevity; Mutation; Oocytes
Body height and mortality - mortality follow-up of three Swiss surveys.
Rohrmann S, Haile SR, Staub K, Bopp M, Fäh D; Swiss National Cohort Study Group.
Prev Med. 2017 Jun 1. pii: S0091-7435(17)30185-8. doi: 10.1016/j.ypmed.2017.05.023. [Epub ahead of print]
Adult body height is largely determined by genetics, but also by dietary factors, which in turn depend on socioeconomic status and lifestyle. We examined the association between adult body height and mortality in Switzerland, a country with three main language regions with different cultural background.
We included 16,831 men and 18,654 women, who participated in Swiss population-based health surveys conducted 1977-1993 and who were followed up until end of 2008. Multivariable Cox proportional hazards models were computed to examine the association of body height with overall, cardiovascular, and cancer mortality.
We observed a positive association between adult body height and all-cause mortality in women (HR=1.34, 95% CI 1.10-1.62, tallest vs. average women). In men, mortality risk decreased with increasing height, with shortest men tending to have higher (1.06, 0.94-1.19) and tallest men a lower (0.94, 0.77-1.14) risk compared with men of average height (p-trend 0.0001). Body height was associated with cancer mortality in women, such that tallest women had a higher risk of dying from cancer than women of average height (1.37, 1.02-1.84), but there was no such association in men (0.95, 0.69-1.30). In both sexes, height was not associated with cardiovascular mortality in a statistically significant manner.
Our study does not support an inverse association of body height with all-cause mortality. On the contrary, our data suggests a higher overall risk in taller women, mainly driven by a positive association between body height and cancer mortality.
Cognitive decline in normal aging and its prevention: a review on non-pharmacological lifestyle strategies.
Klimova B, Valis M, Kuca K.
Clin Interv Aging. 2017 May 25;12:903-910. doi: 10.2147/CIA.S132963. eCollection 2017. Review.
The purpose of this study is to examine the effects of the selected non-pharmacological lifestyle activities on the delay of cognitive decline in normal aging. This was done by conducting a literature review in the four acknowledged databases Web of Science, Scopus, MEDLINE, and Springer, and consequently by evaluating the findings of the relevant studies. The findings show that physical activities, such as walking and aerobic exercises, music therapy, adherence to Mediterranean diet, or solving crosswords, seem to be very promising lifestyle intervention tools. The results indicate that non-pharmacological lifestyle intervention activities should be intense and possibly done simultaneously in order to be effective in the prevention of cognitive decline. In addition, more longitudinal randomized controlled trials are needed in order to discover the most effective types and the duration of these intervention activities in the prevention of cognitive decline, typical of aging population groups.
benefits; cognitive impairment; healthy older individuals; intervention
Decreased alertness due to sleep loss increases pain sensitivity in mice.
Alexandre C, Latremoliere A, Ferreira A, Miracca G, Yamamoto M, Scammell TE, Woolf CJ.
Extended daytime and nighttime activities are major contributors to the growing sleep deficiency epidemic, as is the high prevalence of sleep disorders like insomnia. The consequences of chronic insufficient sleep for health remain uncertain. Sleep quality and duration predict presence of pain the next day in healthy subjects, suggesting that sleep disturbances alone may worsen pain, and experimental sleep deprivation in humans supports this claim. We demonstrate that sleep loss, but not sleep fragmentation, in healthy mice increases sensitivity to noxious stimuli (referred to as 'pain') without general sensory hyper-responsiveness. Moderate daily repeated sleep loss leads to a progressive accumulation of sleep debt and also to exaggerated pain responses, both of which are rescued after restoration of normal sleep. Caffeine and modafinil, two wake-promoting agents that have no analgesic activity in rested mice, immediately normalize pain sensitivity in sleep-deprived animals, without affecting sleep debt. The reversibility of mild sleep-loss-induced pain by wake-promoting agents reveals an unsuspected role for alertness in setting pain sensitivity. Clinically, insufficient or poor-quality sleep may worsen pain and this enhanced pain may be reduced not by analgesics, whose effectiveness is reduced, but by increasing alertness or providing better sleep.
Obesity/overweight reduces the risk of active tuberculosis: a nationwide population-based cohort study in Taiwan.
Yen YF, Hu HY, Lee YL, Ku PW, Lin IF, Chu D, Lai YJ.
Obesity affects immune function by increasing the number of T helper lymphocytes, which may reduce the risk of tuberculosis (TB) infection. However, the effect of obesity on TB development has not been extensively studied. This nationwide population-based cohort study investigated the effect of obesity on TB development in Taiwanese adults.
We included 46 028 adult participants (age ⩾18 years) from three rounds (2001, 2005 and 2009) of the Taiwan National Health Interview Survey. Obesity and overweight were defined as a body mass index (BMI) ⩾27 kg/m2 and 24-26.9 kg/m2, respectively. Data on BMI and other covariates at baseline were collected by in-person interviews. Incident cases of active TB were identified from the National Health Insurance database. Multivariable logistic regression was used to estimate the associations of obesity and overweight with active TB, with adjustment for age, sex, smoking, alcohol consumption, socioeconomic status and other covariates.
In total, 241 new cases of active TB occurred during the study period. Obesity (adjusted odds ratio [AOR], 0.43; 95% confidence interval [CI], 0.28-0.67) and overweight (AOR, 0.67; 95% CI, 0.49-0.91) were associated with lower risk of incident TB, after adjusting for demographic characteristics and comorbidities. There was a linear dose-response relation of BMI with active TB incidence (AOR per unit change in BMI, 0.92; 95% CI, 0.88-0.95; P <0.001).
Obesity and overweight are associated with lower risk of active TB. Future studies should investigate the underlying mechanisms and clinical and epidemiological consequences of these findings.
Food components and ocular pathophysiology: a critical appraisal of the role of oxidative mechanisms.
Raman R, Vaghefi E, Braakhuis AJ.
Asia Pac J Clin Nutr. 2017;26(4):572-585. doi: 10.6133/apjcn.082016.01.
Three of the major ocular diseases, namely cataracts, age-related macular degeneration and glaucoma, are associated with oxidative damage. Disease risk and progression may be reduced through consumption of dietary components. The aim was to critically examine the literature on dietary and supplemental intakes of fruit and vegetables, meat, antioxidants (vitamins C, E and A), calcium, folate, and iron, and their association with ocular disease.
METHODS AND STUDY DESIGN:
Google Scholar and key references from texts and publications were searched using the search terms (eye disease, antioxidants) and (vision, nutrition), with no date restriction; only articles in English were included.
We found probable evidence that dietary intake of fruits and vegetables, and vitamin C lowered incidence of cataracts and age-related macular degeneration. In high supplemental doses, vitamin C increases macular degeneration risk. Vitamin A from food was protective for cataracts and glaucoma, but not in supplemental form. Vitamin A was associated with lower incidence of macular degeneration. We also found probable evidence that higher intakes of meat increased the risk of cataracts and macular degeneration. Dietary calcium and iron appeared protective against glaucoma, but not in supplemental form.
While a nutrient rich diet high in fruit and vegetables, and associated antioxidants appeared to be protective, we would caution intake of supplementary antioxidants for those with ocular disease.
Satisfying America's Fruit Gap: Summary of an Expert Roundtable on the Role of 100% Fruit Juice.
Byrd-Bredbenner C, Ferruzzi MG, Fulgoni VL 3rd, Murray R, Pivonka E, Wallace TC.
J Food Sci. 2017 Jun 6. doi: 10.1111/1750-3841.13754. [Epub ahead of print] Review.
http://sci-hub.cc/10.1111/1750-3841.13754
The 2015 to 2020 Dietary Guidelines for Americans (DGAs) recognize the role of 100% fruit juice in health and in helping people meet daily fruit recommendations and state that 100% fruit juice is a nutrient-dense beverage that should be a primary choice, along with water and low-fat/fat-free milk. The DGAs note that children are consuming 100% fruit juice within recommendations (that is, 120 to 180 mL/d for children aged 1 to 6 y and 236 to 355 mL/d for children aged 7 to 18 y). Evidence shows that compared to nonconsumers, those who consume 100% fruit juice come closer to meeting daily fruit needs and have better diet quality. In children, 100% fruit juice is associated with increased intakes of nutrients such as vitamin C, folate, and potassium. When consumed within the DGA recommendations, 100% fruit juice is not associated with overweight/obesity or childhood dental caries and does not compromise fiber intake. Preliminary data suggest that polyphenols in some 100% fruit juices may inhibit absorption of naturally occurring sugars. Given its role in promoting health and in helping people meet fruit needs, experts participating in a roundtable discussion agreed that there is no science-based reason to restrict access to 100% fruit juice in public health nutrition policy and programs such as the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC). Reducing or eliminating 100% fruit juice could lead to unintended consequences such as reduced daily fruit intake and increased consumption of less nutritious beverages (for example, sugar-sweetened beverages).
100% fruit juice; diet quality; dietary guidelines; nutrient intake; nutrition policy
"Author Disclosures: CBB has no conflicts of interest to disclose; MGF is a member of the Advisory Boards for Alliance for Potato Research and Education, Sensient Technologies, and Welch's. MGF is a consultant for General Mills and Unilever and has research funded, in part, by General Mills, Welch's, Pepsico Global, and Alliance for Potato Research and Education; VLF III is the Senior Vice President of Nutrition Impact LLC, where he performs consulting and database analyses for various food and beverage companies and related entities; RM is a member of the Speaker's Bureau for National Dairy Council and Abbott Nutrition, and a consultant for Dannon Co., Sabra Dipping Co., Egg Nutrition Board, Hass Avocado Board, and National Cattlemen's Beef Association; EP is the CEO of Produce for Better Health, which receives contributions from more than 350 members of the fruit/vegetable industry, including $10,000 annually from Welch's; TCW is the Principal Consultant at Think Healthy Group, LLC, and performs consulting and clinical research for various food, beverage, and dietary supplement companies. All authors received an honorarium for their participation in the roundtable, which was hosted by Welch's and facilitated by FoodMinds LLC, a food and nutrition communications and consulting company that works with Welch's."
Discrimination ability of comorbidity, frailty, and subjective health to predict mortality in community-dwelling older people: Population based prospective cohort study.
Kusumastuti S, Gerds TA, Lund R, Mortensen EL, Westendorp RGJ.
Eur J Intern Med. 2017 Jun 2. pii: S0953-6205(17)30214-5. doi: 10.1016/j.ejim.2017.05.016. [Epub ahead of print]
http://sci-hub.cc/10.1016/j.ejim.2017.05.016
To investigate the added value of comorbidity, frailty, and subjective health to mortality predictions in community-dwelling older people and whether it changes with increasing age.
36,751 community-dwelling subjects aged 50-100 from the longitudinal Survey of Health, Ageing, and Retirement in Europe.
Mortality risk associated with Comorbidity Index, Frailty Index, Frailty Phenotype, and subjective health was analysed using Cox regression. The extent to which health indicators modified individual mortality risk predictions was examined and the added ability to discriminate mortality risks was assessed.
MAIN OUTCOME MEASURES:
Three-year mortality risks, hazard ratios, change in individual mortality risks, three-year area under the receiver operating characteristic curve (AUC).
Three-year mortality risks increased 41-fold within an age span of 50 years. Hazard ratios per change in health indicator became less significant with increasing age (p-value<0.001). AUC for three-year mortality prediction based on age and sex was 76.9% (95% CI 75.5% to 78.3%). Information on health indicators modified individual three-year mortality risk predictions by up to 30%, both upwards and downwards, each adding <2% discriminative power. The added discrimination ability of all health indicators gradually declined from an extra 4% at age 50-59 to <1% in the oldest old. Trends were similar for one-year mortality and did not differ between sexes, levels of education, or household income.
Calendar age encompasses most of the discrimination ability to predict mortality. The added value of comorbidity, frailty, and subjective health to mortality predictions decreases with increasing age.
Mortality; Old age; Prognosis; Risk assessment; Survival analysis
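The AUC figures reported above (76.9% for age and sex; <2% added per health indicator) are the probability that a model scores a randomly chosen case above a randomly chosen control. A minimal pure-Python sketch of that computation, for illustration only (not the study's analysis code), is:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic.

    scores : risk predictions (higher = higher predicted mortality risk)
    labels : 1 if the subject died within the horizon, 0 otherwise
    Ties contribute 0.5, matching the usual AUC definition.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one case and one control")
    # Fraction of case/control pairs where the case outranks the control.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The "added discrimination" of a health indicator is then simply the AUC of the age-sex-indicator model minus the AUC of the age-sex model.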
Male Centenarians: How and Why Are They Different from Their Female Counterparts?
Perls TT.
J Am Geriatr Soc. 2017 Jun 6. doi: 10.1111/jgs.14978. [Epub ahead of print] No abstract available.
http://onlinelibrary.wiley.com.sci-hub.cc/doi/10.1111/jgs.14978/abstract
Characteristics and Incidence of Chronic Illness in Community-Dwelling Predominantly Male U.S. Veteran Centenarians.
Kheirbek RE, Fokar A, Shara N, Bell-Wilson LK, Moore HJ, Olsen E, Blackman MR, Llorente MD.
J Am Geriatr Soc. 2017 Apr 19. doi: 10.1111/jgs.14900. [Epub ahead of print]
http://sci-hub.cc/10.1111/jgs.14900
To assess the incidence of chronic illness and its effect on veteran centenarians.
Retrospective longitudinal cohort study.
United States Veterans Affairs Corporate Data Warehouse (CDW).
Community-dwelling veterans born between 1910 and 1915 who survived to at least age 80 (N = 86,892; 31,121 octogenarians, 52,420 nonagenarians, 3,351 centenarians).
The Kaplan-Meier method was used to estimate cumulative incidence of chronic conditions according to age group. Incidence rates were compared using the log-rank test. Cox proportional hazards models were used to estimate unadjusted hazard ratios.
Ninety-seven percent of centenarians were male, 88.0% were white, 31.8% were widowed, 87.5% served in World War II, and 63.9% did not have a service-related disability. The incidence rates of chronic illnesses were higher in octogenarians than centenarians (atrial fibrillation, 15.0% vs 0.6%, P < .001; heart failure, 19.3% vs 0.4%, P < .001; chronic obstructive pulmonary disease, 17.9% vs 0.6%, P < .001; hypertension, 29.6% vs 3.0%, P < .001; end-stage renal disease, 7.2% vs 0.1%, P < .001; malignancy, 14.1% vs 0.6%, P < .001; diabetes mellitus, 11.1% vs 0.4%, P < .001; stroke, 4.6% vs 0.4%, P < .001) and in nonagenarians than centenarians (atrial fibrillation, 13.2% vs 3.5%, P < .001; heart failure, 15.8% vs 3.3%, P < .001; chronic obstructive pulmonary disease, 11.8% vs 3.5%, P < .001; hypertension, 27.2% vs 12.8%, P < .001; end-stage renal disease, 11.9% vs 4.5%, P < .001; malignancy, 8.6% vs 2.3%, P < .001; diabetes mellitus, 7.5% vs 2.2%, P < .001; and stroke, 3.5% vs 1.3%, P < .001).
In a large cohort of predominantly male community-dwelling elderly veterans, centenarians had a lower incidence of chronic illness than those in their 80s and 90s, demonstrating similar compression of morbidity and extension of health span observed in other studies.
centenarians; chronic illness; incidence; nonagenarians; octogenarians; veterans
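The Kaplan-Meier cumulative incidence estimates used in the veteran study handle censoring (subjects who leave follow-up without the event). A generic sketch of the estimator, for illustration only, is:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate S(t).

    times  : follow-up time for each subject
    events : 1 if the endpoint (e.g. diagnosis) occurred, 0 if censored
    Returns a list of (time, survival) steps at each event time;
    cumulative incidence at time t is 1 - S(t).
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    steps = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = at_t = 0
        # Group all subjects whose follow-up ends at time t.
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            at_t += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
            steps.append((t, surv))
        n_at_risk -= at_t  # events and censorings both leave the risk set
    return steps
```

The log-rank test the authors report then compares these incidence curves between the age groups.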
Deletion of ghrelin prevents aging-associated obesity and muscle dysfunction without affecting longevity.
Guillory B, Chen JA, Patel S, Luo J, Splenser A, Mody A, Ding M, Baghaie S, Anderson B, Iankova B, Halder T, Hernandez Y, Garcia JM.
Aging Cell. 2017 Jun 6. doi: 10.1111/acel.12618. [Epub ahead of print]
During aging, decreases in energy expenditure and locomotor activity lead to body weight and fat gain. Aging is also associated with decreases in muscle strength and endurance leading to functional decline. Here, we show that lifelong deletion of ghrelin prevents development of obesity associated with aging by modulating food intake and energy expenditure. Ghrelin deletion also attenuated the decrease in phosphorylated adenosine monophosphate-activated protein kinase (pAMPK) and downstream mediators in muscle, and increased the number of type IIa (fatigue resistant, oxidative) muscle fibers, preventing the decline in muscle strength and endurance seen with aging. Longevity was not affected by ghrelin deletion. Treatment of old mice with pharmacologic doses of ghrelin increased food intake, body weight, and muscle strength in both ghrelin wild-type and knockout mice. These findings highlight the relevance of ghrelin during aging and identify a novel AMPK-dependent mechanism for ghrelin action in muscle.
Sarcopenia; frailty; growth hormone; growth hormone secretagogue receptor; inflammation; wasting
Dietary behaviours, weight loss attempts and change in waist circumference: 15-year longitudinal study in Australian adults.
Arabshahi S, Lahmann PH, Hughes MC, Williams GW, van der Pols JC.
Dietary behaviours are suitable as clearly identifiable targets of dietary counselling to prevent weight gain. We therefore investigated associations between dietary behaviours, weight loss attempts and waist circumference change.
Participants were a community-based sample population residing in Nambour, Australia, including 1,317 adults, aged 25-75 years at baseline. Waist circumference was measured in 1992 and 2007, and dietary behaviours data were derived concurrently from repeated self-completed short dietary questions. Multivariable models, stratified by sex, were adjusted for potential confounders.
In men, consumption of visible fat on meat and in women, weight loss attempts in the last 10 years were the most important predictors of waist circumference gain independent of socio-demographic and lifestyle characteristics and energy intake. Men who consumed most visible fat on meat had a 2.6 times larger yearly increase in waist circumference than men who tended to cut the fat off meat: 0.47 (95% CI 0.23, 0.72) vs 0.18 (95% CI 0.01, 0.34) cm/year, p=0.01. Women who reported that they were always trying to lose weight had a 2.7 times larger yearly increase in waist circumference than women who never tried to lose weight: 0.78 (0.54, 1.02) vs 0.29 (0.06, 0.52) cm/year, p=0.0001. Other dietary behaviours were not associated with change in waist circumference.
Consumption of visible fat on meat by men and more frequent attempts to lose weight by women were main dietary behaviours associated with gain in abdominal adiposity in Australian adults.
High consumption of salt-fermented vegetables and hypertension risk in adults: a 12-year follow-up study.
Song HJ, Park SJ, Jang DJ, Kwon DY, Lee HJ.
The aim of this study was to investigate the causal relationship between high consumption of salt-fermented vegetables and hypertension risk in adults.
Data came from the Korean Genome and Epidemiology Study, an ongoing community-based cohort study that began in 2001. In the final analysis, a total of 5,932 participants (men=2,822, women=3,110) was included. Daily energy, nutrient, and major salt-fermented vegetables for Korean (kimchi) intakes were assessed using a semi-quantitative food frequency questionnaire. Relative risks and 95% CIs associated with kimchi intake by gender and body mass index (BMI) were estimated using the multivariate Cox proportional hazards regression model.
Out of the 5,932 participants, 1,798 (905 men, 893 women) developed hypertension during the 12-year follow-up period. A significant difference in baseline BMI was shown between the non-hypertension and hypertension groups. There was no significant difference in the risk of developing hypertension across quintiles of total kimchi intake or quartiles of specific kimchi intake in multivariate models by gender and baseline BMI. The trend for increased risk of hypertension with increasing quartile of watery kimchi intake was significant for obese men in the multivariate model (p<0.05).
High consumption of salt-fermented vegetables was not shown to be associated with increased risk of hypertension. The trend for increased risk of hypertension according to increasing quartile of watery kimchi intake was significant only in obese men.
Long-term a posteriori dietary patterns and risk of hip fractures in a cohort of women.
Warensjö Lemming E, Byberg L, Melhus H, Wolk A, Michaëlsson K.
Eur J Epidemiol. 2017 Jun 5. doi: 10.1007/s10654-017-0267-6. [Epub ahead of print]
Dietary pattern analysis is a useful tool to study the importance of food components in the context of a diet and how they relate to health and disease. The association between dietary patterns and fractures is at present uncertain. We aimed to study associations between dietary patterns and risk of hip fracture in the Swedish Mammography Cohort, including 56,736 women (median baseline age 52 years). Diet data were collected in food frequency questionnaires at two investigations, and dietary patterns were defined by principal component analysis using 31 food groups. Information on hip fractures was collected from the Swedish National Patient Register. Multivariable adjusted hazard ratios (HR) with 95% confidence intervals (CI) were estimated in Cox proportional hazards regression analysis. The two patterns identified, the healthy and the Western/convenience dietary patterns, were time-updated and analysed. During a median follow-up time of 25.5 years, 4997 women experienced a hip fracture. The hip fracture rate was 31% lower in the highest compared to the lowest quartile of the healthy dietary pattern [HR (95% CI) 0.69 (0.64; 0.75)]. In contrast, women in the highest compared to the lowest quartile of the Western/convenience dietary pattern had a 50% higher [HR (95% CI) 1.50 (1.38; 1.62)] hip fracture rate. Further, in each stratum of the Western/convenience dietary pattern, higher adherence to a healthy dietary pattern was associated with fewer hip fractures. The present results suggest that a varied healthy diet may be beneficial for the prevention of fragility fractures in women.
Dietary pattern; Food frequency questionnaire; Healthy dietary pattern; Hip fractures; Principal component analysis; Western dietary pattern
Effect of probiotic Lactobacillus on lipid profile: A systematic review and meta-analysis of randomized, controlled trials.
Wu Y, Zhang Q, Ren Y, Ruan Z.
PLoS One. 2017 Jun 8;12(6):e0178868. doi: 10.1371/journal.pone.0178868. eCollection 2017.
To assess the efficacy of probiotic Lactobacillus on serum lipids using a meta-analysis of randomized, controlled trials.
Fifteen studies comprising 15 trials with 976 subjects were included. The pooled weighted mean difference (WMD) was calculated with a random-effects model.
Probiotic Lactobacillus consumption significantly reduced TC by 0.26mmol/l (95% CI, -0.40 to -0.12) and LDL-C by 0.23mmol/l (95% CI, -0.36 to -0.10). Subgroup analysis of trials found a significant reduction of TC using L. plantarum and a reduction of LDL-C using L. plantarum or L. reuteri. No significant effects were found on TG and HDL-C levels after supplementation with probiotic Lactobacillus. However, subgroup analysis found significant beneficial effects on TG and HDL-C from consuming synbiotic food containing L. sporogenes and inulin.
Consuming probiotic Lactobacillus, especially L. reuteri and L. plantarum, could reduce TC and LDL-C significantly. The study also suggested significant beneficial effects on TG and HDL-C from consuming synbiotic food containing L. sporogenes and inulin.
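A random-effects pooled WMD of the kind reported in this meta-analysis is commonly computed with the DerSimonian-Laird estimator. The sketch below is a generic illustration of that method (the abstract does not specify the exact estimator used):

```python
import math

def dersimonian_laird(effects, ses):
    """Pool mean differences with a DerSimonian-Laird random-effects model.

    effects : per-trial mean differences (e.g. change in TC, mmol/l)
    ses     : their standard errors
    Returns (pooled_effect, ci_low, ci_high) with a normal-approximation 95% CI.
    """
    w = [1.0 / se**2 for se in ses]                        # fixed-effect weights
    fe = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fe)**2 for wi, e in zip(w, effects)) # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-trial variance
    w_re = [1.0 / (se**2 + tau2) for se in ses]            # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
```

With homogeneous trials (Q ≤ df) the between-trial variance tau² is zero and the result reduces to the fixed-effect estimate.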
Effects of vitamin D or its analogues on the mortality of patients with chronic kidney disease: an updated systematic review and meta-analysis.
Lu RJ, Zhu SM, Tang FL, Zhu XS, Fan ZD, Wang GL, Jiang YF, Zhang Y.
Eur J Clin Nutr. 2017 Jun;71(6):683-693. doi: 10.1038/ejcn.2017.59. Epub 2017 May 10. Review.
The objective of this study was to assess whether vitamin D (VD) treatment alters the overall all-cause and cardiovascular mortalities in a chronic kidney disease (CKD) population. We systematically searched PubMed, EMBASE, Web of Science, and Cochrane Central Register of Controlled Trials without language restriction, until the publication date of 22 February 2016. All related literatures that compared VD treatment with non-VD treatment and reported the mortality of patients with CKD (including those undergoing dialysis) were identified. Pooled risk ratios (RR) and 95% confidence intervals (CI) were calculated by using the random- and fixed-effects models. Randomised controlled trials (RCTs) that used the intention-to-treat principle and observational studies (OSs) were analysed separately. For this study, 38 studies involving 223 429 patients (17 RCTs, n=1819 and 21 OSs, n=221610) were included. In the OSs, VD treatment was significantly associated with reductions in both all-cause and cardiovascular mortalities; however, such significant association was not found in the RCTs. The existing RCTs do not provide sufficient or precise evidence that VD supplementation affects the mortality of patients with CKD, although subsets of patients that could potentially benefit from VD treatment can be identified by using the existing data from the RCTs. Nevertheless, large-size RCTs are needed in the future to assess any potential differences in survival prospectively.
Income and Cancer Overdiagnosis — When Too Much Care Is Harmful
H. Gilbert Welch, M.D., M.P.H., and Elliott S. Fisher, M.D., M.P.H.
N Engl J Med 2017; 376:2208-2209June 8, 2017DOI: 10.1056/NEJMp1615069
http://sci-hub.cc/10.1056/NEJMp1615069
There are reasons to wonder whether people with higher incomes receive too much medical care. Cancer screening is one area where overutilization can cause harm, resulting in overdiagnosis and potentially unnecessary treatment.
Synchronic inverse seasonal rhythmus of energy density of food intake and sleep quality: a contribution to chrono-nutrition from a Polish adult population.
Stelmach-Mardas M, Iqbal K, Mardas M, Schwingshackl L, Walkowiak J, Tower RJ, Boeing H.
Eur J Clin Nutr. 2017 Jun;71(6):718-722. doi: 10.1038/ejcn.2016.229. Epub 2016 Nov 30.
There is evidence which suggests that sleep behavior and dietary intake are interlinked. Thus, we investigated whether a seasonal rhythm in food-energy density exists, and how this relates to quality of sleep.
Two hundred and thirty adult volunteers were investigated across the four seasons. Anthropometric measurements were obtained, and the Pittsburgh Sleep Quality Index was used to assess sleep quality and disturbances. The dietary intake was evaluated using a 24 h dietary recall. Generalized estimating equations were used to estimate seasonal changes in energy density and sleep quality, as well as the association of energy density with sleep quality. All analyses were adjusted for age, sex, education, occupation and shift-work.
Mean food energy density was significantly higher in winter as compared with other seasons (P<0.05), although no seasonal variations were observed in macronutrient intake (fat and protein). Overall, the sleep quality was low (score value >5) in all seasons, with the lowest quality occurring in winter and the highest in spring (P<0.05). The components of sleep quality score showed that winter had statistically (P<0.05) poorer subjective sleep quality, sleep latency and sleep disturbances, but lower daytime dysfunction compared with spring and summer. After adjusting for seasonal effects (correlated outcome data) and shift-work, energy density was found to be inversely associated (P<0.0001) with sleep quality.
An inverse association between the seasonal fluctuation of food energy density and sleep quality was found, with winter associated with intake of more energy-dense food products and the lowest sleep quality.
Vegetarian diet as a risk factor for symptomatic gallstone disease.
McConnell TJ, Appleby PN, Key TJ.
Eur J Clin Nutr. 2017 Jun;71(6):731-735. doi: 10.1038/ejcn.2016.252. Epub 2017 Mar 8.
Previous small studies have shown either no difference or a lower risk of symptomatic gallstone disease in vegetarians than in non-vegetarians. This study examined the incidence of symptomatic gallstone disease in a cohort of British vegetarians and non-vegetarians, and investigated the associations between nutrient intake and risk of symptomatic gallstone disease.
The data were analysed from 49 652 adults enrolled in the European Prospective Investigation into Cancer and Nutrition (EPIC)-Oxford study, one-third of whom were vegetarian. The linked databases of hospital records were used to identify incident cases. Risk by diet group was estimated using Cox proportional hazards models. Further analysis quantified risk by intakes of selected macronutrients.
There were 1182 cases of symptomatic gallstone disease during 687 822 person-years of follow-up (mean=13.85 years). There was a large significant association between increasing body mass index (BMI) and risk of developing symptomatic gallstone disease (overall trend P<0.001). After adjustment for BMI and other risk factors, vegetarians had a moderately increased risk compared with non-vegetarians (HR: 1.22; 95% CI: 1.06-1.41; P=0.006). Although starch consumption was positively associated with gallstone risk (P=0.002 for trend), it did not explain the increased risk in vegetarians.
There is a highly significant association of increased BMI with risk of symptomatic gallstone disease. After adjusting for BMI, there is a small but statistically significant positive association between vegetarian diet and symptomatic gallstone disease.
Ketones and Human Performance.
Scott JM, Deuster PA.
J Spec Oper Med. 2017 Summer;17(2):112-116.
Everyone is seeking nutritional strategies that might benefit performance. One approach receiving much attention is ketones, or ketosis. Ketones are very simple compounds made of hydrogen, carbon, and oxygen, and ketosis is a metabolic state whereby the body uses predominantly ketones. Ketosis can be achieved by fasting for longer than 72 hours or by following a very low-carbohydrate, high-fat diet (ketogenic diet) for several days to weeks. Alternatively, ketone supplements purportedly induce ketosis rapidly and do not require strict adherence to any specific type of diet; however, much of the touted benefits are anecdotal. A potential role for ketosis as a performance enhancer was first introduced in 1983 with the idea that chronic ketosis without caloric restriction could preserve submaximal exercise capability by sparing glycogen or conserving the limited carbohydrate stores. Few human studies on the effects of a ketogenic diet on performance have yielded positive results; most have yielded equivocal or null results, and a few negative results. Many questions about ketones relevant to Special Operations Forces (SOF) remain unanswered. At present, a ketogenic diet and/or a ketone supplement do not appear to confer performance benefits for SOF. Instead, Operators should engage with their unit dietitian to develop individualized nutritional strategies based on unique mission requirements. The authors review the concept of a ketogenic diet, describe some potential benefits and risks of ketosis, review the performance literature and how to measure ketone status, and then summarize the landscape in 2017.
Rapamycin reduces Drosophila longevity under low nutrition.
Villa-Cuesta E, Fan F, Rand DM.
IOSR J Pharm. 2014 Aug;4(8):43-51. doi: 10.9790/3013-0408043051.
Rapamycin treatment is considered a pharmacological intervention with the potential to mimic the longevity benefits of dietary manipulations. However, how rapamycin interacts with nutrition is not fully understood. Here we studied the effect of rapamycin on the longevity of Drosophila under a range of dietary conditions. In diets low in nutrients, rapamycin reduced longevity in a dosage-dependent manner. This dosage effect requires some nutrients as rapamycin has no impact on survival under starvation conditions. Under a balanced diet of yeast and sugar, rapamycin had no repeatable beneficial effect on organismal longevity. These results show that the effect of rapamycin on longevity is sensitive to the nutritional environment and it can reduce lifespan when nutrients are limited.
Longevity; Nutrition; Rapamycin
Intake of dairy foods and risk of Parkinson disease.
Hughes KC, Gao X, Kim IY, Wang M, Weisskopf MG, Schwarzschild MA, Ascherio A.
Neurology. 2017 Jun 8. pii: 10.1212/WNL.0000000000004057. doi: 10.1212/WNL.0000000000004057. [Epub ahead of print]
To prospectively examine the association between commonly consumed dairy products and the risk of Parkinson disease (PD) in women and men.
Analyses were based on data from 2 large prospective cohort studies, the Nurses' Health Study (n = 80,736) and the Health Professionals Follow-up Study (n = 48,610), with a total of 26 and 24 years of follow-up, respectively. Both US-based studies were conducted via mailed biennial questionnaires. Dietary intake was assessed with food frequency questionnaires administered repeatedly over the follow-up period. Incident cases of PD (n = 1,036) were identified via questionnaires and subsequently confirmed by reviewing medical records. We also conducted a meta-analysis to combine our study with 3 previously published prospective studies on total milk intake and PD risk and 1 study on total dairy intake and PD risk.
While total dairy intake was not significantly associated with PD risk in our cohorts, intake of low-fat dairy foods was associated with PD risk. The pooled, multivariable-adjusted hazard ratio (HR) comparing people who consumed at least 3 servings of low-fat dairy per day to those who consumed none was 1.34 (95% confidence interval [CI] 1.01-1.79, p trend = 0.04). This association appeared to be driven by an increased risk of PD associated with skim and low-fat milk (HR 1.39, 95% CI 1.12-1.73, p trend <0.01). Results were similar in women and men (p for heterogeneity >0.05). In the meta-analysis, the pooled relative risk comparing extreme categories of total milk intake was 1.56 (95% CI 1.30-1.88), and the association between total dairy and PD became significant (HR 1.27, 95% CI 1.04-1.55).
Frequent consumption of dairy products appears to be associated with a modest increased risk of PD in women and men.
Association of Vegetable Nitrate Intake With Carotid Atherosclerosis and Ischemic Cerebrovascular Disease in Older Women.
Bondonno CP, Blekkenhorst LC, Prince RL, Ivey KL, Lewis JR, Devine A, Woodman RJ, Lundberg JO, Croft KD, Thompson PL, Hodgson JM.
Stroke. 2017 Jun 8. pii: STROKEAHA.117.016844. doi: 10.1161/STROKEAHA.117.016844. [Epub ahead of print]
BACKGROUND AND PURPOSE:
A short-term increase in dietary nitrate (NO3-) improves markers of vascular health via formation of nitric oxide and other bioactive nitrogen oxides. Whether this translates into long-term vascular disease risk reduction has yet to be examined. We investigated the association of vegetable-derived nitrate intake with common carotid artery intima-media thickness (CCA-IMT), plaque severity, and ischemic cerebrovascular disease events in elderly women (n=1226).
Vegetable nitrate intake, lifestyle factors, and cardiovascular disease risk factors were determined at baseline (1998). CCA-IMT and plaque severity were measured using B-mode carotid ultrasound (2001). Complete ischemic cerebrovascular disease hospitalizations or deaths (events) over 14.5 years (15 032 person-years of follow-up) were obtained from the West Australian Data Linkage System.
Higher vegetable nitrate intake was associated with a lower maximum CCA-IMT (B=-0.015, P=0.002) and lower mean CCA-IMT (B=-0.012, P=0.006). This relationship remained significant after adjustment for lifestyle and cardiovascular risk factors (P≤0.01). Vegetable nitrate intake was not a predictor of plaque severity. In total 186 (15%) women experienced an ischemic cerebrovascular disease event. For every 1 SD (29 mg/d) higher intake of vegetable nitrate, there was an associated 17% lower risk of 14.5-year ischemic cerebrovascular disease events in both unadjusted and fully adjusted models (P=0.02).
Independent of other risk factors, higher vegetable nitrate was associated with a lower CCA-IMT and a lower risk of an ischemic cerebrovascular disease event.
atherosclerosis; cardiovascular disease; cerebrovascular disease; nitrates; vegetables
Insulin-like growth factor-1, IGF binding protein-3, and the risk of esophageal cancer in a nested case-control study.
Adachi Y, Nojima M, Mori M, Yamashita K, Yamano HO, Nakase H, Endo T, Wakai K, Sakata K, Tamakoshi A.
World J Gastroenterol. 2017 May 21;23(19):3488-3495. doi: 10.3748/wjg.v23.i19.3488.
To assess the relationship between serum levels of insulin-like growth factor-1 (IGF1)/IGF-binding protein-3 (IGFBP3) and the risk of esophageal carcinoma.
We assessed the relationship between the serum levels of these molecules and the risk of esophageal cancer in a prospective, nested case-control study of participants from the Japan Collaborative Cohort Study. A baseline survey was conducted from 1988 to 1990. Of the 110585 enrolled participants, 35% donated blood samples. Those who had been diagnosed with esophageal cancer were considered cases for nested case-control studies. A conditional logistic model was used to estimate odds ratios for the incidence of esophageal cancer associated with serum IGF1 and IGFBP3 levels.
Thirty-one cases and 86 controls were eligible for the present assessment. The molar ratio of IGF1/IGFBP3, which represents the free and active form of IGF1, was not correlated with the risk of esophageal carcinoma. A higher molar difference between IGFBP3 and IGF1, which estimates the free form of IGFBP3, was associated with a decreased risk of esophageal carcinoma (P = 0.0146), and people in the highest tertile had the lowest risk (OR = 0.107, 95%CI: 0.017-0.669). After adjustment for body mass index, tobacco use, and alcohol intake, the molar difference of IGFBP3-IGF1 was inversely correlated with the risk of esophageal carcinoma (P = 0.0150).
The free form of IGFBP3, which is estimated by this molar difference, may be inversely associated with esophageal cancer incidence.
Esophageal cancer; Insulin-like growth factor; Insulin-like growth factor binding protein; Nested case-control study; Odds ratio
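The molar ratio (IGF1/IGFBP3) and molar difference (IGFBP3 - IGF1) described above are unit conversions from serum mass concentrations. The sketch below illustrates the arithmetic; the molecular weights are approximate assumptions on my part (mature IGF-1 roughly 7.6 kDa, non-glycosylated IGFBP-3 core roughly 28.7 kDa), not values taken from the paper:

```python
# Assumed molecular weights (g/mol); approximate, for illustration only.
MW_IGF1 = 7_649
MW_IGFBP3 = 28_700

def to_nmol_per_l(ng_per_ml, mw):
    """Convert a serum concentration from ng/mL to nmol/L."""
    # ng/mL -> ng/L (x1000), then divide by g/mol to get nmol/L.
    return ng_per_ml * 1000.0 / mw

def molar_ratio(igf1_ng_ml, igfbp3_ng_ml):
    """IGF1/IGFBP3 molar ratio: proxy for the free, active form of IGF-1."""
    return to_nmol_per_l(igf1_ng_ml, MW_IGF1) / to_nmol_per_l(igfbp3_ng_ml, MW_IGFBP3)

def molar_difference(igf1_ng_ml, igfbp3_ng_ml):
    """IGFBP3 - IGF1 in nmol/L: proxy for the free form of IGFBP-3."""
    return to_nmol_per_l(igfbp3_ng_ml, MW_IGFBP3) - to_nmol_per_l(igf1_ng_ml, MW_IGF1)
```

Because IGFBP-3 normally circulates in molar excess of IGF-1, the difference is typically positive and the ratio well below 1.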
https://en.wikipedia.org/wiki/Ursolic_acid#Natural_occurrence
https://en.wikipedia.org/wiki/Ursolic_acid#Potential_biochemical_effects
Effect of Ursolic Acid on Metabolic Syndrome, Insulin Sensitivity, and Inflammation.
Ramírez-Rodríguez AM, González-Ortiz M, Martínez-Abundis E, Acuña Ortega N.
J Med Food. 2017 Jun 9. doi: 10.1089/jmf.2017.0003. [Epub ahead of print]
To evaluate the effect of ursolic acid on metabolic syndrome, insulin sensitivity, and inflammation. A randomized, double-blind, placebo-controlled clinical trial was carried out in 24 patients (30-60 years) with a diagnosis of metabolic syndrome without treatment. They were randomly assigned to two groups of 12 patients, each to receive orally 150 mg of ursolic acid or homologated placebo once a day for 12 weeks. Before and after the intervention, the components of metabolic syndrome, insulin sensitivity (Matsuda index), and inflammation profile (interleukin-6 and C-reactive protein) were evaluated. After ursolic acid administration, the remission of metabolic syndrome occurred in 50% of patients (P = .005) with significant differences in body weight (75.7 ± 11.5 vs. 71 ± 11 kg, P = .002), body mass index (BMI) (29.9 ± 3.6 vs. 24.9 ± 1.2 kg/m2, P = .049), waist circumference (93 ± 8.9 vs. 83 ± 8.6 cm, P = .008), fasting glucose (6.0 ± 0.5 vs. 4.7 ± 0.4 mmol/L, P = .002), and insulin sensitivity (3.1 ± 1.1 vs. 4.2 ± 1.2, P = .003). Ursolic acid administration leads to transient remission of metabolic syndrome, reducing body weight, BMI, waist circumference and fasting glucose, as well as increasing insulin sensitivity.
inflammation; insulin sensitivity; metabolic syndrome; ursolic acid
Liu P, Shen WQ, Chen HL.
J Wound Care. 2017 Jun 2;26(6):319-323. doi: 10.12968/jowc.2017.26.6.319.
Arginine improves healing and modulates inflammation and the immune response. This systematic review aimed to assess the effect of arginine-enriched enteral formulas in pressure ulcer (PU) healing.
Systematic computerised searches of PubMed, Web of Knowledge, Scopus, CENTRAL and CINAHL databases were performed from their inception to 20 January 2016. Randomised controlled trials (RCTs) were included in this systematic review. We used the Jadad scale as a quality assessment tool.
There were seven RCTs with 369 patients included in this systematic review; four RCTs assessed healing by PU area reduction. All of them reported that arginine-enriched enteral nutrition led to significantly improved PU healing compared with a standard hospital diet over 2-12 weeks of follow-up. Among these four RCTs, one enrolled malnourished patients, one enrolled non-malnourished patients, and the other two studies did not restrict the nutritional status of the patients. Using the Pressure Ulcer Scale for Healing (PUSH), four RCTs assessed healing of PUs, all reporting that arginine-enriched enteral nutrition resulted in a significant PUSH score improvement compared with control at follow-up. Using the Pressure Sore Status Tool (PSST), one RCT assessed healing of PUs, finding that patients receiving arginine had the lowest PSST scores compared with controls. An RCT compared healing with two doses of arginine (4.5g versus 9g), but no difference was found between the doses.
Evidence showed that arginine-enriched enteral nutrition led to a significant improvement in PU healing. It was effective not only in malnourished patients, but also in non-malnourished patients.
arginine; enteral nutrition; healing; pressure ulcer; systematic review
Changes in perceived uselessness and risks for mortality: evidence from a National sample of older adults in China.
Zhao Y, Dupre ME, Qiu L, Gu D.
BMC Public Health. 2017 Jun 9;17(1):561. doi: 10.1186/s12889-017-4479-1.
Self-perception of uselessness is associated with increased mortality risk in older adults. However, it is unknown whether and to what extent changes in perceived uselessness are associated with mortality risk.
Using four waves of national longitudinal data of older adults from China (2005, 2008, 2011, and 2014), this study examines the association between changes in perceived uselessness and risk of subsequent mortality. Perceived uselessness is classified into three major categories: high levels (always/often), moderate levels (sometimes), and low levels (seldom/never). Five categories are used to measure change over three-year intervals: (1) persistently high levels, (2) increases to moderate/high levels, (3) persistent moderate levels, (4) decreases to moderate/low levels, and (5) persistently low levels. Cox proportional hazard models were used to estimate mortality risk associated with changes in levels of perceived uselessness.
Compared to those with persistently low levels of perceived uselessness, those with persistently high levels of feeling useless had an 80% increased hazard ratio (HR) of mortality [HR 1.80, 95% CI: 1.57-2.08, p < 0.001]; and those with increasing levels, persistently moderate levels, and decreasing levels of perceived uselessness had 42% [HR 1.42, 95% CI: 1.27-1.59, p < 0.001], 50% [HR 1.50, 95% CI: 1.32-1.71, p < 0.001], and 23% [HR 1.23, 95% CI: 1.09-1.37, p < 0.001] increased hazard ratios of mortality, respectively, when background characteristics were taken into account. The associations were partially attenuated when socioeconomic, family/social support, behavioral, and health-related covariates were individually taken into account. Older adults with persistently high and moderate levels of perceived uselessness still exhibited significantly higher risks of mortality (16% [HR 1.16, 95% CI: 1.00-1.35, p < 0.05] and 22% [HR 1.22, 95% CI: 1.06-1.39, p < 0.05], respectively) after adjusting for all covariates, although no significant mortality risks were found for either increasing to moderate/high levels or decreasing to moderate/low levels of perceived uselessness.
Persistently high and moderate levels of perceived uselessness are associated with significant increases in mortality risk. These findings have important implications for promoting successful aging in China.
China; Older adults; Oldest-old; Perceived uselessness; Self-perceptions of aging; Young-old
Diet quality is associated with reduced incidence of cancer and self-reported chronic disease: Observations from Alberta's Tomorrow Project.
Solbak NM, Xu JY, Vena JE, Csizmadi I, Whelan HK, Robson PJ.
Prev Med. 2017 Jun 7. pii: S0091-7435(17)30213-X. doi: 10.1016/j.ypmed.2017.06.009. [Epub ahead of print]
The objective of this study was to assess diet quality using the Healthy Eating Index-2005 Canada (HEI-2005-Canada) and its association with risk of cancer and chronic disease in a sample of Alberta's Tomorrow Project (ATP) participants. Food frequency questionnaires completed by 25,169 participants (38% men; mean age 50.3 (9.2)) enrolled between 2000 and 2008 were used to calculate HEI-2005-Canada scores. Data from a subset of participants (n=10,735) who reported no chronic disease at enrollment were used to investigate the association between HEI-2005-Canada score and development of self-reported chronic disease at follow-up (2008). Participants were divided into HEI-2005-Canada score quartiles. Cox proportional hazards models were used to estimate hazard ratios (HR) and 95% confidence intervals (CI) for cancer and chronic disease incidence. In this cohort, mean HEI-2005-Canada scores for men and women were 50.9 and 55.5 (maximum range 0-100), respectively. In men, higher HEI-2005-Canada score (Q4 vs. Q1) was associated with lower cancer risk (HR (95% CI) 0.63 (0.49-0.83)) over the course of follow-up (mean (SD)=10.4 (2.3) years); the same was not observed in women. In contrast, higher overall HEI-2005-Canada score (Q4 vs. Q1) was associated with lower risk of self-reported chronic disease (0.85 (0.75-0.97)) in both men and women over follow-up (4.2 (2.3) years). In conclusion, in this cohort better diet quality was associated with a lower risk of cancer in men and lower risk of chronic disease in both sexes. Future studies with longer follow-up and repeated measures of diet may be helpful to elucidate sex-specific associations between dietary quality and disease outcomes.
Chronic disease; Cohort studies; Diet; Incidence; Neoplasms; Nutrition policy (guidelines)
Are dietary vitamin D, omega-3 fatty acids and folate associated with treatment results in patients with early rheumatoid arthritis? Data from a Swedish population-based prospective study.
Lourdudoss C, Wolk A, Nise L, Alfredsson L, Vollenhoven RV.
BMJ Open. 2017 Jun 10;7(6):e016154. doi: 10.1136/bmjopen-2017-016154.
Dietary intake of vitamin D and omega-3 fatty acids (FA) may be associated with superior response to antirheumatic treatments. In addition, dietary folate intake may be associated with worse response to methotrexate (MTX). The aim of this study was to investigate the association between dietary vitamin D, omega-3 FA, folate and treatment results of disease-modifying antirheumatic drugs (DMARDs) in patients with rheumatoid arthritis (RA).
This prospective study was based on data from the Epidemiological Investigation of Rheumatoid Arthritis (EIRA) study, and included 727 patients with early RA from 10 hospitals in Sweden. Data on dietary vitamin D, omega-3 FA and folate intake based on food frequency questionnaires were linked with data on European League Against Rheumatism (EULAR) response after 3 months of DMARD treatment. Associations between vitamin D, omega-3 FA, folate and EULAR response were analysed with logistic regression adjusted for potential confounders.
The majority of patients (89.9%) were initially treated with MTX monotherapy and more than half (56.9%) with glucocorticoids. Vitamin D and omega-3 FA were associated with good EULAR response (OR 1.80 (95% CI 1.14 to 2.83) and OR 1.60 (95% CI 1.02 to 2.53), respectively). Folate was not significantly associated with EULAR response (OR 1.20 (95% CI 0.75 to 1.91)). Similar results were seen in a subgroup of patients who were initially treated with MTX monotherapy at baseline.
Higher intake of dietary vitamin D and omega-3 FA during the year preceding DMARD initiation may be associated with better treatment results in patients with early RA. Dietary folate intake was not associated with worse or better response to treatment, especially to MTX. Our results suggest that some nutrients may be associated with enhanced treatment results of DMARDs.
Epidemiology; Nutrition & Dietetics; Rheumatology
A systematic review of peer-supported interventions for health promotion and disease prevention.
Ramchand R, Ahluwalia SC, Xenakis L, Apaydin E, Raaen L, Grimm G.
Prev Med. 2017 Jun 7. pii: S0091-7435(17)30212-8. doi: 10.1016/j.ypmed.2017.06.008. [Epub ahead of print] Review.
Prior research has examined peer programs with respect to specific peer roles (e.g., peer support) or specific health/wellness domains (e.g., exercise/diet), or has aggregated effects across roles and domains. We sought to conduct a systematic review that categorizes and assesses the effects of peer interventions to promote health and wellness by peer role, intervention type, and outcomes. We use evidence mapping to visually catalog and synthesize the existing research. We searched PubMed and WorldCat databases (2005 to 2015) and the New York Academy of Medicine Grey Literature Report (1999 to 2016) for English-language randomized controlled trials. We extracted study design, study participants, type of intervention(s), peer role(s), outcomes assessed and measures used, and effects from 116 randomized controlled trials. Maps were created to provide a visual display of the evidence by intervention type, peer role, outcome type, and significant vs null or negative effects. There are more null than positive effects across peer interventions, with notable exceptions: group-based interventions that use peers as educators or group facilitators commonly improve knowledge, attitudes, beliefs, and perceptions; peer educators also commonly improved social health/connectedness and engagement. Dyadic peer support influenced behavior change, and peer counseling shows promising effects on physical health outcomes. Programs seeking to use peers in public health campaigns can use evidence maps to identify interventions that have previously demonstrated beneficial effects. Those seeking to produce health outcomes may benefit from identifying the mechanisms by which they expect their program to produce these effects and associated proximal outcomes for future evaluations.
PROSPERO REGISTRATION NUMBER:
Although we attempted to register our protocol with PROSPERO, we did not meet eligibility criteria because we were past the data collection phase. The full PROSPERO-aligned protocol is available from the authors.
Peer group; Peer influence; Review
GREY HAIR LINKED WITH INCREASED HEART DISEASE RISK IN MEN
Topic(s): Prevention
– 8 April 2017:
Grey hair has been linked with an increased risk of heart disease in men, in research presented today at EuroPrevent 2017.
https://www.escardio.org/Sub-specialty-communities/European-Association-of-Preventive-Cardiology-(EAPC)/News/grey-hair-linked-with-increased-heart-disease-risk-in-men
>>>>>>>>>>>>>>>>>>>>>>>
Abstract: 760
The degree of hair graying in male gender as an independent risk factor for coronary artery disease, a prospective study
Amr Elfaramawy, Irini Samuel, Reham Darweesh, Ahmed Shehata, Heba Farouk, Hossam Kandil; Cairo University, Kasr Al-Ainy Hospital, Faculty of Medicine, Department of Cardiology, Cairo, Egypt
Risk factors: others
European Journal of Preventive Cardiology (April 2017) 24 (Supplement 1), 168
Background: Cardiovascular disease is a leading cause of death worldwide. Aging is an unavoidable coronary risk factor and is associated with dermatological signs that could be a marker for increased coronary risk. We tested the hypothesis that hair graying, as a visible marker of aging, is associated with risk of coronary artery disease independent of chronological age.
Method: This prospective observational study included 545 adult males who underwent multi-slice computed tomography coronary angiography (MSCT CA) for suspicion of coronary artery disease (CAD). Patients were divided into subgroups according to the percentage of gray/white hairs (Hair Whitening Score, HWS: 1-5) and the absence or presence of CAD.
Results: CAD was prevalent in 80% of the studied population; 46.8% had three-vessel disease, with a mean age of 53.2 ± 10.7 years. Hypertension, diabetes, and dyslipidemia were more prevalent in the CAD group (p = 0.001, p = 0.001, and p = 0.003, respectively). Patients with CAD had a statistically significantly higher HWS (3 or more, predominantly white hair; 32.1% vs 60.1%, p < 0.001) and significant coronary artery calcification (p < 0.001). Multivariate regression analysis showed that age (OR: 2.40, 95% CI: 1.31-4.39, p = 0.004), Hair Whitening Score (OR: 1.31, 95% CI: 1.09-1.57, p = 0.004), hypertension (OR: 1.63, 95% CI: 1.03-2.58, p = 0.036), and dyslipidemia (OR: 1.61, 95% CI: 1.02-2.54, p = 0.038) were independent predictors of the presence of atherosclerotic CAD, and only age (p < 0.001) was found to be an independent predictor of hair graying.
Conclusion: In our population, a high hair whitening score was associated with increased risk of CAD independent of chronological age and other established cardiovascular risk factors.
Alcohol attenuates myocardial ischemic injury.
Scrimgeour LA, Potz BA, Elmadhun NY, Chu LM, Sellke FW.
Surgery. 2017 Jun 8. pii: S0039-6060(17)30313-6. doi: 10.1016/j.surg.2017.04.014. [Epub ahead of print]
Moderate alcohol consumption is cardioprotective but the mechanism of action remains unclear. Nuclear factor κ-B regulates the expression of genes involved in inflammation, stress, and apoptosis. We used a swine model of diet-induced metabolic syndrome to investigate the effects of red wine and vodka on nuclear factor κ-B signaling and cytokine activity in chronically ischemic myocardium.
Yorkshire swine were given a high-fat diet for 4 weeks; an ameroid constrictor was then placed on the left circumflex artery. The high-fat diet was continued and the swine were divided into 3 groups for 7 weeks: hypercholesterolemic diet alone (control, n = 8), hypercholesterolemic diet with vodka (vodka, n = 8), and hypercholesterolemic diet with wine (wine, n = 8). Ischemic myocardium was analyzed by Western blot and cytokine array.
Administration of alcohol was associated with decreased expression of inhibitor of κ-B kinase complex α, inhibitor of κ-B kinase complex β, and phosphorylated inhibitor of κ-B β in the ischemic myocardium compared with the control group. Alcohol administration demonstrated an increase in nuclear factor κ-B in the ischemic myocardium. Both wine and vodka demonstrated a significant decrease in leptin, interleukin-1α, IL-13, IL-15, and interferon-γ. Vodka demonstrated a significant decrease in phosphorylated BCL-2 and caspase-9.
In ischemic myocardium, alcohol modulates the nuclear factor κ-B pathway, which may contribute to the adaptive response of tissues to the stress of ischemia. Furthermore, both wine and vodka decreased multiple proinflammatory cytokines. This study provides a mechanism by which alcohol may be cardioprotective in ischemic myocardium.
Dental Status and Compression of Life Expectancy with Disability.
Matsuyama Y, Aida J, Watt RG, Tsuboya T, Koyama S, Sato Y, Kondo K, Osaka K.
J Dent Res. 2017 Jun 1:22034517713166. doi: 10.1177/0022034517713166. [Epub ahead of print]
http://sci-hub.cc/10.1177/0022034517713166
This study examined whether the number of teeth contributes to the compression of morbidity, measured as a shortening of life expectancy with disability, an extension of healthy life expectancy, and overall life expectancy. A prospective cohort study was conducted. A self-reported baseline survey was given to 126,438 community-dwelling older people aged ≥65 y in Japan in 2010, and 85,161 (67.4%) responded. The onset of functional disability and all-cause mortality were followed up for 1,374 d (follow-up rate = 96.1%). A sex-stratified illness-death model was applied to estimate the adjusted hazard ratios (HRs) for 3 health transitions (healthy to dead, healthy to disabled, and disabled to dead). Absolute differences in life expectancy, healthy life expectancy, and life expectancy with disability according to the number of teeth were also estimated. Age, denture use, socioeconomic status, health status, and health behavior were adjusted. Compared with the edentulous participants, participants with ≥20 teeth had lower risks of transitioning from healthy to dead (adjusted HR, 0.58 [95% confidence interval (CI), 0.50-0.68] for men and 0.70 [95% CI, 0.57-0.85] for women) and from healthy to disabled (adjusted HR, 0.52 [95% CI, 0.44-0.61] for men and 0.58 [95% CI, 0.49-0.68] for women). They also transitioned from disabled to dead earlier (adjusted HR, 1.26 [95% CI, 0.99-1.60] for men and 2.42 [95% CI, 1.72-3.38] for women). Among the participants aged ≥85 y, those with ≥20 teeth had a longer life expectancy (men: +57 d; women: +15 d) and healthy life expectancy (men: +92 d; women: +70 d) and a shorter life expectancy with disability (men: -35 d; women: -55 d) compared with the edentulous participants. Similar associations were observed among the younger participants and those with 1 to 9 or 10 to 19 teeth. 
The presence of remaining teeth was associated with a significant compression of morbidity: older Japanese adults' life expectancy with disability was compressed by 35 to 55 d within the follow-up of 1,374 d.
Intake of B vitamins and impairment in physical function in older adults.
Struijk EA, Lana A, Guallar-Castillón P, Rodríguez-Artalejo F, Lopez-Garcia E.
Clin Nutr. 2017 May 23. pii: S0261-5614(17)30177-2. doi: 10.1016/j.clnu.2017.05.016. [Epub ahead of print]
http://sci-hub.cc/10.1016/j.clnu.2017.05.016
The effect of vitamin B intake on physical function is not well known.
To examine the prospective association of the intake of vitamins B6, B12 and folate with physical function impairment in older adults.
We performed a prospective cohort study with 1630 participants from the Seniors-ENRICA study, a cohort of community-dwelling adults aged ≥60 years who were free of physical function impairment at baseline. In 2008-2010, nutrient intake was obtained through a validated computer-assisted face-to-face diet history. Study participants were followed-up through 2012 to assess incident impairment in agility and mobility, as well as impairment in overall physical functioning, defined as a decrease in the physical component summary of the 12-Item Short-Form Health Survey.
Over a median follow-up of 3.5 years, we identified 343 individuals with agility limitation, 212 with mobility limitation, and 457 with decreased overall physical functioning. A significant association was observed between intake of vitamin B6 and lower risk of impaired mobility (odds ratio [OR] for highest vs. lowest tertile: 0.66; 95% confidence interval [CI]: 0.44-0.99; p-trend = 0.05). The results lost significance when additionally adjusted for vitamin B12 and folate; however, the OR did not materially change. A higher consumption of important sources of vitamin B6, such as fish or fruit, was also related to a lower risk of impaired mobility (OR per 100-g increase in fish: 0.50; 95% CI: 0.32-0.79; OR per 100-g increase in fruit: 0.92; 95% CI: 0.84-1.01). No association was found between vitamin B12 or folate intake and physical function.
A higher intake of vitamin B6 and of several of its main sources, such as fish and fruit, was associated with lower risk of impaired mobility in Spanish older adults.
Agility; B-vitamins; Elderly; Mobility; Physical functioning
Adding Soy Protein to Milk Enhances the Effect of Resistance Training on Muscle Strength in Postmenopausal Women.
Orsatti FL, Maestá N, de Oliveira EP, Nahas Neto J, Burini RC, Nunes PRP, Souza AP, Martins FM, Nahas EP.
J Diet Suppl. 2017 Jun 12:1-14. doi: 10.1080/19390211.2017.1330794. [Epub ahead of print]
Resistance training (RT) and high-quality protein ingestion improve muscle mass (MM) and strength (MS). However, no study has evaluated the effect of ingesting milk plus soy protein (SOY) on MM and MS in postmenopausal women (PW). Thus, the aim of this study was to evaluate the effects of adding SOY to milk on MM and MS after 16 weeks of RT. Thirty-two PW were randomized and allocated into two groups: placebo and RT (PL+RT, n = 16) and SOY and RT (SOY+RT, n = 16). The SOY+RT group received 25 g of SOY while the PL+RT group received 25 g of maltodextrin (placebo). All supplements were given in the form of a chocolate-flavored powder added to 200 mL of milk. The RT protocol consisted of eight total body exercises at 70% of one repetition maximum (1RM), three sets of 8-12 repetitions, 2-3 times/week. No differences were found in the baseline measures between groups (age, menopause status, anthropometric and nutrition patterns), except for protein intake, which was higher in the SOY+RT group. Both groups increased MM (bioimpedance), showing no difference between groups (PL+RT = 1.5 kg; SOY+RT = 1.1 kg). For MS, the SOY+RT group showed a larger (p < .05) increase in 1RM of bench press (PL+RT = 6.7 kg; SOY+RT = 12.5 kg), knee extension (PL+RT = 3.7 kg; SOY+RT = 6.7 kg), total load (PL+RT = 15.1 kg; SOY+RT = 24.2 kg), and total load exercises/MM (PL+RT = 0.3 kg; SOY+RT = 0.9 kg). These results suggest that adding SOY to milk combined with 16 weeks of RT resulted in greater increases in MS in PW.
menopause; sarcopenia; supplementation; weight-lifting exercise program
Trans-Resveratrol Supplementation and Endothelial Function during the Fasting and Postprandial Phase: A Randomized Placebo-Controlled Trial in Overweight and Slightly Obese Participants.
Made SMV, Plat J, Mensink RP.
Nutrients. 2017 Jun 12;9(6). pii: E596. doi: 10.3390/nu9060596.
Studies on the effects of the long-term intake of trans-resveratrol on vascular function are conflicting. In addition, postprandial effects of long-term trans-resveratrol intake on endothelial function are not known. We therefore supplemented 45 overweight and slightly obese volunteers (25 men and 20 women) with a mean (±SD) age of 61 ± 7 years and body mass index of 28.3 ± 3.2 kg/m² with trans-resveratrol (2 × 75 mg/day) or placebo capsules for 4 weeks each, in random order, separated by a washout period of at least 4 weeks. At the end of each intervention period, brachial artery flow-mediated vasodilation (FMD) was measured before and after meal consumption. Plasma biomarkers for endothelial function, inflammation, and glucose and lipid metabolism were also determined. Compared with the placebo, trans-resveratrol did not affect fasting FMD (2.9 ± 1.4% vs. 3.0 ± 1.9%; p = 0.69). After the postprandial test, changes in FMD (-0.7 ± 2.3% vs. 0.2 ± 2.6%; p = 0.13) were also not significantly different. Postprandial changes in biomarkers were also comparable. In conclusion, for overweight and slightly obese volunteers, a daily intake of 150 mg of trans-resveratrol for 4 weeks does not change plasma biomarkers of endothelial function or inflammation in the fasting state or postprandial phase.
flow-mediated vasodilation; humans; postprandial; trans-resveratrol; vascular function
Preventive Interventions for the Second Half of Life: A Systematic Review.
Hajat C, Selwyn A, Harris M, Yach D.
Am J Health Promot. 2017 Jan 1:890117117712355. doi: 10.1177/0890117117712355. [Epub ahead of print]
Recent improvements in life expectancy globally require intensified focus on noncommunicable diseases and age-related conditions. The purpose of this article is to inform the development of age-specific prevention guidelines for adults aged 50 and above, which are currently lacking.
DATA SOURCE:
PubMed, Cochrane database, and Google Scholar, and explicit outreach to experts in the field.
STUDY INCLUSION AND EXCLUSION CRITERIA:
Meta-analyses, intervention-based, and prospective cohort studies that reported all-cause mortality, disease-specific mortality, or morbidity in adults were included.
DATA EXTRACTION:
A systematic review was undertaken in 2015 using search terms of a combination of <risk factor> and "intervention," "mortality," "reduction," "improvement," "death," and "morbidity."
DATA SYNTHESIS:
Interventions were categorized according to the Center for Evidence-Based Medicine Level of Evidence framework.
A summary table reports for each intervention the impact, strength of evidence, initiation, duration, and details of the intervention. Age-decade-specific preventive recommendations have been proposed relating to physical activity, diet, tobacco and alcohol use, medication adherence, screening and vaccination, and mental and cognitive health.
Clear recommendations have been made according to the existing evidence base, but further research investment is needed to fill the many gaps. Further, personalized approaches to healthy aging complemented by population-wide approaches and broader cross-sector partnerships will help to ensure greater longevity is an opportunity, rather than a burden, for society.
age-specific; aging; lifestyle; longevity; morbidity; mortality; prevention
Association of Donor Age and Sex With Survival of Patients Receiving Transfusions.
Edgren G, Ullum H, Rostgaard K, Erikstrup C, Sartipy U, Holzmann MJ, Nyrén O, Hjalgrim H.
JAMA Intern Med. 2017 Jun 1;177(6):854-860. doi: 10.1001/jamainternmed.2017.0890.
Following animal model data indicating the possible rejuvenating effects of blood from young donors, there have been at least 2 observational studies conducted with humans that have investigated whether donor age affects patient outcomes. Results, however, have been conflicting.
To study the association of donor age and sex with survival of patients receiving transfusions.
A retrospective cohort study based on the Scandinavian Donations and Transfusions database, with nationwide data, was conducted for all patients from Sweden and Denmark who received at least 1 red blood cell transfusion of autologous blood or blood from unknown donors between January 1, 2003, and December 31, 2012. Patients were followed up from the first transfusion until death, emigration, or end of follow-up. Data analysis was performed from September 15 to November 15, 2016.
The number of transfusions from blood donors of different age and sex. Exposure was treated as time-dependent throughout follow-up.
Hazard ratios (HRs) for death and adjusted cumulative mortality differences, both estimated using Cox proportional hazards regression.
Results of a crude analysis including 968 264 transfusion recipients (550 257 women and 418 007 men; median age at first transfusion, 73.0 years [interquartile range, 59.8-82.4 years]) showed a U-shaped association between age of the blood donor and recipient mortality, with a nadir in recipients for the most common donor age group (40-49 years) and significant and increasing HRs among recipients of blood from donors of successively more extreme age groups (<20 years: HR, 1.12; 95% CI, 1.10-1.14; ≥70 years: HR, 1.25; 95% CI, 1.08-1.44). Higher mortality was also noted among recipients of blood from female donors (HR, 1.07; 95% CI, 1.07-1.07). Adjustments for number of transfusions with a linear term attenuated the associations, but the increased mortality for recipients of blood from young, old, and female donors was not eliminated. Closer examination of the association between number of transfusions and mortality revealed a nonlinear pattern. After adjustments to accommodate nonlinearity, donor age and sex were no longer associated with patient mortality.
Donor age and sex were not associated with patient survival and need not be considered in blood allocation. Any comparison between common and less common categories of transfusions will inevitably be confounded by the number of transfusions, which drives the probability of receiving the less common blood components. Previous positive findings regarding donor age and sex are most likely explained by residual confounding.
Statistical Caution in Big Data Approaches to Transfusion Medicine Research.
Roubinian N, Brambilla D, Murphy EL.
JAMA Intern Med. 2017 Jun 1;177(6):860-861. doi: 10.1001/jamainternmed.2017.0914. No abstract available.
http://sci-hub.cc/10.1001/jamainternmed.2017.0914
Nearly one-third of the world is overweight, risking illness and death
Excess weight affected 2.2 billion people in 2015, with about 10% of the population considered obese
Thomson Reuters Posted: Jun 12, 2017
http://www.cbc.ca/news/health/global-obesity-increasing-in-children-1.4156512
Health Effects of Overweight and Obesity in 195 Countries over 25 Years
The GBD 2015 Obesity Collaborators
June 12, 2017. DOI: 10.1056/NEJMoa1614362
Although the rising pandemic of obesity has received major attention in many countries, the effects of this attention on trends and the disease burden of obesity remain uncertain.
We analyzed data from 68.5 million persons to assess the trends in the prevalence of overweight and obesity among children and adults between 1980 and 2015. Using the Global Burden of Disease study data and methods, we also quantified the burden of disease related to high body-mass index (BMI), according to age, sex, cause, and BMI in 195 countries between 1990 and 2015.
In 2015, a total of 107.7 million children and 603.7 million adults were obese. Since 1980, the prevalence of obesity has doubled in more than 70 countries and has continuously increased in most other countries. Although the prevalence of obesity among children has been lower than that among adults, the rate of increase in childhood obesity in many countries has been greater than the rate of increase in adult obesity. High BMI accounted for 4.0 million deaths globally, nearly 40% of which occurred in persons who were not obese. More than two thirds of deaths related to high BMI were due to cardiovascular disease. The disease burden related to high BMI has increased since 1990; however, the rate of this increase has been attenuated owing to decreases in underlying rates of death from cardiovascular disease.
The rapid increase in the prevalence and disease burden of elevated BMI highlights the need for continued focus on surveillance of BMI and identification, implementation, and evaluation of evidence-based interventions to address this problem. (Funded by the Bill and Melinda Gates Foundation.)
High reproductive effort is associated with decreasing mortality late in life in captive ruffed lemurs.
Tidière M, Lemaître JF, Douay G, Whipple M, Gaillard JM.
Am J Primatol. 2017 Jun 13. doi: 10.1002/ajp.22677. [Epub ahead of print]
http://onlinelibrary.wiley.com.sci-hub.cc/doi/10.1002/ajp.22677/abstract;jsessionid=950BB151E37F5140281A54933143FB68.f02t02
Evolutionary theories of senescence predict that a high allocation to reproduction during early life should have long-term deleterious consequences on future reproduction or survival, because individuals face an energy allocation trade-off between reproductive effort and the maintenance of body condition. Using a high-quality dataset from 1,721 red ruffed lemurs (RRL, Varecia rubra) and 3,637 black-and-white ruffed lemurs (BWRL, V. variegata) living in captivity, we tested the existence of a trade-off between reproductive effort and late-life survival after accounting for possible confounding effects of natal environmental conditions. We report clear evidence of actuarial senescence (i.e., the decline of annual survival with increasing age) in both sexes and for both species of ruffed lemurs. RRL had a lower baseline mortality and senesced faster than BWRL, resulting in similar distributions of longevities for both species. No between-sex difference was observed in either species. Lastly, a higher reproductive effort was positively associated with an increase of survival late in life, and thereby an increased longevity. These findings indicate that individual quality rather than a trade-off drives the association between reproductive success and survival pattern among individual lemurs of both species in the protected environment provided by zoos. Lemurs are among the world's highest conservation priorities, and better understanding the factors influencing their longevity and actuarial senescence patterns should improve their conservation.
ageing; lemuridae; life-history; primates; zoo
Leisure-time physical activity and risk of disability incidence: A 12-year prospective cohort study among young elderly of the same age at baseline.
Matsunaga T, Naito M, Wakai K, Ukawa S, Zhao W, Okabayashi S, Ando M, Kawamura T, Tamakoshi A.
J Epidemiol. 2017 Jun 9. pii: S0917-5040(17)30120-X. doi: 10.1016/j.je.2016.11.004. [Epub ahead of print]
http://sci-hub.cc/10.1016/j.je.2016.11.004
To clarify the role of physical activity in preventing disability in Japan, we investigated the association between amount of leisure-time physical activity and incidence of disability among the young elderly.
In the New Integrated Suburban Seniority Investigation (NISSIN) project conducted from 1996 to 2013, we followed 2888 community-dwelling adults aged 64-65 years with no history of cerebrovascular disease for a median follow-up of 11.6 years. Disabilities were defined as follows based on the classifications of the Japanese long-term care insurance system: 1) support or care levels (support levels 1-2 or care levels 1-5); 2) care levels 2-5; 3) support or care levels with dementia; and 4) care levels 2-5 or death. In addition, we also assessed 5) all-cause mortality.
After controlling for sociodemographic, lifestyle, and medical factors, male participants reporting an activity level of 18.1 metabolic equivalent (MET)-hours/week (the median among those with activities) or more had 52% less risk of being classified as support or care levels with dementia compared with the no activity group (hazard ratio 0.48; 95% confidence interval, 0.25-0.94). No significant association was found among women between amount of leisure-time physical activity and incidence of disability.
We identified an inverse dose-response relationship between the amount of leisure-time physical activity and the risk of disability with dementia in men. Therefore, a higher level of physical activity should be recommended to young elderly men to prevent disability with dementia.
Disability; Elderly; Leisure-time physical activity
The Association between Age-Related Macular Degeneration and the Risk of Mortality.
Wang P, Wang J, Ma J, Jin G, Guan X.
Biomed Res Int. 2017;2017:3489603. doi: 10.1155/2017/3489603. Epub 2017 May 18.
Studies have investigated the association between age-related macular degeneration (AMD) and subsequent risks of mortality, but results have been equivocal. We conducted a comprehensive analysis of prospective cohort studies to assess the association of AMD and the risk of mortality in the general population. We searched PubMed and EMBASE for trials published from 1980 to 2016. We included 11 cohort studies that reported relative risks with 95% confidence intervals for the association of AMD and mortality, involving 57,069 participants. In a random-effects model, the adjusted RR (95% confidence interval) associated with AMD was 1.09 (1.02-1.17) for all-cause mortality. Findings from this research provide support that persons with AMD had a higher subsequent risk of mortality than persons without AMD.
Modeling anthropometric indices in relation to 10-year (2002-2012) incidence of cardiovascular disease, among apparently healthy individuals: The ATTICA study.
Filippatos TD, Kyrou I, Georgousopoulou EN, Chrysohoou C, Kouli GM, Tsigos C, Tousoulis D, Stefanadis C, Pitsavos C, Panagiotakos DB.
Diabetes Metab Syndr. 2017 Jun 3. pii: S1871-4021(17)30169-8. doi: 10.1016/j.dsx.2017.05.018. [Epub ahead of print]
http://sci-hub.cc/10.1016/j.dsx.2017.05.018
Body fat accumulation is implicated in the development of cardiovascular disease (CVD). Our objective was to explore potential associations between anthropometric indices and the 10-year CVD incidence in Greek adults without previous CVD.
During 2001-2, we enrolled 3042 adults without CVD from the general population of Attica, Greece. In 2011-2, the 10-year study follow-up was performed, recording the CVD incidence in 1958 participants with baseline body mass index (BMI) ≥18.5kg/m2.
The study 10-year CVD incidence was 15.8%, exhibiting a gradual increase according to the baseline body mass index (BMI) category. Baseline BMI ≥30kg/m2 was related with significantly higher 10-year CVD risk compared to BMI <25kg/m2, even after adjustment for age and other known CVD risk factors. Baseline BMI, waist circumference, waist-to-hip ratio, waist-to-height ratio and waist-to-hip-to-height ratio were independently associated with the 10-year CVD risk in multi-adjusted models. Gender-specific analyses showed that these associations were more evident in men compared to women, with baseline BMI exhibiting an independent association with the 10-year CVD incidence in men.
Our results indicate that even simple anthropometric indices exhibit independent associations with CVD risk in a representative sample of the Greek general population without previous CVD.
Anthropometric indices; Body mass index; Cardiovascular disease; Obesity; Waist circumference
Effect of ω-3 polyunsaturated fatty acids on arthritic pain: A systematic review.
Abdulrazaq M, Innes JK, Calder PC.
Nutrition. 2017 Jul - Aug;39-40:57-66. doi: 10.1016/j.nut.2016.12.003. Epub 2016 Dec 21. Review.
Pain is a significant problem in rheumatoid arthritis (RA) and is associated with prostaglandins derived from the ω-6 polyunsaturated fatty acid (PUFA) arachidonic acid. The ω-3 PUFAs eicosapentaenoic acid and docosahexaenoic acid have been shown to reduce inflammation, with some studies showing clinical improvements in RA. The aim of this systematic review was to investigate the effect of ω-3 PUFAs on arthritic pain.
A systematic literature review of ω-3 PUFAs and pain associated with RA was performed up to December 2015. Randomized controlled trials (RCTs) investigating the effect of ω-3 PUFAs (>2 g/d) on patient or physician assessment of pain, or assessment by both patient and physician, were included. The Cochrane Collaboration's tool for assessing risk for bias was employed. Data for outcomes of interest were extracted and collated for interpretation.
Eighteen RCTs published between 1985 and 2013 involving 1143 patients were included. Dosage of ω-3 PUFAs used was 2.1 to 9.1 g/d, with study durations of 12 to 52 wk. Ten studies supported the hypothesis that there is a reduction in patient or physician assessment of pain associated with RA after intake of ω-3 PUFAs. Eight studies found no statistically significant effect of ω-3 PUFAs on arthritic pain.
ω-3 PUFAs may have a therapeutic role in decreasing pain associated with RA, with doses of 3 to 6 g/d appearing to have a greater effect. Due to the limitations identified in the RCTs included in this review, more research is needed to investigate ω-3 PUFAs in larger populations and over extended periods of time.
DHA; EPA; Fish oil; Pain; Rheumatoid arthritis
Cooking Methods for Red Meats and Risk of Type 2 Diabetes: A Prospective Study of U.S. Women.
Liu G, Zong G, Hu FB, Willett WC, Eisenberg DM, Sun Q.
Diabetes Care. 2017 Jun 13. pii: dc170204. doi: 10.2337/dc17-0204. [Epub ahead of print]
This study examined different cooking methods for red meats in relation to type 2 diabetes (T2D) risk among U.S. women who consumed red meats regularly (≥2 servings/week).
RESEARCH DESIGN AND METHODS:
We monitored 59,033 women (1986-2012) aged 30-55 years and free of diabetes, cardiovascular disease, and cancer at baseline when information on frequency of different cooking methods for red meats, including broiling, barbequing, roasting, pan-frying, and stewing/boiling, was collected.
During 1.24 million person-years of follow-up, we documented 6,206 incident cases of T2D. After multivariate adjustment including red meat cooking methods, total red meat and processed red meat intake were both associated with a monotonically increased T2D risk (both P trend <0.05). After multivariate adjustment including total red meat intake, a higher frequency of broiling, barbequing, and roasting red meats was each independently associated with a higher T2D risk. When comparing ≥2 times/week with <1 time/month, the hazard ratios (HRs) and 95% CI of T2D were 1.29 (1.19, 1.40; P trend <0.001) for broiling, 1.23 (1.11, 1.38; P trend <0.001) for barbequing, and 1.11 (1.01, 1.23; P trend = 0.14) for roasting. In contrast, the frequency of stewing/boiling red meats was not associated with T2D risk, and an inverse association was observed for pan-frying frequency and T2D risk. The results remained similar after cooking methods were further mutually adjusted.
Independent of total red meat consumption, high-temperature and/or open-flame cooking methods for red meats, especially broiling and barbequing, may further increase diabetes risk among regular meat eaters.
Daytime napping and risk of type 2 diabetes: a meta-analysis of prospective studies.
Chen GC, Liu MM, Chen LH, Xu JY, Hidayat K, Li FR, Qin LQ.
Sleep Breath. 2017 Jun 13. doi: 10.1007/s11325-017-1528-z. [Epub ahead of print]
Prospective studies reported inconsistent findings on the relationship between daytime napping and risk of type 2 diabetes (T2D). Categorized and dose-response meta-analyses were performed to quantify this relation.
Potentially eligible studies were identified by searching PubMed and Embase databases. Dose-response effects were assessed by the generalized least squares trend estimation and study-specific summary relative risks (RRs) with 95% confidence intervals (CIs) were computed with a random-effects model.
Seven prospective studies including one US, four European, and two Chinese cohorts involving 249,077 participants and 13,237 cases of T2D were included. The overall analyses showed a 17% increased risk of T2D when comparing habitual nappers with non-nappers (RR = 1.17, 95% CI 1.08-1.27). By region, the summary RR was 1.21 (95% CI 1.17-1.26), 1.15 (95% CI 1.03-1.30) and 1.23 (95% CI 0.87-1.73) for the US, European, and Chinese studies, respectively. Limiting to five studies that excluded subjects with known major chronic disorders yielded a summary RR of 1.16 (95% CI 1.03-1.30). A dose-response analysis suggested an 11% (95% CI 7-16%) increased T2D risk for each increment in daytime napping of 30 min/day and, despite no evidence for nonlinearity (P nonlinearity = 0.65), the increased risk of T2D for short nap (<50 min/day) was dominated by the US study.
This meta-analysis suggests that daytime napping is associated with an increased risk of T2D. Given the limited number of cohorts and inconsistency in terms of methodological and population characteristics across these cohorts, residual confounders and/or reverse causality cannot be fully addressed, and our findings should be interpreted with great caution. Future well-designed prospective studies are still warranted.
Meta-analysis; Napping; Sleep; Type 2 diabetes
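The random-effects pooling described in the methods above can be sketched with the standard DerSimonian-Laird estimator: fixed-effect weights give Cochran's Q, from which a between-study variance is estimated and folded back into the weights. The per-study numbers below are hypothetical, not the cohorts' actual estimates.

```python
import math

def dersimonian_laird(log_rrs, ses):
    """Pool log relative risks with the DerSimonian-Laird random-effects model."""
    w = [1 / se**2 for se in ses]                              # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed)**2 for wi, y in zip(w, log_rrs))  # Cochran's Q
    df = len(log_rrs) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                              # between-study variance
    w_star = [1 / (se**2 + tau2) for se in ses]                # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, log_rrs)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    ci = (math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled))
    return math.exp(pooled), ci

# Hypothetical per-study log(RR) values and standard errors:
rr, (lo, hi) = dersimonian_laird([0.19, 0.14, 0.21], [0.05, 0.08, 0.10])
```

When the studies are homogeneous (Q below its degrees of freedom), tau² collapses to zero and the result reduces to the fixed-effect pooled estimate.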
Dietary Phosphorus Intake and the Kidney.
Chang AR, Anderson C.
Annu Rev Nutr. 2017 Jun 14. doi: 10.1146/annurev-nutr-071816-064607. [Epub ahead of print]
http://www.annualreviews.org.sci-hub.cc/doi/10.1146/annurev-nutr-071816-064607
Although phosphorus is an essential nutrient required for multiple physiological functions, recent research raises concerns that high phosphorus intake could have detrimental effects on health. Phosphorus is abundant in the food supply of developed countries, occurring naturally in protein-rich foods and as an additive in processed foods. High phosphorus intake can cause vascular and renal calcification, renal tubular injury, and premature death in multiple animal models. Small studies in humans suggest that high phosphorus intake may result in positive phosphorus balance and correlate with renal calcification and albuminuria. Although serum phosphorus is strongly associated with cardiovascular disease, progression of kidney disease, and death, limited data exist linking high phosphorus intake directly to adverse clinical outcomes. Further prospective studies are needed to determine whether phosphorus intake is a modifiable risk factor for kidney disease.
Dietary n-3 polyunsaturated fatty acids, fish consumption, and endometrial cancer risk: a meta-analysis of epidemiological studies.
Hou R, Yao SS, Liu J, Wang LL, Wu L, Jiang L.
Oncotarget. 2017 May 30. doi: 10.18632/oncotarget.18295. [Epub ahead of print]
The relationship between intake of fish and n-3 fatty acids and endometrial cancer risk has not been consistent across epidemiological studies. We quantitatively assessed the aforementioned association through a systematic review and meta-analysis. PubMed and Embase were searched through March 2017 for eligible epidemiological studies. Fixed or random-effects models were used to pool relative risks (RRs) and 95% confidence intervals (CIs). The dose-response relationship was also evaluated. Based on the literature search, five prospective studies and 11 case-control studies were identified. All 16 studies were categorized as high-quality studies. After pooling available risk estimates, no significant association was detected between overall fish intake and endometrial cancer risk. In subgroup analyses, every one additional serving/week of fish intake was significantly inversely associated with endometrial cancer risk in studies adjusted for smoking (RR (95% CI): 0.95 (0.91-1.00)), or studies performed in Europe (RR (95% CI): 0.90 (0.84-0.97)), but not in other tested subgroups. In studies conducted in Asia, there was a significant positive association (RR (95% CI): 1.15 (1.10-1.21)). Regarding n-3 PUFA intake, marginally inverse associations of high EPA or DHA intake were detected (EPA: RR (95% CI) = 0.79 (0.61-1.04); DHA: RR (95% CI) = 0.85 (0.64-1.11)). Dose-response analyses suggested a significant nonlinear relationship between DHA intake and endometrial cancer risk (p: 0.04). Overall, this meta-analysis suggests that intake of n-3 PUFA may be inversely associated with endometrial cancer risk at some level of evidence, although the exact relationship, especially for fish intake, needs further characterization. Further well-designed studies are warranted.
endometrial cancer; epidemiology; fish; n-3 fatty acids
Synbiotic supplementation and the effects on clinical and metabolic responses in patients with rheumatoid arthritis: a randomised, double-blind, placebo-controlled trial.
Zamani B, Farshbaf S, Golkar HR, Bahmani F, Asemi Z.
Br J Nutr. 2017 Apr;117(8):1095-1102. doi: 10.1017/S000711451700085X. Epub 2017 May 11.
Synbiotic intake may be associated with reduced inflammation in patients with rheumatoid arthritis (RA) due to optimised inflammatory markers, oxidative stress and insulin resistance. This research was conducted to assess the effects of synbiotic supplementation on the clinical and metabolic parameters of patients with RA. A total of fifty-four patients with RA were allocated into two groups to receive either a synbiotic capsule (n 27) or a placebo (n 27) for 8 weeks in this randomised, double-blind, placebo-controlled trial. Fasting blood samples were taken at baseline and week 8 of the study to quantify related markers. After the 8-week intervention, compared with the placebo, synbiotic supplementation resulted in a significant reduction in serum high-sensitivity C-reactive protein (hs-CRP) levels (-1427·8 (sd 3267·2) v. +2833·4 (sd 5639·7) ng/ml, P=0·001). In addition, compared with the placebo, synbiotic supplementation improved disease activity score-28 joints (DAS-28) (-1·6 (sd 0·8) v. -0·3 (sd 0·5), P<0·001) and visual analogue scales (VAS) pain (-30·4 (sd 18·7) v. -11·5 (sd 15·9), P<0·001). In addition, there was a significant elevation in plasma nitric oxide (NO) (+0·8 (sd 4·4) v. -2·6 (sd 4·5) µmol/l, P=0·008), and significant reductions in insulin values (-13·8 (sd 26·4) v. +4·2 (sd 28·2) pmol/l, P=0·01), homoeostasis model of assessment-estimated insulin resistance (HOMA-IR) (-0·5 (sd 1·0) v. +0·1 (sd 1·1), P=0·03) and homoeostatic model assessment-β-cell function (HOMA-B) (-9·4 (sd 17·9) v. +3·3 (sd 18·9), P=0·01) following supplementation with the synbiotic compared with the placebo. Compared with the placebo, synbiotic supplementation also resulted in a significant increase in plasma GSH (+36·6 (sd 63·5) v. -58·5 (sd 154·4) µmol/l, P=0·005). Overall, our study demonstrated that synbiotic supplementation for 8 weeks among patients with RA had beneficial effects on hs-CRP, DAS-28, VAS, NO, insulin levels, HOMA-IR, HOMA-B and GSH levels.
DAS-28 disease activity score-28 joints; HOMA-B homoeostatic model assessment-β-cell function; HOMA-IR homoeostasis model of assessment-estimated insulin resistance; KUMS Kashan University of Medical Sciences; MDA malondialdehyde; RA rheumatoid arthritis; TAC antioxidant capacity; VAS visual analogue scales; hs-CRP high-sensitivity C-reactive protein; Metabolic profiles; Rheumatoid arthritis; Supplementation; Synbiotics
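For context, the HOMA indices reported above come from the standard homeostasis-model formulas of Matthews et al., computed from fasting glucose and insulin. This is a sketch with hypothetical inputs; note that the trial reports insulin in pmol/l, whereas the conventional formulas take µU/ml, so a unit conversion would be needed in practice.

```python
def homa_ir(glucose_mmol_l, insulin_uU_ml):
    """Homeostasis model assessment of insulin resistance (Matthews et al.)."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def homa_b(glucose_mmol_l, insulin_uU_ml):
    """HOMA estimate of beta-cell function (%); defined for glucose > 3.5 mmol/l."""
    return 20.0 * insulin_uU_ml / (glucose_mmol_l - 3.5)

# Hypothetical fasting values: glucose 5.0 mmol/l, insulin 9.0 uU/ml
print(round(homa_ir(5.0, 9.0), 2))  # 2.0
print(round(homa_b(5.0, 9.0), 1))   # 120.0
```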
Alcohol consumption pattern and risk of Barrett's oesophagus and erosive oesophagitis: an Italian case-control study.
Filiberti RA, Fontana V, De Ceglie A, Blanchi S, Grossi E, Della Casa D, Lacchin T, De Matthaeis M, Ignomirelli O, Cappiello R, Rosa A, Foti M, Laterza F, D'Onofrio V, Iaquinto G, Conio M.
Br J Nutr. 2017 Apr;117(8):1151-1161. doi: 10.1017/S0007114517000940. Epub 2017 May 8.
http://sci-hub.cc/10.1017/S0007114517000940
Knowledge about the association between alcohol and Barrett's oesophagus and reflux oesophagitis is conflicting. In this case-control study we evaluated the role of specific alcoholic beverages (red and white wine, beer and liquors) in 339 Barrett's oesophagus and 462 oesophagitis patients compared with 619 endoscopic controls with other disorders, recruited in twelve Italian endoscopic units. Data on alcohol and other individual characteristics were obtained from structured questionnaires. No clear, monotonic significant dose-response relationship was pointed out for red wine. However, a generalised U-shaped trend of Barrett's oesophagus/oesophagitis risk due to red wine consumption particularly among current drinkers was found. Similar results were also found for white wine. Liquor/spirit consumption seemed to bring about a 1·14-2·30 risk excess, although statistically non-significant, for current Barrett's oesophagus/oesophagitis drinkers. Statistically significant decreasing dose-response relationships were found in Barrett's oesophagus for frequency and duration of beer consumption. Similar, but less clear downward tendencies were also found for oesophagitis patients. In conclusion, although often not statistically significant, our data suggested a reduced risk of Barrett's oesophagus and oesophagitis with a low/moderate intake of wine and beer consumption. A non-significant increased risk of Barrett's oesophagus/oesophagitis was observed with a higher intake of any type of heavy alcohol consumption, but no conclusion can be drawn owing to the high number of non-spirit drinkers and to the small number of drinkers at higher alcohol intake levels.
BE Barrett's oesophagus; C control; E oesophagitis; EAC oesophageal adenocarcinoma; GERD gastro-oesophageal reflux disease; MLR multinomial logistic regression; TLT test for linear trend; Alcohol; Barrett's oesophagus; Epidemiology; Gastro-oesophageal reflux disease; Oesophagitis; Risk factors
Citrus consumption and incident dementia in elderly Japanese: the Ohsaki Cohort 2006 Study.
Zhang S, Tomata Y, Sugiyama K, Sugawara Y, Tsuji I.
http://sci-hub.cc/10.1017/S000711451700109X
Although some experimental biological studies have indicated that citrus may have preventive effects against cognitive impairment, no cohort study has yet examined the relationship between citrus consumption and incident dementia. In a baseline survey, we collected data on daily citrus intake (categorised as ≤2, 3-4 times/week or almost every day) and consumption of other foods using a FFQ, and used a self-reported questionnaire to collect data on other covariates. Data on incident dementia were retrieved from the Japanese Long-term Care Insurance database. A multivariate-adjusted Cox model was used to estimate the hazard ratios (HR) and 95 % CI for incident dementia according to citrus consumption. Among 13 373 participants, the 5·7-year incidence of dementia was 8·6 %. In comparison with participants who consumed citrus ≤2 times/week, the multivariate-adjusted HR for incident dementia among those who did so 3-4 times/week and almost every day was 0·92 (95 % CI 0·80, 1·07) and 0·86 (95 % CI 0·73, 1·01), respectively (P trend=0·065). The inverse association persisted after excluding participants whose dementia events had occurred in the first 2 years of follow-up. The multivariate HR was 1·00 (reference) for ≤2 times/week, 0·82 (95 % CI 0·69, 0·98) for 3-4 times/week and 0·77 (95 % CI 0·64, 0·93) for almost every day (P trend=0·006). The present findings suggest that frequent citrus consumption was associated with a lower risk of incident dementia, even after adjustment for possible confounding factors.
HR hazard ratio; LTCI Long-term Care Insurance; Citrus; Cohort studies; Dementia; Elderly; Japan
International Society of Sports Nutrition position stand: safety and efficacy of creatine supplementation in exercise, sport, and medicine.
Kreider RB, Kalman DS, Antonio J, Ziegenfuss TN, Wildman R, Collins R, Candow DG, Kleiner SM, Almada AL, Lopez HL.
J Int Soc Sports Nutr. 2017 Jun 13;14:18. doi: 10.1186/s12970-017-0173-z. eCollection 2017. Review.
Creatine is one of the most popular nutritional ergogenic aids for athletes. Studies have consistently shown that creatine supplementation increases intramuscular creatine concentrations which may help explain the observed improvements in high intensity exercise performance leading to greater training adaptations. In addition to athletic and exercise improvement, research has shown that creatine supplementation may enhance post-exercise recovery, injury prevention, thermoregulation, rehabilitation, and concussion and/or spinal cord neuroprotection. Additionally, a number of clinical applications of creatine supplementation have been studied involving neurodegenerative diseases (e.g., muscular dystrophy, Parkinson's, Huntington's disease), diabetes, osteoarthritis, fibromyalgia, aging, brain and heart ischemia, adolescent depression, and pregnancy. These studies provide a large body of evidence that creatine can not only improve exercise performance, but can play a role in preventing and/or reducing the severity of injury, enhancing rehabilitation from injuries, and helping athletes tolerate heavy training loads. Additionally, researchers have identified a number of potentially beneficial clinical uses of creatine supplementation. These studies show that short and long-term supplementation (up to 30 g/day for 5 years) is safe and well-tolerated in healthy individuals and in a number of patient populations ranging from infants to the elderly. Moreover, significant health benefits may be provided by ensuring habitual low dietary creatine ingestion (e.g., 3 g/day) throughout the lifespan. The purpose of this review is to provide an update to the current literature regarding the role and safety of creatine supplementation in exercise, sport, and medicine and to update the position stand of International Society of Sports Nutrition (ISSN).
Adolescents; Athletes; Children; Clinical applications; Ergogenic aids; Muscle power; Muscular strength; Performance enhancement; Safety; Sport nutrition
Atrial fibrillation and the risk for myocardial infarction, all-cause mortality and heart failure: A systematic review and meta-analysis.
Ruddox V, Sandven I, Munkhaugen J, Skattebu J, Edvardsen T, Otterstad JE.
Eur J Prev Cardiol. 2017 Jan 1:2047487317715769. doi: 10.1177/2047487317715769. [Epub ahead of print]
Background In contemporary atrial fibrillation trials most deaths are cardiac related, whereas stroke and bleeding represent only a small subset of deaths. We aimed to evaluate the long-term risk of cardiac events and all-cause mortality in individuals with atrial fibrillation compared to no atrial fibrillation. Design A systematic review and meta-analysis of studies published between 1 January 2006 and 21 October 2016. Methods Four databases were searched. Studies had follow-up of at least 500 stable patients for either cardiac endpoints or all-cause mortality for 12 months or longer. Publication bias was evaluated and random effects models were used to synthesise the results. Heterogeneity between studies was examined by subgroup and meta-regression analyses. Results A total of 15 cohort studies was included. Analyses indicated that atrial fibrillation was associated with an increased risk of myocardial infarction (relative risk (RR) 1.54, 95% confidence interval (CI) 1.26-1.85), all-cause mortality (RR 1.95, 95% CI 1.50-2.54) and heart failure (RR 4.62, 95% CI 3.13-6.83). Coronary heart disease at baseline was associated with a reduced risk of myocardial infarction and explained 57% of the heterogeneity. A prospective cohort design accounted for 25% of all-cause mortality heterogeneity. Due to there being fewer than 10 studies, sources of heterogeneity were inconclusive for heart failure. Conclusions Atrial fibrillation seems to be associated with an increased risk of subsequent myocardial infarction in patients without coronary heart disease and an increased risk of all-cause mortality and heart failure in patients with and without coronary heart disease.
Atrial fibrillation; heart failure; mortality; myocardial infarction
Risk factors and protective factors associated with incident or increase of frailty among community-dwelling older adults: A systematic review of longitudinal studies.
Feng Z, Lugtenberg M, Franse C, Fang X, Hu S, Jin C, Raat H.
PLoS One. 2017 Jun 15;12(6):e0178383. doi: 10.1371/journal.pone.0178383. eCollection 2017.
Frailty is one of the greatest challenges facing our aging population, as it can lead to adverse outcomes such as institutionalization, hospitalization, and mortality. However, the factors that are associated with frailty are poorly understood. We performed a systematic review of longitudinal studies in order to identify the sociodemographic, physical, biological, lifestyle-related, and psychological risk or protective factors that are associated with frailty among community-dwelling older adults.
A systematic literature search was conducted in the following databases in order to identify studies that assessed the factors associated with frailty among community-dwelling older adults: Embase, Medline Ovid, Web of Science, Cochrane, PsychINFO Ovid, CINAHL EBSCOhost, and Google Scholar. Studies were selected if they included a longitudinal design, focused on community-dwelling older adults aged 60 years and older, and used a tool to assess frailty. The methodological quality of each study was assessed using the Quality of Reporting of Observational Longitudinal Research checklist.
Twenty-three studies were included. Significant associations were reported between the following types of factors and frailty: sociodemographic factors (7/7 studies), physical factors (5/6 studies), biological factors (5/7 studies), lifestyle factors (11/13 studies), and psychological factors (7/8 studies). Significant sociodemographic factors included older age, ethnic background, neighborhood, and access to private insurance or Medicare; significant physical factors included obesity and activities of daily living (ADL) functional status; significant biological factors included serum uric acid; significant lifestyle factors included a higher Diet Quality Index International (DQI) score, higher fruit/vegetable consumption and higher tertile of all measures of habitual dietary resveratrol exposure; significant psychological factors included depressive symptoms.
A broad range of sociodemographic, physical, biological, lifestyle, and psychological factors show a longitudinal association with frailty. These factors should be considered when developing interventions aimed at preventing and/or reducing the burden associated with frailty among community-dwelling older adults.
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 7,912 |
Q: Finding JSON objects in mongoDB I'm trying to find objects using the built-in queries and it just doesn't work.
My JSON file is something like this:
{ "Text1":
{
"id":"2"
},
"Text2":
{
"id":"2,3"
},
"Text3":
{
"id":"1"
}
}
And I write this db.myCollection.find({"id":2})
And it doesn't find anything.
When I write db.myCollection.find() it shows all the data as it should.
Anyone knows how to do it correctly?
A: It's hard to change the data structure, but since you want just the matching sub-document and you don't know where your target sub-document is (for example, whether the query should be on Text1 or Text2, ...), there is a good data structure for this:
{
"_id" : ObjectId("548dd9261a01c68fab8d67d7"),
"pair" : [
{
"id" : "2",
"key" : "Text1"
},
{
"id" : [
"2",
"3"
],
"key" : "Text2"
},
{
"id" : "1",
"key" : "Text3"
}
]
}
and your query is:
db.myCollection.findOne({'pair.id' : "2"} , {'pair.$':1, _id : 0}).pair // there are better ways (such as aggregation instead of the above query)
as a result you will have:
{
"0" : {
"id" : "2",
"key" : "Text1"
}
}
Update 1 (newbie way)
If you want all the matching documents, not just one, use this:
var result = [];
db.myCollection.find({'pair.id' : "2"} , {'pair.$':1, _id : 0}).forEach(function(item)
{
result.push(item.pair);
});
// the output will be in result
Update 2
Use this query to get all matching sub-documents:
db.myCollection.aggregate
(
{ $unwind: '$pair' },
{ $match : {'pair.id' : "2"} }
).result
it produces output like:
{
"0" : {
"_id" : ObjectId("548deb511a01c68fab8d67db"),
"pair" : {
"id" : "2",
"key" : "Text1"
}
},
"1" : {
"_id" : ObjectId("548deb511a01c68fab8d67db"),
"pair" : {
"id" : [
"2",
"3"
],
"key" : "Text2"
}
}
}
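What the $unwind and $match stages do here can be mirrored in plain Python over the restructured sample document. This is only an illustration of the pipeline's semantics (one output document per matching array element, whether `id` is a scalar or an array), not a MongoDB driver call.

```python
# Sample document in the restructured "pair" shape from the answer above.
doc = {
    "_id": "548deb511a01c68fab8d67db",
    "pair": [
        {"id": "2", "key": "Text1"},
        {"id": ["2", "3"], "key": "Text2"},
        {"id": "1", "key": "Text3"},
    ],
}

def unwind_match(documents, target_id):
    """Mimic {$unwind: '$pair'} followed by {$match: {'pair.id': target_id}}."""
    out = []
    for d in documents:
        for sub in d["pair"]:  # $unwind: emit one document per array element
            ids = sub["id"] if isinstance(sub["id"], list) else [sub["id"]]
            if target_id in ids:  # $match works on scalar or array-valued fields
                out.append({"_id": d["_id"], "pair": sub})
    return out

matches = unwind_match([doc], "2")
print([m["pair"]["key"] for m in matches])  # ['Text1', 'Text2']
```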
A: Since your query specifies a field in a subdocument, this is what will work; see the .find() documentation.
db.myCollection.find({"Text1.id" : "2"}, {"Text1.id": true})
{ "_id" : ObjectId("548dd798e2fa652e675af11d"), "Text1" : { "id" : "2" } }
If the query could be on "Text1" or "Text2", the best thing to do here, as mentioned in the accepted answer, is changing your document structure. This can be easily done using the "Bulk" API.
var bulk = db.mycollection.initializeOrderedBulkOp(),
count = 0;
db.mycollection.find().forEach(function(doc) {
var pair = [];
for(var key in doc) {
if(key !== "_id") {
var id = doc[key]["id"].split(/[, ]/);
pair.push({"key": key, "id": id});
}
}
bulk.find({"_id": doc._id}).replaceOne({ "pair": pair });
count++; if (count % 300 == 0){
// Execute per 300 operations and re-Init
bulk.execute();
bulk = db.mycollection.initializeOrderedBulkOp();
}
})
// Clean up queues
if (count % 300 != 0 )
bulk.execute();
Your document now look like this:
{
"_id" : ObjectId("55edddc6602d0b4fd53a48d8"),
"pair" : [
{
"key" : "Text1",
"id" : [
"2"
]
},
{
"key" : "Text2",
"id" : [
"2",
"3"
]
},
{
"key" : "Text3",
"id" : [
"1"
]
}
]
}
Running the following query:
db.mycollection.aggregate([
{ "$project": {
"pair": {
"$setDifference": [
{ "$map": {
"input": "$pair",
"as": "pr",
"in": {
"$cond": [
{ "$setIsSubset": [ ["2"], "$$pr.id" ]},
"$$pr",
false
]
}
}},
[false]
]
}
}}
])
returns:
{
"_id" : ObjectId("55edddc6602d0b4fd53a48d8"),
"pair" : [
{
"key" : "Text1",
"id" : [
"2"
]
},
{
"key" : "Text2",
"id" : [
"2",
"3"
]
}
]
}
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 1,318 |
Tan Ning (born 28 May 1990) is a Chinese football player.
Club career
He played for the amateur club Nanjing Tehu in the Nanjing City League in 2007.
References
External links
Player profile at Sodasoccer.com
1990 births
Living people
Sportspeople from Nanjing
Chinese footballers
Footballers from Jiangsu
Guangzhou F.C. players
Chinese Super League players
Association football goalkeepers | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 6,584 |
American Eagle Outfitters in Murray, UT
2 American Eagle Outfitters locations found near Murray
Aerie Store - 6191 S State St Suite 1955
Rating: 4.4 (15 Reviews)
6191 S State St Suite 1955, Murray UT 84107
American Eagle Outfitters - 6191 S State St Suite 328
6191 S State St Suite 328, Murray UT 84107
American Eagle Outfitters Stores in The Nearby Cities
American Eagle Outfitters in Sandy
American Eagle Outfitters in Salt Lake City
Murray, Utah
American Eagle Outfitters is an accessories and clothing retailer in the United States, headquartered in the Southside Works neighborhood of Pittsburgh, Pennsylvania. The brand was founded in 1977 and is the parent company of the brand Aerie, with a target audience of female and male college students. There are more than 949 American Eagle Outfitters locations in the country, employing approximately 6,600 people. These stores are known for their polo shirts, low-rise jeans, sweatpants, graphic t-shirts and more.
Similar Stores in Murray
Burlington5976 State St, Murray
Victoria's Secret Murray2 Locations
Journeys Murray2 Locations
Madewell6191 State St #C-222, Murray | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 9,617 |
He is believed to have been born in Ere (now part of Tournai), and was active between 1638 and 1660. We know very little about his life beyond his works. He worked in Tournai in 1638, joining that city's Guild of Saint Luke.
"redpajama_set_name": "RedPajamaWikipedia"
} | 4,081 |
Breakfast on the Moon #3
Breakfast On The Moon principally celebrates the 50th anniversaries of the Apollo manned missions to the Moon. Breakfast On The Moon #3 is the third in the series.
Jim Lovell
Part 1 of 7 parts. It is incredibly fitting that the Commander of Apollo 13, Captain James Lovell, should speak to the world on this particular day. He speaks both of his flight around the Moon on Apollo 8 and his storied flight around the Moon on Apollo 13.
C D Carson
Part 2 of 7 parts. Part 2 continues the celebration of the Apollo missions with a commentary by C D Carson, a notoriously avid "Moon First" advocate.
Janet Ivey
Part 3 of 7 parts. Part 3 highlights Janet Ivey of http://janetsplanet.com as she discusses a proposal to make July 20th Space Exploration Day, a permanent (nonpaid) Federal Holiday. July 20th is, of course, the day Apollo 11 landed on the Moon, and Neil Armstrong became the first human to set foot on another world.
Mark and Karen Lucas
Part 4 of 7 parts. Mark and Karen Lucas are representatives from Yuri's Night. This year, we invited Yuri's Night to send representatives, as we wanted to honor Yuri Gagarin's pioneering first human spaceflight.
Fred Becker
Part 5 of 7 parts. To honor Space Shuttle STS-1, the first space shuttle to orbit the Earth, Fred Becker speaks of his times working for NASA on STS-1, backdropped by videos of the actual flight.
Anita Gale
Part 6 of 7 parts. Our celebration of STS-1 continues as Anita Gale, a retired Boeing engineer with 40 years of experience as a Project Engineer and Systems Engineer on the Space Shuttle and Commercial Crew programs, walks us through what it was like to be an engineer working to build the Space Shuttle.
William Johnstone
Part 7 of 7 parts. The Hubble Space Telescope is arguably the single most important scientific instrument ever put in space. Bill was the Lockheed Martin manager in charge of all aspects of constructing those portions of the Hubble not directly related to its optics, and as such he helps us celebrate an instrument that literally expanded our view of the universe far beyond what had been possible until then.
High-pressure hose, 1.5 m, ID 8, incl. connection parts, curved outflow (63888860): connection hose in longlife quality with 2x M22 x 1.5, with angled connection on one side.
DIDO Wiki
dido:public:ra:1.4_req:2_nonfunc:28_manageability:06_system

4.3.5.3 System Manageability Issues

Subsystem, Component and Module Lifecycle Issues

"Studies have shown the average software program lifespan over the last 20 years to be around 6-8 years. Longevity increases somewhat for larger programs, so for extremely large complex programs (i.e., over a million Lines of Code - LOC) the average climbs as high as 12-14 years."1) Obviously, there is not just the lifespan of the target system; there are independent lifespans for each version of each subsystem, module or component that is developed externally. For example, the Windows Operating System (OS) first appeared in the mid-1980s with version 1.0,2) and the current version is 10. About every 10 years, Microsoft releases another major version of Windows.3) IPv4 was originally available in 1983. Windows 7 was released in October 2009, when IPv4 was the dominant Internet Protocol (IP). By 2012, IPv6 gained dominance.4) Therefore, if your system was released in 2010 using IPv4 and Windows 7, by 2012 the network protocol needed to be upgraded, which can have cascading maintenance effects throughout your system. By 2015, Windows 10 was released, again having a cascading effect on upgrades.

Figure 1, developed by the Industrial Internet Consortium,5) illustrates some of the architectural components required in a generic Industrial Internet of Things (IIoT) system. When a system is deployed, each of these components needs to be managed, and each has its own unique System Lifecycle which evolves independently of the target system.

Note: Also see 2.3.4.2.2 Data-in-Motion for a further discussion of the DIDO Layers.

System Monitoring

The fundamental requirement for manageability of any system is the collection of data about the system. This is often done with Monitoring Software specifically designed for this task. However, the task of monitoring complex, distributed systems is often difficult and beyond the scope of any particular product. The best place to start is to think of the monitoring in terms of layers; three layers are commonly considered.6)

System Logging

Data Monitoring is the use of Data Logging to collect and store data for analysis, to discover trends or to record the events and actions of an application, a system, or a network. This allows interactions to be tracked using messages. Some of the commonly used logging levels are:

Table 1: Some common logging levels used in applications7)

Logging Level | Description
debug | Fine-grained informational events that are most useful to debug an application.
trace | Information about the flow of execution or threads in an application.
info | Informational messages that highlight the progress of the application at a coarse-grained level.
warn | Potentially harmful situations.
error | Error events that might still allow the application to continue running.
fatal | Very severe error events that will presumably lead the application to abort.

System Management

Project Management Software is software used for project planning, scheduling, resource allocation and change management. It allows project managers (PMs), stakeholders and users to control costs and manage budgeting, quality management and documentation, and it may also be used as an administration system. Project management software is also used for collaboration and communication between project stakeholders. These tools can help throughout the system lifecycle, from requirement analysis through sun-setting the system. In a distributed system, these tools can also play a significant role in determining the health of each node, the network and the overall system of nodes.

Although the publication on Software Metrics for Predicting Maintainability8) is a bit dated, many of its ideas about capturing metrics to measure Maintainability are still relevant today. By studying these metrics and understanding the formulas and the parameters used in them, a lot of insight can be gained into positive and negative Manageability traits.

Table 2: Description of Metrics for Maintainability of Systems (or Projects)8)

Abbrev. | Metric | Description
UR | Un-referenced Requirements | The number of original requirements not referenced by a lower document in the documentation hierarchy.
NR | Non-Referencing Items | The number of items not referencing an original requirement.
M-MC | Module Coupling | A measure of the strength of the relationships between modules.
M-MS | Module Strength | A measure of how strongly related the elements within a module are.
HK-IF | Information Flow | A measure of the control flow and data flow between modules.
R-IF | Integrated Information Flow of Rombach | A measure of inter-module and intra-module complexity based on information flow.
KPL-IF | Information Flow by Kitchenham et al. | A measure of inter-module complexity inspired by Henry & Kafura's information flow metric; since Kitchenham et al. experienced difficulties with Henry & Kafura's definition of flows, they formulated a new set of definitions.
IF4 | Information Flow Complexity | A measure of inter-module complexity based on information flow.
CA-DC | Design Complexity of Card & Agresti | A measure of inter-module and intra-module complexity of a system based on fan-out, number of modules and input/output variables.
COCO | Cocomo Inspired Metric | A selection of appropriate adjustment factors of the intermediate Cocomo metric.
v(G) | Cyclomatic Complexity Number | The number of independent basic paths in a program.
knots | Knots | The number of crossing lines (unstructured goto statements) in a control flow.
RLC | Relative Logical Complexity | The number of binary decisions divided by the number of statements.
Vcd | Comments Volume of Declarations | Total number of characters found in the comments of the declaration section of a module. The declaration section comprises comments before the module heading up to the first executable statement of the module body.
Vcs | Comments Volume of Structures | Total number of characters in the comments found anywhere in the module except in the declaration section.
Ls | Average Length of Variable Names | Mean number of characters of all variables used in a module; unused declared variables are not included.
LOC | Lines of Code | The number of lines in the source code excluding blank lines and comment lines.
E | Software Science Effort | An estimation of programming effort based on the number of operators and operands; a combination of other Software Science metrics.
DAR | Documentation Accuracy Ratio | A verification of the accuracy of the CEI Spec, RS and SDD with respect to the source code.
SCC | Source Code Consistency | The extent to which the source code contains uniform notation, terminology and symbology within itself.

Vendor Lock-in Issues

A major management issue for many projects is the avoidance of Vendor Lock-In. Vendor lock-in restricts the options available to a system (or project) because of the dependency on a sole-source proprietary technology, solution or service provided by a single vendor or vendor partner. This technique can be disabling and demoralizing because customers are effectively prevented from switching to alternate sources for the technology, solution or service, making the customer-vendor relationship one-sided.

Vendor Lock-In reduces the ability to manage costs over the life expectancy of the system (or project) or to avoid risks when a vendor ceases to maintain a critical component of the system (or project), or even when the product ceases to exist.

This can be partially mitigated through the use of Open Source Software (OSS); however, remember that just because something is OSS does not mean there is not a vendor. Additionally, if the OSS software is deprecated or evolves in a way divergent from the requirements of the target system (or project), then the responsibility for the care and maintenance of the OSS has to be covered by the system (or project). There are also OSS solutions which adhere to standards, such as the Data Distribution Service (DDS) vendor Object Computing Incorporated (OCI).

An example of an OSS offering that has suffered from a calamity is ZeroMQ. Even though ZeroMQ is still around and being used, the ZeroMQ OSS effort was shaken by the death of its prime moving force, Pieter Hintjens.9) There are many spinoffs and derivatives of ZeroMQ. Here are a few reported in Wikipedia:10)

• In 2012, two of the original developers forked ZeroMQ as Crossroads I/O.11)12)
• Martin Sustrik started nanomsg,13) a rewrite of the ZeroMQ core library.14)
• In 2012, Dongmin Yu announced his pure Java conversion of ZeroMQ, JeroMQ.15)
• This has inspired further full-native ports of ZeroMQ, such as NetMQ for C#.16)
• In March 2013, Pieter Hintjens announced a new draft of the ZMTP wire-level protocol bringing extensible security mechanisms to ZeroMQ.17)
• Martin Hurton implemented the CurveZMQ authentication and encryption mechanism18) in the core library shortly afterwards.

Not recognizing or managing the risks of this kind of unfortunate occurrence might be expedient, but it is in many ways irresponsible for systems (or projects) with a long lifespan. An alternative to the risks of OSS or proprietary vendor lock-in is the selection of components that are based on standards from a Standards Organization, which offers a wider spectrum of vendors to choose from.

DIDO Specifics

To be added/expanded in future revisions of the DIDO RA.

1) Software Evolution, Blog Post, Mitopia Technologies, https://mitosystems.com/software-evolution/
2) The History of Windows Operating Systems, Vangie Beal, Webopedia, 2 August 2018, Accessed 16 July 2020, https://www.webopedia.com/DidYouKnow/Hardware_Software/history_of_microsoft_windows_operating_system.html#windows-1
3) When will Microsoft end support for your version of Windows or Office?, Ed Bott, ZDNet, 10 April 2018, Accessed 16 July 2020, https://www.zdnet.com/article/when-will-microsoft-pull-the-plug-on-your-version-of-windows-or-office/
4) Six Years Since World Launch, IPv6 Now Dominant Internet Protocol for Many, Internet Society, 6 June 2018, Accessed 16 July 2020, https://www.internetsociety.org/news/press-releases/2018/six-years-since-world-launch-ipv6-now-dominant-internet-protocol-for-many/
5) IIC Connectivity Framework defines IIoT network architecture for scalable interoperability, Industrial Embedded Systems, 18 July 2020, http://industrial.embedded-computing.com/articles/iic-connectivity-framework-defines-iiot-network-architecture-for-scalable-interoperability/
6) Tools for Distributed Systems Monitoring, Łukasz Kufel, Foundations of Computing and Decision Sciences, Vol. 41, 1 December 2016, doi:10.1515/fcds-2016-0014, https://www.researchgate.net/publication/311863266_Tools_for_Distributed_Systems_Monitoring/citation/download
7) Log4j - Logging Levels, Accessed 18 July 2020, https://www.tutorialspoint.com/log4j/log4j_logging_levels.htm
8) Software Metrics for Predicting Maintainability, Marc Frappier, Stan Matwin, and Ali Mili, University of Ottawa, Canadian Space Agency, 1994, Referenced 20 July 2020, http://www.dmi.usherb.ca/~frappier/Papers/tm2.pdf
9) The Life, Ideas, and Legacy of Pieter Hintjens (from ZeroMQ to "A Protocol for Dying"), Evan SooHoo, Medium, 23 September 2018, Accessed 20 July 2020, https://medium.com/@evan_soohoo/the-life-ideas-and-legacy-of-pieter-hintjens-from-zeromq-to-a-protocol-for-dying-fc1673caeaa7
10) ZeroMQ, Wikipedia, Accessed 20 July 2020, https://en.wikipedia.org/wiki/ZeroMQ
11) ZeroMQ and Crossroads I/O: Forking over trademarks, LWN.net, Retrieved 14 July 2012, https://lwn.net/Articles/488732/
12)
13) nanomsg, Retrieved 8 June 2013, http://nanomsg.org/
14) Why should I have written ZeroMQ in C, not C++ (part I), Martin Sústrik, 250bpm, 10 May 2012, Accessed 20 July 2020, http://250bpm.com/blog:4
15) jeromq - java pojo zeromq, zeromq-dev mailing list, Retrieved 23 May 2013, http://lists.zeromq.org/pipermail/zeromq-dev/2012-August/018265.html
16) NetMQ, GitHub, Retrieved 23 May 2013, https://github.com/zeromq/netmq
17) Securing ZeroMQ: draft ZMTP v3.0 Protocol, Hintjens.com, Retrieved 23 May 2013, http://hintjens.com/blog:39
18) CurveZMQ - Security for ZeroMQ, CurveZMQ, Accessed 20 July 2020, http://curvezmq.org/
\section{Introduction}
\label{sec:introduction}
Quasars are among the most luminous astrophysical objects, and are
believed to be powered by accretion onto supermassive black holes
\citep[e.g.][]{Sal64,Lyn69}. They have become a key element in our
current paradigm of galaxy evolution \citep[e.g.,][]{Spr05, Croton06,
Hop08}, and essentially all spheroidal systems at present harbor
massive black holes \citep{KorRic95}, the masses of which are
correlated with many properties of their host systems. Despite their
importance, and intense theoretical activity, a full theory of the
coevolution of galaxies and quasars eludes us.
The current paradigm assumes that every galaxy initially forms in a
gas-rich, rotationally-supported system. Once the dark matter halo
grows to a critical scale some event, most likely a major merger
\citep{Car90, HaiLoe98, CatHaeRee99, KauHae00, Spr05, Hop06, Hop08} or
instability in a cold-stream fed disk \citep{DiM12}, triggers a period
of rapid, obscured star formation, the generation of a stellar bulge
and a growing black hole (BH). Eventually the accreting BH becomes
visible as a quasar, and soon after the star formation is quenched on
a short timescale, perhaps via radiative or mechanical feedback from
the BH \citep[e.g.][]{Sil98, Kin03, WyiLoe03, Sha09, Nat12, AleHic12}.
Understanding the details of this picture remains an active area of
research.
Phenomenological models for quasar demographics often adopt power-law
relations between quasars, galaxies, and dark matter halos
\citep[e.g.,][]{EfsRee88, Car90, WyiLoe02, WyiLoe03, HaiCioOst04,
Mar06, Lid06, Croton09, She09, BooSch10}. In these models, the duty
cycle of quasars is tuned to match the observations, and a generic
conclusion is that the duty cycle is a strong function of halo mass or
quasar luminosity, peaking at a halo mass of $10^{12-13}\,M_\Sol$.
However, these previous models do not incorporate constraints provided
by the galaxy stellar mass function over the interval $0<z<6$. And
yet, a variety of lines of evidence suggest that the relation between
halos and galaxies is highly non-linear, with a characteristic peak in
galaxy formation efficiency at a halo mass of $\sim10^{12}\,M_\Sol$
\citep{Van03, Val04, Man06, Con09, Mos10, TruKlyPri11, BehWecCon12}.
The aim of this paper is to incorporate empirically constrained
relations between galaxies and halos into a simple model for quasar
demographics. We will demonstrate that a model constructed to match
the observed galaxy stellar mass function implies a quasar duty cycle
that is independent of galaxy and halo mass at each redshift. This
has important implications for physical models aimed at understanding
the triggering of quasars and their connection to the evolution of
galaxies.
The outline of the paper is as follows. In \S\ref{sec:model} we
describe the model, in \S\ref{sec:data} the model is compared to data,
and a discussion is presented in \S\ref{sec:discussion}. We conclude
in \S\ref{sec:conclusions}. Where necessary we adopt a $\Lambda$CDM
cosmological model with $\Omega_m=0.28$, $\Omega_\Lambda=0.72$ and
$\sigma_8=0.8$. Unless the $h$ dependence is explicitly specified or
parametrized, we assume $h=0.7$. Dark matter halo masses are quoted
as $M_{\rm vir}$ \citep{Bryan98}. Luminosities are quoted in Watts
and magnitudes in the AB system, and stellar masses assume a
\citet{Cha03} stellar initial mass function.
\section{The model}
\label{sec:model}
Our goal is to construct a simple model that relates galaxies,
quasars, and dark matter halos over the redshift interval $0<z<6$. A
small number of free parameters will characterize the model, and these
parameters will be constrained against observations.
The most constraining observation will be the quasar luminosity
function, and to predict that in our model we could begin with the
observed stellar mass function. However it will be useful later to
have information on how quasars occupy dark matter halos, and for this
reason we begin by specifying a dark matter halo mass function and its
evolution to $z=6$. We adopt the fitting functions of
\citet{Tin08,Tin10} for the halo mass function and large-scale bias,
which represent the latest fits to these parameters from cosmological
$N-$body simulations\footnote{The fits are only calibrated to $z=2$,
but we checked the mass function fit agrees with our $N$-body
simulation to better than a factor of 2 up to $z=6$.}. Note that
here and throughout we consider only parent halos; satellite halos,
also known as subhalos, are not included in the present study. This
is a reasonable approximation at high redshift, as quasars inhabit
highly biased halos on the steeply falling tail of the mass function
and any satellite galaxies of the same mass would live in even more
massive halos which are exponentially rare. This assumption will
break down at lower luminosities, where the satellite fraction can be
expected to rise. This assumption will also fail to account for the
small-scale clustering of quasars, in particular the clustering within
the halo scale of $\lesssim1$ Mpc. When we compare to clustering
measurements in $\S$\ref{sec:clust} we will therefore restrict our
comparison to $R>1\,$Mpc, which is where most of the data lie.
Extending the model to satellites is in principle straightforward, but
requires an assumption about the joint occupation of quasars in
central and satellite galaxies of the same halo.
We adopt empirically constrained relations between galaxy stellar mass
and dark matter halo mass over the interval $0<z<6$ from
\citet{BehWecCon12}. Briefly, these relations were constrained by
populating dark matter halo merger trees with galaxies via
redshift-dependent $M_h-M_{\rm gal}$ relations. Model galaxy stellar mass
functions were then computed by taking into account observational
uncertainties in the stellar mass estimates and galaxy star formation
rates were computed by following the growth of galaxies through the
merger trees. The model stellar mass functions and star formation
rate functions were compared to a comprehensive compilation of
observations. The underlying $M_h-M_{\rm gal}$ relations were varied until
a good match to the data was achieved. The resulting relations agree
with results obtained from other techniques, including abundance
matching, halo occupation models, satellite kinematics, and
gravitational lensing \citep[see][]{BehWecCon12}. We also adopt an
amount of scatter between galaxy mass and halo mass as a function of
redshift implied by the model of \citet{BehWecCon12}. This scatter
increases from $\approx0.2$ dex at $z=0.5$ to $\approx0.5$ dex at
$z=6$, although some of this `scatter' reflects observational
uncertainty.
Galaxies are assigned BHs via the following equation:
\begin{equation}
\frac{M_{\rm BH}}{10^{10}M_\odot} =
10^\alpha\,(1+z)^2\,
\left(\frac{M_{\rm gal}}{10^{10}M_\odot}\right)^\beta \,,
\label{eqn:mbh_mgal}
\end{equation}
where $M_{\rm gal}$ and $M_{\rm BH}$ are the stellar mass of the galaxy and mass
of the BH, respectively. The available data at $z\sim0$ is consistent
with a linear relation between $M_{\rm gal}$ and $M_{\rm BH}$, (i.e.~$\beta=1$)
which is what we adopt herein, with a normalization constant of
$\alpha\approx-3.1$ \citep{HarRix04}. The scaling with redshift is
motivated by observations \citep{McL06,Tar12}, but since we fit for
$\alpha$ at each redshift, any deviation from $(1+z)^2$ will be
absorbed in the redshift-dependence of the parameter $\alpha$. In our
fiducial model we adopt a scatter in this relation of 0.3 dex,
independent of mass, consistent with the observed scatter in the local
$M_{\rm BH}-\sigma$ relation \citep{Tre02}.
We have chosen to relate $M_{\rm BH}$ to the total stellar mass of the
galaxy, rather than specifically to the bulge component. Obviously
for bulge-dominated galaxies the distinction is irrelevant, but the
differences can grow as we include galaxies with a large disk
component. Assuming that bulge properties are the dominant factor in
determining $M_{\rm BH}$, a more refined model would include the evolution
and mass-dependence of the bulge-to-total ratio. However for now we
neglect this distinction. We do find that our results are relatively
robust to modest changes in the slope of the $M_{\rm BH}-M_{\rm gal}$ relation
(see \S\ref{sec:data}) --- and any overall normalization change can be
absorbed into our parameter $\alpha$ --- so there are reasons to
believe a more complex\footnote{Such a model might couple $M_{\rm BH}$ to
$M_{\rm gal}\simeq M_{\rm bulge}$ at high-$z$ but allow low-$z$ galaxies
to (re)grow disks leading to evolution in $M_{\rm BH}-M_{\rm gal}$ but not
$M_{\rm BH}-M_{\rm bulge}$, see
e.g. \citet{Jah09,Cis11,KorBen11,KorBenCor11}.} model would achieve
a similar level of success in fitting the observations.
In addition to the strong observed correlation between $M_{\rm gal}$
and $M_{\rm BH}$, there are well-known correlations between $M_{\rm BH}$ and other
parameters of the galaxy including the velocity dispersion, $\sigma$,
and galaxy size, $R_e$. In fact, \citet{Hop07b} argued for the
existence of a BH fundamental plane (relating $M_{\rm BH}$, $\sigma$, and
$R_e$) that has smaller scatter than any other relationship between
$M_{\rm BH}$ and a single galaxy property. Another option would therefore
have been to connect BHs to galaxies via $\sigma$, as for example done
by \citet{Croton09}, or via the BH fundamental plane. We choose to
use $M_{\rm gal}$ herein because this quantity is readily available
for galaxies to $z=6$, and because the redshift-dependent connection
between galaxies and halos is presently available for galaxy stellar
masses, but not for galaxy velocity dispersions.
The BH mass is converted to a bolometric quasar luminosity through the
Eddington ratio, $L/L_{\rm Edd}\equiv\eta$,
\begin{equation}
L_Q = 3.3\times10^4\ \eta\ \frac{M_{\rm BH}}{\,M_\Sol}\, \,L_\Sol.
\label{eqn:Lq_Mbh}
\end{equation}
In our fiducial model $\eta$ is independent of redshift. We draw
$\eta$ from a lognormal distribution with mean of $\eta=0.1$ and a
dispersion of $0.3\,$dex, in agreement with observations \citep{Kol06,
She08}. In our model the value of the Eddington ratio is degenerate
with the normalization of the $M_{\rm BH}-M_{\rm gal}$ relation and any intrinsic
width in the Eddington ratio distribution is degenerate with scatter
in the $M_{\rm BH}-M_{\rm gal}$ relation. In order to explore this degeneracy we
consider a second model where $\eta$ is 0.1 at low redshift, increases
linearly between $0.5<z<3.5$ to a value of 1.0, and at higher
redshifts $\eta=1.0$ \citep[see e.g.,][]{Wil10b, She12}. These two
models will serve to indicate a reasonable range in possible evolution
in the Eddington ratio.
\begin{figure}[!t]
\begin{center}
\resizebox{3.5in}{!}{\includegraphics{f1.eps}}
\end{center}
\caption{Summary of the model relations at $z=2$. The quasar LF
determines the abundance (see the points on the curve, which label
space densities in units of log Mpc$^{-3}$) of quasars at a given
luminosity (right vertical axis) or BH mass (left vertical axis).
For an assumed lifetime, $t_Q$, this maps to an abundance of
galaxies and the stellar mass function provides the appropriate
galaxy stellar mass (upper horizontal axis). The empirically
constrained $M_{\rm gal}-M_h$ relations from \citet{BehWecCon12}
allow us to map this into a halo mass (lower horizontal axis). The
curve shown is at $z=2$, though the general behavior is similar at
other redshifts with a steep low-mass slope and a shallower high
mass slope (see Figure \ref{fig:mbh_mhalo}). Note the lower
horizontal axis determines the clustering amplitude at fixed
redshift while the left vertical axis determines the quasar
luminosity.}
\label{fig:QAM}
\end{figure}
In order to compare to observations, we must translate $L_Q$ into
magnitudes in a given filter. We adopt the relation between
bolometric luminosity and $i$-band magnitude (k-corrected to $z=2$)
using the relation from \citet{Shen++09}:
\begin{eqnarray}
M_{i}(z=2) &=& \hphantom{-}72.5 - 2.5\,\log L_Q \label{eq:lboldef} \\
&=& -5.26 - 2.5\,\log\left(\eta M_{\rm BH}\right) \\
&=& -30.3 - 2.5\,\left(\log\eta + \alpha\right) - 5\log(1+z) \nonumber \\
& & - 2.5\beta\,\log\left(M_{\rm gal}/10^{10}M_\odot\right) \label{eq:lboldef2}
\,\,,
\end{eqnarray}
where $L_Q$ is in Watts and $M_{\rm BH}$ is in solar masses. The last two
relations follow directly from Equations \ref{eqn:mbh_mgal} and
\ref{eqn:Lq_Mbh}; we include them here to make explicit the connection
between $M_{\rm gal}$ and observed quasar magnitude, and also to emphasize
the fact that $\eta$ and $\alpha$ are perfectly degenerate in our
model. There is scatter in $L_Q$ at fixed $M_{\rm gal}$ which arises from a
combination of scatter in $M_{\rm BH}-M_{\rm gal}$ and $L_Q-M_{\rm BH}$. In our model
we adopt a scatter of 0.3 dex between each of these relations,
resulting in a total scatter between $M_{\rm gal}$ and $L_Q$ of 0.42 dex.
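As a purely numerical illustration of Equations \ref{eqn:mbh_mgal}--\ref{eq:lboldef2} (not part of the fit), consider a galaxy with $M_{\rm gal}=10^{11}\,M_\Sol$ at $z=2$, adopting the local normalization $\alpha=-3.1$ with $\beta=1$, $\eta=0.1$, and no scatter:
\begin{eqnarray}
M_{\rm BH} &=& 10^{-3.1}\,(1+2)^2\,(10)\times10^{10}\,M_\Sol \simeq 7\times10^{8}\,M_\Sol, \nonumber \\
L_Q &=& 3.3\times10^4\times0.1\times7\times10^{8}\,L_\Sol \simeq 2.4\times10^{12}\,L_\Sol \simeq 9\times10^{38}\,{\rm W}, \nonumber \\
M_i(z{=}2) &=& 72.5 - 2.5\log\left(9\times10^{38}\right) \simeq -24.9, \nonumber
\end{eqnarray}
i.e.~a luminosity comparable to that of bright observed quasars.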
There are two free parameters in this model at each redshift: the
normalization of the $M_{\rm BH}-M_{\rm gal}$ relation, specified by $\alpha$, and
the quasar duty cycle, $f_{\rm on}$. These two parameters are fit to
the observed quasar LF via $\chi^2$ minimization. An important, and
novel feature of this model is that we adopt a constant duty cycle,
independent of luminosity, $M_{\rm BH}$ or $M_h$. Sometimes the duty cycle
is recast into a ``lifetime'' using the Hubble time: $t_Q\equiv f_{\rm
on}t_H$. As we will demonstrate in the following section, both of
these parameters are highly constrained by the observed quasar LF.
The resulting relations between galaxies, halos, and quasars are
illustrated in Figure \ref{fig:QAM}. These relations represent the
best-fit model constrained by the quasar LF at $z=2$ (see
$\S$\ref{sec:lf}). The quasar LF allows us to relate luminosity to
number density. For an assumed duty cycle we then have the abundance
of BHs of that mass. Similarly the stellar mass function maps galaxy
mass to abundance. Thus at fixed duty cycle we obtain a tight
constraint on $M_{\rm BH}-M_{\rm gal}$. As the stellar mass function and quasar
LF contain significant curvature only one combination of normalization
and duty-cycle provides a good fit to the data for a range of
luminosities (unless we allow significant variation in the lifetime as
a function of luminosity).
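The matching procedure sketched above can be written compactly. The following toy Python snippet is our own illustration; the power-law abundances and all numbers are stand-ins, not the fits used in this paper. It equates the cumulative number density of quasars brighter than $L_Q$, divided by the duty cycle, with the cumulative density of galaxies above $M_{\rm gal}$:

```python
# Toy cumulative abundances (per Mpc^3); purely illustrative power laws.
def n_gal_cum(mgal):
    """Number density of galaxies with stellar mass above mgal [Msun]."""
    return 1e-2 * (mgal / 1e10) ** -2.0

def n_q_cum(lq, f_on=1e-2):
    """Number density of quasars more luminous than lq [Lsun], duty cycle f_on."""
    return f_on * 1e-2 * (lq / 2.4e11) ** -2.0

def mgal_of_lq(lq, f_on=1e-2):
    """Invert n_q_cum(lq) = f_on * n_gal_cum(mgal) for the host galaxy mass."""
    target = n_q_cum(lq, f_on) / f_on       # required galaxy abundance
    return 1e10 * (target / 1e-2) ** -0.5   # invert the toy n_gal_cum

# With these choices the relation is linear: a 10x more luminous quasar
# sits in a 10x more massive host.
```

In this toy the duty cycle cancels by construction; in the real model it does not, because the stellar mass function and quasar LF have different shapes, which is what pins down $f_{\rm on}$ and $\alpha$ jointly.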
Figure \ref{fig:lf_params} shows how the predicted quasar luminosity
function at $z=2$ depends upon several parameters in the model. The
amount of scatter in the $L_Q-M_{\rm gal}$ relation is important for the
shape at high luminosity, and indeed the abundance of luminous quasars
provides a lower limit on the scatter for any model which places
quasars in halos on the exponentially falling part of the mass
function. We see that a model with no scatter in the $L_Q-M_{\rm gal}$
relation predicts drastically fewer bright quasars and a steeper
bright-end slope than a model including scatter \citep[see also][for
related discussion]{WhiMarCoh08, ShaWeiShe10, DeGraf11, TraSte12}.
Variations in the BH mass at fixed galaxy mass ($\alpha$) change both
the normalization and shape of the luminosity function while variation
in the slope of the relation ($\beta$) has a large effect on the shape
of the LF both at low and high luminosity.
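The importance of scatter for the bright end can be seen in a simple Monte Carlo. The sketch below is our own illustration, with a toy power-law host population standing in for the steeply falling mass function; it maps galaxy masses to quasar luminosities through Equations \ref{eqn:mbh_mgal} and \ref{eqn:Lq_Mbh} at $z=2$, with and without the 0.42 dex total scatter, and counts sources above a high luminosity threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy host population: p(M) ~ M^-4 above 1e10 Msun (inverse-CDF sampling),
# a stand-in for the steeply falling galaxy/halo mass function.
u = rng.random(2_000_000)
mgal = 1e10 * u ** (-1.0 / 3.0)

# Median luminosity from Eqs. (1)-(2) with alpha=-3.1, beta=1, eta=0.1, z=2.
mbh = 10.0 ** -3.1 * (1 + 2) ** 2 * (mgal / 1e10) * 1e10  # [Msun]
log_lq_med = np.log10(3.3e4 * 0.1 * mbh)                  # log10 L_Q [Lsun]

# Add the combined 0.42 dex lognormal scatter in L_Q at fixed M_gal.
log_lq = log_lq_med + rng.normal(0.0, 0.42, mgal.size)

thresh = 13.0  # log10 Lsun
n_noscatter = int(np.sum(log_lq_med > thresh))
n_scatter = int(np.sum(log_lq > thresh))
# n_scatter exceeds n_noscatter by a large factor (an Eddington-type bias).
```

Because the counts fall steeply with mass, upscattering a few of the far more numerous faint objects dominates over downscattering rare bright ones, inflating the bright-end counts by well over an order of magnitude.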
\begin{figure}[!t]
\begin{center}
\resizebox{3.5in}{!}{\includegraphics{f2.eps}}
\end{center}
\caption{Variation in the predicted luminosity function of quasars at
$z=2$ as a function of the parameters in our model. The dashed
(red) line shows how the inclusion of scatter in the $M_{\rm BH}-M_{\rm
gal}$ relation is important at the high mass end, with models
including more scatter predicting more luminous quasars. Variations
due to changes in the normalization of the $M_{\rm BH}-M_{\rm gal}$ relation
($-3.4<\alpha<-2.8$; Equation \ref{eqn:mbh_mgal}) are shown by the
dotted (blue) lines, and we see this parameter changes both the
normalization and shape of the LF since the galaxy stellar mass
function has a particular shape. Finally the dot-dashed (green)
line shows variation in the logarithmic slope of the $M_{\rm BH}-M_{\rm
gal}$ relation ($0.5<\beta<1.5$; Equation \ref{eqn:mbh_mgal}).}
\label{fig:lf_params}
\end{figure}
\begin{figure*}[!t]
\begin{center}
\resizebox{6.5in}{!}{\includegraphics{f3.eps}}
\end{center}
\caption{The quasar luminosity function predicted by our model at
different redshifts, as compared to the observations and a simple
model in which quasar luminosity is tied to halo, not galaxy, mass
(denoted PLM for power-law model). The data are from
\citet[][COMBO-17; open squares]{Wol03}, \citet[][SDSS; solid
circles]{Ric06}, \citet[][2SLAQ+SDSS; open diamonds]{Cro09},
\citet[][NDWFS+DLS; stars]{Gli10}, and \citet[][COSMOS;
crosses]{Mas12}. The lifetime, $t_Q$, and the $M_{\rm BH}-M_{\rm gal}$
normalization, $\alpha$, are fit in each panel and the grey region
illustrates the $1\,\sigma$ uncertainty in the model prediction.
Only black symbols are included in the fits; the grey symbols
generally represent data of lower quality and are included for
comparison purposes only.}
\label{fig:lf}
\end{figure*}
\begin{figure}[!t]
\begin{center}
\resizebox{3.5in}{!}{\includegraphics{f4.eps}}
\end{center}
\caption{Upper Panel: The duty-cycle, or quasar lifetime, as a
function of redshift. We define $t_Q=f_{\rm on}t_H$ where $t_H$ is
the Hubble time at redshift $z$ and $f_{\rm on}$ is the probability
that a BH is a luminous quasar (which is independent of luminosity
in our model). Also shown are lines of constant $f_{\rm
on}=10^{-1}$, $10^{-2}$ and $10^{-3}$. Middle Panel: Evolution of
the normalization of the $M_{\rm BH}-M_{\rm gal}$ relation in our model (for two
choices of evolution in $\eta$; solid and dashed lines) compared to
results from the literature. The solid band is the normalization at
$z=0$ \citep{HarRix04}. Plus symbols and diamonds are individual
measurements from \citet{Cis11} and \citet{Jah09}, respectively.
Triangles are binned estimates from \citet{Dec10}, squares are
binned estimates from \citet{McL06}, the solid circle is a binned
measurement from \citet{Pen06}, and stars are the average of two
quasars from \citet{Tar12} for two choices for estimating galaxy
masses. Lower Panel: Assumed evolution in the Eddington ratio,
$\eta$, for the two models shown in the middle panel.}
\label{fig:dutycycle}
\end{figure}
\vspace{2cm}
\section{Comparison with observational data}
\label{sec:data}
\subsection{The Quasar Luminosity Function}
\label{sec:lf}
Figure \ref{fig:lf} shows the predictions of our model compared to a
compilation of observational data from \citet[][COMBO-17; open
squares]{Wol03}, \citet[][SDSS; solid circles]{Ric06},
\citet[][2SLAQ+SDSS; open diamonds]{Cro09}, \citet[][NDWFS+DLS;
stars]{Gli10}, and \citet[][COSMOS; crosses]{Mas12}. We have adopted
the following transformation between filters
\citep{Wol03,Ric06,Cro09}:
\begin{eqnarray}
M_i(z=2) &=& M_g(z=2) - 0.25 \\
&=& M_{1450} - 0.29 \\
&=& M_{b_J} - 0.71
\end{eqnarray}
in order to convert all of the measurements to the $M_i(z=2)$ system
for comparison.
The lifetime, $t_Q$, normalization of the $M_{\rm BH}-M_{\rm gal}$ relation
($\alpha$ in Equation \ref{eqn:mbh_mgal}) and scatter have been fit to
the data at each redshift. The grey shaded regions mark the $1\sigma$
range of allowed models. In most panels the formal errors are so
small that the grey band is buried behind the best-fit relation. The
constraints on the parameters are so strong because the data at $z<4$
samples luminosities both above and below the knee in the LF and
because the formal errors on the LF are small.
For comparison we also show the luminosity function that results from
assuming a power-law relation between quasar luminosity and halo mass,
as has been assumed in many early works \citep[e.g.][]{EfsRee88,
Car90, WyiLoe02, WyiLoe03, HaiCioOst04, Mar06, Lid06, Croton09,
She09, BooSch10}. This model is characterized by two free
parameters, the duty cycle and the normalization of the (power-law)
relation between quasar luminosity and halo mass\footnote{The
particular model we consider is $L_Q=\gamma M_h^{1.4}$, where
$\gamma$ is the free normalization and the index, 1.4, was chosen
from the power-law model of \citet{Croton09}.}. The fundamental
difference between our model's predictions and these power-law models
is that we explicitly take into account the efficiency of galaxy
formation as a function of mass and redshift (see Figure
\ref{fig:QAM}). The two models differ less significantly at higher
redshifts for reasons to be discussed below.
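In the absence of scatter the PLM prediction follows schematically from
the halo mass function, $n(M_h)$, as
\begin{equation}
\Phi_{\rm PLM}(L_Q)\,dL_Q = f_{\rm on}\,n(M_h)\,dM_h,
\qquad M_h = (L_Q/\gamma)^{1/1.4},
\end{equation}
so its shape is fixed entirely by the halo mass function, with $f_{\rm
on}$ and $\gamma$ setting the normalization and luminosity scale.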
\begin{figure}[!t]
\begin{center}
\resizebox{3.8in}{!}{\includegraphics{f5.eps}}
\end{center}
\caption{The quasar luminosity function at high redshift. At $z=4.75$
the data are from \citet{Ric06} and at $z=6$ the data are from
\citet{Wil10}. The best-fit model (solid line) and $1\sigma$
uncertainty (shaded band) includes variation in the duty cycle,
normalization in the $M_{\rm BH}-M_{\rm gal}$ relation and scatter in the
relation between $M_{\rm gal}$ and $L_Q$. This is in contrast to the lower
redshift fits, where the scatter was held fixed at $0.42$ dex. At
high redshift the best-fit scatter exceeds 1 dex. The $1\sigma$
range of allowed duty cycles ($f_{\rm on}$) is included in the
legend in each panel.}
\label{fig:hz}
\end{figure}
In Figure \ref{fig:dutycycle} we show the quasar lifetime, $t_Q$ (or,
equivalently, the duty cycle), the normalization of the $M_{\rm BH}-M_{\rm gal}$
relation, $\alpha$, and our two model choices for evolution in $\eta$.
In the top panel of Figure \ref{fig:dutycycle} we include lines of
constant duty cycles of $10^{-1}$, $10^{-2}$ and $10^{-3}$. For
reference, the Salpeter time is the e-folding time for a BH growing at
a fraction $\eta$ of the Eddington luminosity with a radiative
efficiency of $\epsilon$ and is defined as $t_{\rm Salp} = 4\times10^8
(\epsilon/\eta)\,$yr. It is striking how little $t_Q$ varies over
$0.5<z<3$. The evidence for a decrease in $t_Q$ at $z>3$ should be
regarded as tentative, as the data used to constrain these parameters
becomes rather uncertain, is compiled from heterogeneous sources, and,
at $z=4.25$, probes a very limited dynamic range. Moreover, at all
redshifts the formal errors are almost certainly underestimates
because the errors on the observed quasar LFs are only the Poisson
uncertainties, which are vanishingly small for many luminosity
bins. Our estimates of $t_Q$ are in good agreement with quasar
lifetimes inferred by other methods, as summarized in \citet{Mar04}.
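The normalization of $t_{\rm Salp}$ follows from noting that a BH
radiating at $L=\eta L_{\rm Edd}$, with $L_{\rm Edd}=4\pi G M_{\rm BH}
m_p c/\sigma_T$, grows at $\dot{M}_{\rm BH}\simeq L/(\epsilon c^2)$
(neglecting the $1-\epsilon$ correction to the accreted mass), so that
the e-folding time is
\begin{equation}
t_{\rm Salp} = \frac{M_{\rm BH}}{\dot{M}_{\rm BH}}
\simeq \frac{\epsilon}{\eta}\,\frac{\sigma_T c}{4\pi G m_p}
\approx 4\times10^8\,\left(\frac{\epsilon}{\eta}\right)\,{\rm yr}.
\end{equation}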
In the middle panel of Figure \ref{fig:dutycycle} we show the
evolution of the normalization of the $M_{\rm BH}-M_{\rm gal}$ relation as
inferred from our model, assuming either a constant or evolving
Eddington ratio. In this panel we also include the normalization
measured at $z\sim0$ \citep{HarRix04}, and estimates of its evolution
in samples of massive galaxies to $z\sim4$. The two models produce
very different evolution in normalization of the $M_{\rm BH}-M_{\rm gal}$
relation, as expected from Equation \ref{eq:lboldef2}. The model with
constant $\eta$ produces marginally better agreement with the data at
$z<2.5$ although given the likely large systematic uncertainties in
the measurements, it is difficult to draw strong conclusions. In
particular, scatter in the relation between $M_{\rm gal}$ and $L_Q$ can
result in significant biases when inferring mean properties in flux
limited samples \citep{Lau07a, Lau07b}. Among recent models, the
models of \citet{Hop07a} and \citet{Croton06} predict roughly an order
of magnitude increase in $M_{\rm BH}$ at $M_{\rm gal}\sim 10^{10}$ between $z=0$
and $z=3$. In contrast, the simulations of \citet{Sij07} and the
semi-analytic model of \citet{Fan12} predict almost no evolution at
the massive end.
Model fits to the highest redshift quasar LFs are shown separately in
Figure \ref{fig:hz}. In this case we have included the scatter
between $M_{\rm BH}$ and $L_Q$ as an additional free parameter. This was
necessary because the fiducial model, with a scatter of 0.42 dex,
failed to match the high redshift data without extremely small $f_{\rm
on}$ and $\alpha$\footnote{We have gone back and re-fit the
lower-redshift data allowing the scatter to be an additional free
parameter and found a best-fit scatter that agrees to within
$\approx0.1$ dex of our fiducial value. Thus, for simplicity, we
decided to keep the scatter fixed at 0.42 dex at lower redshifts.}.
For $z=4.75$ and $z=6$ the best-fit scatter is 1.2 and $1.4\,$dex,
respectively. The $1\sigma$ range of plausible duty cycles, $f_{\rm
on}$, spans $2\,$dex at these redshifts ($-2.6<{\rm log}\,f_{\rm
on}<-0.6$ at $z=4.75$ and $-2.8<{\rm log}\,f_{\rm on}<-0.7$ at $z=6$).
Even though the model is not well constrained at high redshift, it is
worth considering these data in some detail. In particular, if we
focus on $z=6$ we see that the duty cycle is still less than unity and
the scatter in $L_Q-M_{\rm gal}$ large. Our model prefers this solution
because the optically observed quasars are extremely rare ($\Phi\sim
10^{-9}\,{\rm Mpc}^{-3}{\rm mag}^{-1}$) and yet the luminosity
function is not falling exponentially. If quasars inhabited very high
mass halos and the luminosity was tightly correlated with halo mass
then we would expect an exponential decline at the bright-end of the
luminosity function. Future constraints on the quasar LF at high
redshift would be very valuable for constraining the duty cycle at
these epochs. Since rapid accretion rates with long duty cycles seem
to be necessary to produce massive BHs within the first Gyr of cosmic
time, such constraints would provide information on the visibility of
this growth in the rest-frame ultraviolet and optical.
Returning to lower redshifts, Figure \ref{fig:qsolfbin} shows the
model LFs at $z=0.5$ and $z=2.4$. Here we consider the contribution
to the total LF from quasars in halos of different masses.
Specifically, we construct model LFs by selecting quasars residing in
halos less massive than log$(M_h/\,M_\Sol)=$ 13.0, 13.5, and 14.0. The
purpose of this figure is to demonstrate that massive halos contribute
very little to the total LF. In fact, the model is almost entirely
insensitive to what happens in halos more massive than
log$(M_h/\,M_\Sol)=13.5$, owing to their rarity relative to lower mass
halos. This has important consequences for any model that is tuned to
match the quasar LF, as we discuss in $\S$\ref{sec:discussion}.
\begin{figure}[!t]
\begin{center}
\resizebox{3.5in}{!}{\includegraphics{f6.eps}}
\end{center}
\caption{Contribution to the quasar LF from quasars in different halo
masses. The curves represent the model LF computed including halos
less massive than the values shown in the legend (in units of
log$\,M_\Sol$). The quasar LF is almost entirely insensitive to the
presence or absence of quasars in halos more massive than
$10^{13.5}\,M_\Sol$.}
\label{fig:qsolfbin}
\end{figure}
\begin{figure}[!t]
\begin{center}
\resizebox{3.7in}{!}{\includegraphics{f7.eps}}
\end{center}
\caption{The projected correlation function, $w_p(R)$, vs. projected
distance, $R$, at 5 redshifts chosen to be representative of the
data. We include results from \citet[][R09]{Ros09},
\citet[][W12]{Whi12}, and \citet[][S09]{Shen++09}, all of which are
based on data from the Sloan Digital Sky Survey. At the highest
redshift there is some tension between the model and data, but the
error bars are large and the simulation box is too small to provide
model predictions at the largest scales. Future measurements of the
clustering of both low and high redshift quasars will provide
powerful constraints on the model.}
\label{fig:wp}
\end{figure}
In Figure \ref{fig:lf} we adopted our fiducial value for the slope of
the $M_{\rm BH}-M_{\rm gal}$ relation. We find, however, that equally good
fits can be obtained if we modify the slope of this relation to
$\beta=4/3$ or $5/3$, or even if we change the overall normalization
of the $M_{\rm gal}-M_h$ relation. These changes result in different
best-fit values for $t_Q$ and $\alpha$. Future constraints on the
$M_{\rm BH}-M_{\rm gal}$ relation as a function of redshift will, in the context
of our model, provide strong constraints on the evolution of the
scatter and the mean Eddington ratio. Within the parameter space
allowed by the data there are several degeneracies. For example, an
increase in $t_Q$ can compensate for an increase in the scatter in the
$L_Q-M_h$ relation. Increased scatter can also be compensated by
decreasing $\alpha$, and an increase in $\alpha$ can be compensated
by a decrease in $t_Q$.
\subsection{Quasar Clustering}
\label{sec:clust}
With the model parameters constrained by the quasar LF, we are now
able to make predictions for the clustering of quasars as a function
of luminosity and redshift. Recall that our model is characterized by
two parameters, the quasar lifetime, $t_Q$, and the normalization of
the $M_{\rm BH}-M_{\rm gal}$ relation, $\alpha$. In the model, we assume that
quasars are a random sample of the BHs in halos, and therefore $t_Q$
has no effect on the clustering of quasars. The clustering is quite
weakly dependent on the scatter over the luminosity range probed by
current and planned surveys. The clustering is therefore only
sensitive to $\alpha$, and this parameter is well-constrained at $z<4$
(see Figure \ref{fig:dutycycle}). Moreover, $\alpha$ has an
increasingly minor effect on the predicted clustering at higher
redshifts.
Figure \ref{fig:wp} shows a comparison of our model and the data on
the projected autocorrelation function, $w_p(R)$, as a function of
projected (comoving) distance, $R$, for a variety of redshifts chosen
to illustrate the current constraints. We have computed the model
correlation function by populating the halos drawn from an N-body
simulation\footnote{The simulation employed $2048^3$ particles in a
cubic box of side length 1 Gpc with a force softening of $14\,$kpc
(comoving) and was run with the TreePM code of
\protect\citet{TreePM}. Halos were found with a friends-of-friends
algorithm \citep{DEFW} with a linking length of 0.168 times the mean
inter-particle spacing. Spherical over-density masses were computed
for each halo (including a correction for finite resolution). For
the range of halo masses and redshifts of interest, masses defined
via $180\times$ the background density are almost identical to the
`virial' definition employed by \protect\cite{BehWecCon12}.} with BHs
using the best-fitting relations derived above, and then calculating
the clustering of BHs within the luminosity range of each
observational sample. This allows us to take into account the
scale-dependent bias and non-linearities, which are important on Mpc
scales.
\begin{figure}[!t]
\begin{center}
\resizebox{3.5in}{!}{\includegraphics{f8.eps}}
\end{center}
\caption{The large-scale bias predicted by our model as a function of
luminosity for a number of redshifts. The relation is shallow at
low luminosity due to the steepness of the $M_{\rm BH}-M_h$ relation at
low mass (see Figure \ref{fig:mbh_mhalo}). The steepness of the
relation at high luminosity depends on the scatter in the model,
being less steep for more scatter. We have marked on the curves
where the quasar number density is $5\times10^{-7}{\rm Mpc}^{-3}$,
which corresponds to of order 100 quasar pairs within $20\,$Mpc in a
survey volume of $10^{10}\,{\rm Mpc}^{3}$. To accurately measure
the bias of objects at lower space densities (and brighter
luminosities) one would need to resort to cross-correlations.}
\label{fig:bias}
\end{figure}
The majority of models assume that quasar activity occurs due to the
major merger of two gas-rich galaxies, since this scenario provides
the rapid and violent event needed to funnel fuel to the center of the
galaxy \citep[e.g. via the bars-within-bars instability;][]{Shl89} and
feed the central engine while at the same time providing a connection
between BH fueling and the growth of a spheroidal stellar component
\citep[e.g.,][]{Hop08}. In computing the clustering of quasars we
have populated the halos in the simulation at random, neglecting any
properties of the halos apart from their mass (e.g., whether they have
had a recent major merger). However, the probability that a halo will
undergo a major merger in a short redshift interval is only weakly
dependent on the mass of the halo \citep{LacCol93, Per03, CohWhi05,
WetCohWhi09, FakMa09, Hop10b}, i.e., the mass function of such halos
is almost proportional to the mass function of the parent population.
Moreover, the clustering properties of recently merged halos are
similar to a random sample of the population with the same mass
distribution \citep{Per03, WetCohWhi09}. Thus, our procedure for
randomly selecting halos is consistent with (though not a strong
argument in favor of) the major merger scenario for quasar triggering.
The agreement between the data and the model is excellent at $z<3$,
especially considering that the model was only tuned to the quasar LF.
The inclusion of satellite quasars would slightly increase the model
prediction in the lowest redshift bin ($z\simeq 0.5$), but any
satellite contribution is quite small for the higher redshifts. The
model under-predicts the observed clustering at $z\sim3.7$, although
the errors on the data are large. This model prediction is quite
robust: the $M_{\rm BH}-M_h$ relation at high redshift becomes very steep
(see Figure \ref{fig:mbh_mhalo}, discussed below), and so even a
significant change in $\alpha$ or $\eta$ changes the clustering only
modestly. Similarly, changes in the assumed $L_Q-M_{\rm BH}$ scatter within
the range $0.3-0.6\,$dex do not significantly alter the predicted
clustering. This occurs because a change in scatter induces a change
in $\alpha$ that happens to leave the clustering essentially
unchanged. Future constraints on the clustering of high-redshift
quasars will place strong constraints on this model, as discussed
further in $\S$\ref{sec:imp}, and may indicate that some of our model
assumptions break down as we approach an era of rapid BH growth at
high $z$.
Observationally, it has proven very difficult to measure a dependence
of clustering strength on quasar luminosity \citep[see e.g.,][for a
recent example]{Shen++09}, in part because the significant scatter
between quasar luminosity and halo mass will dilute any intrinsic
relation between clustering strength and luminosity. We address this
issue in Figure \ref{fig:bias}, where we plot the large-scale bias as
a function of luminosity and redshift. Here the model bias was
computed via the relation between bias, halo mass, and cosmology from
\citet{Tin10}.
\begin{figure}[!t]
\begin{center}
\resizebox{3.5in}{!}{\includegraphics{f9.eps}}
\end{center}
\caption{The typical black hole mass in the central galaxy of a halo
of mass $M_h$, vs. $M_h$, for a number of redshifts (corresponding
to the redshifts shown in Figure \ref{fig:lf}), for a model with a
constant Eddington ratio, $\eta$ (top panel), and a model with a
varying $\eta$ (bottom panel). The typical BH mass corresponding to
a fixed $M_h$ increases with $z$, as expected. Note the significant
curvature in the relation, which arises due to our assumption that
galaxy properties regulate the size of black holes and the
well-known inefficiency of galaxy formation at both high and low halo
masses.}
\label{fig:mbh_mhalo}
\end{figure}
We find a very shallow relation between bias and quasar luminosity
below $M_i(z=2)\sim-26$. In our model this occurs for three reasons:
(1) the intrinsic relation between bias and halo mass is very shallow
below the characteristic halo mass, which at $z\sim0$ is
$\sim10^{13}\,M_\Sol$; (2) the $M_{\rm BH}-M_h$ relation becomes very steep at
low mass, implying that a large range in quasar luminosities maps into
a small range in halo masses; (3) scatter in the $M_{\rm gal}-M_h$,
$M_{\rm BH}-M_{\rm gal}$, and $L_Q-M_{\rm BH}$ relations dilutes the strong clustering
in high mass halos. The degree of luminosity dependence (as well as
the absolute value of the bias) is sensitive to the scatter in the
$L_Q-M_h$ relation, with more scatter leading to less $L$-dependence.
This weak luminosity-dependent clustering is also predicted in the
models of \citet{Hop08}, \citet{Croton09} and \citet{She09}.
Figure \ref{fig:bias} demonstrates that we expect significant
luminosity dependent quasar bias only for very luminous quasars.
However, measuring the autocorrelation function of such luminous
quasars is made difficult by their low space densities, which can be
illustrated as follows. The error on the bias in the high-$L$ regime
is dominated by counting statistics. The number of pairs within e.g.,
$20\,$Mpc is $(1/2)\bar{n}_Q^2\left[1+\bar{\xi}_{20}\right]V_{\rm
survey}V_{20}$ where $V_{20}=(4\pi/3)(20\,{\rm Mpc})^3$, $V_{\rm
survey}$ is the survey volume, $\bar{n}_Q$ is the quasar space
density, and $\bar{\xi}$ is the volume average correlation function.
For $\xi(r)=(r_0/r)^2$ we have $\bar{\xi}=3\xi$, and $r_0\sim
10-20\,h^{-1}$Mpc so we expect $\bar{\xi}\sim \mathcal{O}(1)$. One
hundred pairs within $20\,$Mpc would return an error on the bias of
$\sim10\%$, and for a fiducial survey volume of $10^{10}{\rm Mpc}^3$,
this corresponds to a quasar number density of
$\approx5\times10^{-7}\,{\rm Mpc}^{-3}$. The luminosity corresponding
to this number density at each redshift is marked by a solid symbol
along the $b(L)$ relation in Figure \ref{fig:bias}. In order to probe
the bias for quasars at higher luminosities it will be necessary to
resort to cross-correlation techniques, which allow estimates of the
bias of objects with extremely low space density. An appealing method
would be to cross-correlate existing spectroscopic samples of quasars
with samples of galaxies or lower luminosity quasars selected from
deeper photometry in upcoming surveys such as DES, Pan-STARRS, SUMIRE
and LSST.
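As a consistency check on these numbers, setting the pair count to 100
with $\bar{\xi}_{20}\simeq 1$, $V_{20}=(4\pi/3)(20\,{\rm
Mpc})^3\simeq 3.4\times10^4\,{\rm Mpc}^3$ and $V_{\rm
survey}=10^{10}\,{\rm Mpc}^3$ gives
\begin{equation}
\bar{n}_Q \simeq \left[\frac{2\times 100}{\left(1+\bar{\xi}_{20}\right)
V_{\rm survey}V_{20}}\right]^{1/2}
\approx 5\times10^{-7}\,{\rm Mpc}^{-3},
\end{equation}
the number density quoted above.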
\section{Discussion}
\label{sec:discussion}
\subsection{Implications}
\label{sec:imp}
The success of our model in reproducing the basic demographics of
quasars allows us to consider several implications that follow
naturally within our framework.
In Figure \ref{fig:mbh_mhalo} we show the best-fit model $M_h-M_{\rm BH}$
relations from $z=0.5$ to $z=3.75$ (the relations above $z=3.75$ are
highly under-constrained and so are not plotted). As discussed above,
the quasar LF places very weak constraints on the model relations at
$\log(M_h/\,M_\Sol)>13.5$, and so one should interpret the model
relations in Figure \ref{fig:mbh_mhalo} with this in mind. It is also
worth pointing out that while the model formally allows for the
existence of extremely massive BHs with $M_{\rm BH}>10^{10}\,M_\Sol$ residing
within moderately massive halos, at high redshift such halos are very
rare. For example, at $z=4.75$ one expects only of order one halo
with log$(M_h/\,M_\Sol)>$13 per $10^9$ Mpc$^3$.
With average mass accretion histories for halos, we can evolve halos
and hence their black holes through the relations shown in Figure
\ref{fig:mbh_mhalo}. To do this we employ mass accretion histories
presented in \citet{BehWecCon12}, which provide excellent fits to the
results of $N-$body simulations. The resulting evolution in BH mass
is shown in Figure \ref{fig:mbh_growth} for three representative halo
masses, and for both model choices for the evolution in the Eddington
ratio. In the model, lower mass black holes grow more at late times
than higher mass black holes (this is sometimes referred to as BH
downsizing). In the model with a constant $\eta$,
the BHs in the most massive halos lose mass below $z\approx 1.5$,
while in the varying $\eta$ model all BHs grow, if only modestly, at
all epochs. This suggests that a model with evolving Eddington ratios
may be necessary to ensure self-consistent evolution. Models that
enforce self-consistent growth of BHs should shed further light on
this problem \citep[e.g.,][]{Mer04, MerHei08, Sha09}.
\begin{figure}[!t]
\begin{center}
\resizebox{3.5in}{!}{\includegraphics{f10.eps}}
\end{center}
\caption{BH growth in the best-fit model from $z=3.75$ to $z=0.5$.
Results are shown for two choices for the evolution in $\eta$ (see
the lower panel of Figure \ref{fig:dutycycle}). Notice that the
constant $\eta$ model produces massive BHs that lose mass at
$z<1.5$, suggesting that one or more of the assumptions of this
model are breaking down at low redshift. In contrast, the varying
$\eta$ model produces realistic BH growth at all epochs. In both
models lower mass BHs grow more at late times compared to higher
mass BHs, a phenomenon sometimes referred to as BH downsizing.}
\label{fig:mbh_growth}
\end{figure}
Figure \ref{fig:mag_mhalo} shows the evolution of the halo mass for
quasars of fixed luminosity. The trend of lower $M_h$ at higher $z$
was already apparent in Figure \ref{fig:mbh_mhalo}. Figure
\ref{fig:mag_mhalo} also emphasizes how the range of halo masses for a
fixed luminosity range narrows towards higher $z$. This effect is in
the opposite sense to models which tie the luminosity of quasars
directly to halo properties \citep[e.g.][]{Croton09}. Our model is
able to reproduce the observed $L$-independent clustering at low $z$
because the run of bias with halo mass also becomes shallower at low
$z$ for the halo masses of interest.
The evolution of the LF shown in Figure \ref{fig:lf} is driven by
evolution in the $M_{\rm BH}-M_{\rm gal}$ and $M_{\rm gal}-M_h$ relations and the
evolution of the halo mass function (evolution in the $L_Q-M_{\rm BH}$
relation is governed by evolution in $\eta$). The break in the model
quasar LF arises primarily due to the shape of the $M_{\rm gal}-M_h$
relation, and thus $L_\star$ quasars live in halos near the peak of
that relation, $M_h\sim 10^{12}M_\odot$. The peak of the $M_{\rm gal}-M_h$
relation changes very little with redshift \citep[e.g.,][]{BehWecCon12},
so that at fixed $M_h$ there is little change of $M_{\rm gal}$ with $z$.
However the luminosity of the break can evolve due to a combination of
evolution in the $M_{\rm BH}-M_{\rm gal}$ relation or the Eddington ratio. In our
fiducial model $\eta$ is constant and $M_{\rm BH}\propto (1+z)^2$ at fixed
$M_{\rm gal}$ and so the break in the luminosity function scales as
$(1+z)^2$. The faint-end slope of the model LF does not vary
significantly, in good agreement with the data, and the overall
normalization changes only modestly. The major departure from pure
luminosity evolution is the change in the slope of the bright end.
The bright-end slope appears shallower at higher $z$ both because the
data are probing closer to the (brighter) break of the LF and because
the $M_{\rm BH}-M_h$ relation becomes steeper at higher mass and redshift.
We also note that the bright end of the model LF is strongly
suppressed at $z<1.5$, and it is this suppression that is responsible
for much of the drop in the quasar number density to lower redshift.
The drop is a consequence of evolving Eddington ratios and the
shallowing of the $M_{\rm BH}-M_h$ relation at high mass, which is in turn
driven by the very slow growth of massive galaxies at low redshift.
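The $(1+z)^2$ scaling of the break quoted above follows because, at
constant Eddington ratio, $L_Q=\eta L_{\rm Edd}\propto \eta M_{\rm BH}$,
while $M_{\rm gal}$ at the peak of the $M_{\rm gal}-M_h$ relation is nearly
constant with redshift, so
\begin{equation}
L_\star(z) \propto M_{\rm BH}(M_{\rm gal}^\star,z) \propto (1+z)^2,
\end{equation}
with the final proportionality set by the evolving normalization of the
$M_{\rm BH}-M_{\rm gal}$ relation (middle panel of Figure
\ref{fig:dutycycle}).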
In fact, the model naturally reproduces the global rise and fall of
the quasar number density over the interval $0.5<z<4.75$. This
follows simply from the evolution in the $M_{\rm gal}-M_h$ and $L_Q-M_{\rm gal}$
relations and the halo mass function; it does not require strong
evolution in $t_Q$ at low $z$. Specifically we do not invoke a
decline in the cold gas fraction nor a decline in the major merger
rate at $z<2$ in order to reproduce the observed decline in the
abundance of quasars. While these physical processes may ultimately
be responsible for shaping the evolving relations between $L_Q$,
$M_{\rm BH}$, $M_{\rm gal}$ and $M_h$, they do not appear explicitly in the model.
\begin{figure}[!t]
\begin{center}
\resizebox{3.5in}{!}{\includegraphics{f11.eps}}
\end{center}
\caption{Relation between halo mass and redshift for quasars of a
fixed luminosity. At low redshift the range of halo masses hosting
quasars is very broad, but the distribution narrows substantially at
high redshift. This is simply a recasting of the relations shown in
Figure \ref{fig:mbh_mhalo}.}
\label{fig:mag_mhalo}
\end{figure}
Our model favors a different picture of how quasars inhabit massive
halos compared to previous work. Rather than having a preferred halo
mass scale (around $10^{12}\,M_\odot$) for quasar activity, the
present model allows for actively accreting black holes in a broad
range of galaxy and halo masses. The apparent preference for quasars
to live in halos of $10^{12}\,M_\odot$ arises from the shape of the
$M_{\rm gal}-M_h$ relation, which reflects the well known fact that galaxy
formation is most efficient in halos near $10^{12}M_\odot$, along with
the shape of the halo mass function. Specifically, above the knee in
the $M_{\rm gal}-M_h$ relation halos become exponentially rare, while below
the knee a large range in $M_{\rm gal}$ maps into a small range in $M_h$.
Thus, the {\it average} halo mass of quasars will be close to the
knee, despite the fact that quasars occupy a broad distribution of
halo masses.
Due to its simplicity the model predicts the clustering of any
population of quasars once the model parameters are fixed (e.g., by
the observed LF). Variations in the $L_Q-M_h$ scatter or $M_{\rm BH}-M_{\rm
gal}$ slope do not strongly affect the predicted clustering, meaning
that our model makes an essentially parameter-free prediction of the
clustering of quasars as a function of luminosity and redshift.
Overall the agreement between the predicted clustering and the
observations is good, though there is a tendency for the model to
slightly underpredict the observations and there is some tension at
the highest redshifts. This tension has been noted before -- the very
high amplitude of clustering measured at $z\sim 4$, in combination
with the abundance, requires quasars to have a duty cycle approaching
unity and almost no scatter in $L_Q$ at fixed $M_h$
\citep{WhiMarCoh08, ShaWeiShe10}. This is at odds with the very low
number densities but power-law decline seen in the luminosity function
at high $z$. If the clustering measurements can be strengthened,
possibly by cross-correlation of existing spectroscopic quasar samples
with deeper photometric quasar or galaxy samples, and the tension
persists, this will indicate that one of our assumptions is breaking
down as we approach the era of rapid black hole growth in the early
Universe.
We make no assumption about what triggers quasar activity, whether it
be a major merger of two gas rich galaxies, a secular instability in a
disk, or a critical halo mass. In general it is quite difficult to
translate abundance and clustering measurements into constraints on
the underlying mechanisms that trigger quasar activity. We can gain
some insight by the fact that our duty cycle, or quasar lifetime, is
relatively independent of redshift with a tendency to fall towards
higher redshifts rather than rise. If quasars are visible for a
fixed, but short, time and are triggered by mergers then we expect
$t_Q$ to scale with the merger rate \citep[c.f.][]{Car90}. The merger
rate for halos, per halo, per unit redshift is relatively flat
\citep{LacCol93, Per03, CohWhi05, WetCohWhi09, FakMa09, Hop10b}, so if
we can naively translate halo mergers into galaxy mergers we expect a
rate (per unit time) scaling as $(1+z)H(z)\propto (1+z)^{5/2}$ for
$z\gg 1$. If quasars are visible for a constant interval after each
merger then $t_Q\propto 1+z$, which is not in good agreement with our
best-fit relation. Of course, galaxy merger rates can differ from
halo merger rates. A recent analysis by \citet{Hop10a} suggests a
rate per unit time scaling as $(1+z)^{1.5-2.0}$, which would lead to
slower evolution in $t_Q$, as we observe. Such agreement is not
conclusive however, and we cannot rule out secular processes or a
time-varying combination of multiple triggers.
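For reference, the $(1+z)^{5/2}$ scaling quoted above follows from converting a merger rate per unit redshift into a rate per unit time; as a brief sketch, using $|dz/dt| = (1+z)H(z)$ and the matter-dominated limit $H(z)\propto(1+z)^{3/2}$ valid at $z\gg1$:

```latex
% Rate per unit time from a (roughly constant) rate per unit redshift:
% dN/dt = (dN/dz) |dz/dt|, with |dz/dt| = (1+z) H(z).
\begin{equation*}
  \frac{dN}{dt} = \frac{dN}{dz}\left|\frac{dz}{dt}\right|
    \propto (1+z)\,H(z) \propto (1+z)^{5/2} \qquad (z \gg 1).
\end{equation*}
```

Combining this event rate with a fixed visibility interval and $t_H\propto(1+z)^{-3/2}$ then gives the $t_Q\propto 1+z$ scaling quoted in the text.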
\subsection{Comparison to Previous Work}
The success of our model in explaining the basic demographics of
quasars with relatively few, smoothly varying inputs goes a long way
to explaining the manner in which forward modeling of the quasar
population can succeed with relatively little fine tuning. Both
semi-analytic models \citep[e.g.,][]{CatHaeRee99, KauHae00, KauHae02,
VolHaaMad03, BroSomFab04, Granato04, Croton06, MonFonTaf07, Mal07,
Bon09, Fan12, Hir12} and hydrodynamic simulations
\citep[e.g.,][]{Sij07, DeGraf11} adjust their subgrid models to ensure
a reasonable match to the $M_{\rm gal}-M_h$ relation over a broad redshift
range, thus ensuring that galaxies populate halos in approximately the
correct manner. All of the models introduce a $M_{\rm BH}-M_{\rm gal}$ relation
through one of, or a combination of, common feeding mechanisms and
feedback-limited BH growth. As we have shown, with these two
ingredients even simple lightcurve models are sufficient to match the
basic demographics of quasars over a broad range of luminosity and
redshift. A good match to the data can be found for a wide range of
scatter in $M_{\rm BH}-M_{\rm gal}$, or evolution in the scatter. Conversely, if
a model has difficulties reproducing the stellar mass function and its
evolution then it will need to incorporate mass-dependent quasar
physics that counteracts this deficiency in order to match the
observed quasar properties.
By contrast, models that tie black hole properties directly to the
underlying halo population need to introduce more complexity in order
to reproduce the observed properties of quasars. Recent examples
include \citet{Lid06}, \citet{Croton09}, and \citet{She09}, who all
need to include mass- and redshift-dependent duty cycles to explain
the shape and evolution of the quasar luminosity function. While our
model and theirs can produce qualitatively similar fits to the basic
data, the explanations for the observed behaviors differ. One of the
most basic differences is the range of halos that host active quasars,
and its evolution (discussed above). This in turn affects how each
model explains the evolution of the quasar LF and the
luminosity-independence of quasar clustering.
Conventional wisdom is that the quasar duty cycle is required by the
data to be a (strong) function of luminosity \citep[e.g.][]{Ade05,
Hop05, Lid06, Croton09, She09}. In our model this is not the case.
There are two major reasons for this. The first is that we obtain a
flattening of the $b(L)$ relation from the steepness of the $L_Q-M_h$
relation at low $L_Q$ and the second is the intrinsic scatter
\footnote{This scatter may arise due to time-dependent processes,
i.e.~a high $L_Q$ object at the time of observation is not required
to have always been or continue to be high $L_Q$.} in that
relation. Thus our model is not a ``light bulb'' model in the sense of
\citet{Hop05} and \citet{Lid06}, who reserve that term for a model in
which there is no scatter in $L_Q-M_h$. However scatter in the
$L_Q-M_h$ relation is {\it expected\/}, due to the observed scatter in
$M_{\rm BH}-M_{\rm gal}$ and variation in Eddington ratios if from no other
source; for this reason we refer to our model as a ``scattered'' light
bulb model. This expected level of scatter is enough to make $b(L)$
flat until extremely high $L$ or correspondingly low $\bar{n}_Q$
\citep[a similar behavior is seen in the model of][which is also not
strictly a light bulb model in the above sense]{Croton09}. For this
reason we are able to obtain a model in which both the quasar lifetime
and the quasar clustering are independent of $L$.
\citet{Aird12} studied $X-$ray selected active galactic nuclei (AGN)
as a function of galaxy mass at $z\sim0.6$ and found no preference for
AGN to be found in galaxies of a particular mass at fixed Eddington
ratio, even for ratios as high as $\eta\gtrsim0.1$. Their results
suggest a duty cycle that does not depend strongly on galaxy mass, in
excellent agreement with our results.
Finally, the apparent preference for quasars to live in halos of
$10^{12}M_\odot$, which has been noted by many authors, arises in our
model from the shape of the $M_{\rm gal}-M_h$ relation, which reflects the
well-known fact that galaxy formation is most efficient in halos of
$10^{12}M_\odot$, in combination with the halo mass function. Within
the context of our model this cannot be taken as evidence for a merger
driven origin to quasar activity, despite the fact that it is close to
the small group scale where mergers may be more efficient, because it
is not believed that the knee of the $M_{\rm gal}-M_h$ relation is related
to mergers.
\subsection{Mock Catalogs}
While our intent has been to understand the quasar phenomenon, the
model can also be used for the creation of mock catalogs from N-body
simulations. The simplicity of the model makes it easy to rapidly
generate redshift-dependent quasar populations that have the correct
luminosity function and clustering, given halo catalogs at the
redshifts of interest. The steps for creating such a catalog are
straightforward:
\begin{itemize}
\item[1.] Adopt the redshift-dependent $M_{\rm gal}-M_h$ relation from
\citet{BehWecCon12}, including scatter in $M_{\rm gal}$ at fixed $M_h$.
\item[2.] Use the $M_{\rm gal}-M_{\rm BH}$ relation from Equation
\ref{eqn:mbh_mgal} to assign BHs to galaxies, including 0.3 dex of
scatter in $M_{\rm BH}$ at fixed $M_{\rm gal}$. Fix the normalization of this
relation to the local value, with no redshift evolution (because we
advocate using the varying $\eta$ model; see below).
\item[3.] Randomly turn a fraction, $f_{\rm on}$, of the BHs into
active quasars. As evident from Figure \ref{fig:dutycycle}, the
quasar lifetime is approximately constant at $3\times10^7$ yr at
$z<3$; we therefore advocate fixing $t_Q$ to this value. One then
determines the duty cycle via $f_{\rm on}(z)=t_Q/t_H(z)$.
\item[4.] For the active BHs, convert $M_{\rm BH}$ into $L_Q$ using Equation
\ref{eqn:Lq_Mbh}, with an additional 0.3 dex of scatter in $L_Q$ at
fixed $M_{\rm BH}$. Use the redshift-dependent Eddington ratio, $\eta$,
shown in the bottom panel of Figure \ref{fig:dutycycle}. We
advocate using the varying $\eta$ model because this model produces
self-consistent BH growth at all redshifts (see Figure
\ref{fig:mbh_mhalo}).
\end{itemize}
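The four steps above can be sketched in code. The following is a minimal, illustrative Python implementation in which the tabulated $M_{\rm gal}-M_h$ relation of \citet{BehWecCon12} and the $M_{\rm BH}-M_{\rm gal}$ and $L_Q-M_{\rm BH}$ relations (Equations \ref{eqn:mbh_mgal} and \ref{eqn:Lq_Mbh}) are replaced by toy stand-ins, the Eddington ratio is held fixed rather than redshift dependent, and $t_H(z)$ uses a crude matter-dominated approximation; only the overall structure of steps 1-4 is meant to be faithful:

```python
import numpy as np

rng = np.random.default_rng(1234)

def mgal_from_mh(log_mh):
    """Toy stand-in for the redshift-dependent Behroozi et al. M_gal-M_h
    relation: a double power law that is most efficient near 1e12 Msun."""
    x = np.asarray(log_mh) - 12.0
    return 10.4 + x - np.log10(10.0 ** (-x) + 10.0 ** (0.6 * x))

def make_mock_quasars(log_mh, z, t_q_yr=3e7, eta=0.3):
    """Populate a halo catalog (log10 halo masses, Msun) with mock quasars
    following steps 1-4: galaxies, black holes, duty cycle, luminosities."""
    log_mh = np.asarray(log_mh)
    n = len(log_mh)
    # Step 1: M_gal from M_h, with scatter in M_gal at fixed M_h.
    log_mgal = mgal_from_mh(log_mh) + rng.normal(0.0, 0.2, n)
    # Step 2: M_BH from M_gal with 0.3 dex scatter (toy local normalization:
    # a linear relation with M_BH ~ 1e8 Msun at M_gal ~ 10^10.5 Msun).
    log_mbh = 8.0 + (log_mgal - 10.5) + rng.normal(0.0, 0.3, n)
    # Step 3: duty cycle f_on = t_Q / t_H(z); crude matter-dominated age.
    t_h_yr = 9.3e9 / (1.0 + z) ** 1.5
    f_on = min(t_q_yr / t_h_yr, 1.0)
    active = rng.random(n) < f_on
    # Step 4: L_Q = eta * L_Edd with 0.3 dex scatter in L_Q at fixed M_BH;
    # L_Edd = 1.26e38 erg/s per solar mass.
    log_lq = np.full(n, -np.inf)
    log_lq[active] = (np.log10(eta) + np.log10(1.26e38) + log_mbh[active]
                      + rng.normal(0.0, 0.3, active.sum()))
    return log_mbh, active, log_lq
```

A real catalog would substitute the tabulated $M_{\rm gal}-M_h$ relation, the fitted $M_{\rm BH}-M_{\rm gal}$ normalization, and the redshift-dependent $\eta(z)$ from Figure \ref{fig:dutycycle} for the toy choices made here.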
When simulations are populated with quasars in this way, the mock
quasar LF and clustering will agree with all existing LF and
clustering data at $z<3$. In order to produce mock catalogs at higher
redshifts one will need to include a drop in $t_Q$ as shown in Figure
\ref{fig:dutycycle}. Such mock catalogs should prove useful in the
context of ongoing and future planned surveys such as BOSS, bigBOSS,
DES, Pan-STARRS, SUMIRE and LSST.
\section{Summary}
\label{sec:conclusions}
We have presented a simple model for quasars with the aim of
understanding to what extent their demographics arise naturally from
what is known about the evolution of galaxies, along with plausible
assumptions about how black holes inhabit them. The key feature of
the model is that the properties of black holes are set by those of
their host galaxies rather than their host halos \citep[see
also][]{Whi12}. In the model, BH mass is linearly related to galaxy
mass and BHs shine at a fixed fraction of the Eddington luminosity
during accretion episodes. Galaxies are related to dark matter halos
via empirically constrained relations \citep{BehWecCon12}. The model
has only two free parameters at each redshift, the normalization of
the $M_{\rm BH}-M_{\rm gal}$ relation and the duty cycle, both of which are
tightly constrained by observations of the quasar LF. We have
explored two possibilities for the evolution of the Eddington ratio
with redshift, finding physically self-consistent BH growth for a
model in which the Eddington ratio increases with increasing redshift.
The model provides an excellent fit to the LF data for $0.5<z<6$ and
reproduces the observed clustering at intermediate redshifts with no
additional adjustable parameters.
The best-fit model parameters imply a quasar lifetime of approximately
$3\times10^7\,$yr at $z<3$. This may be expected if the growth of the
galaxy during a quasar event only allows $\sim 1$ e-folding of black
hole growth before feedback halts quasar activity.
There are several implications of our model, which we now summarize:
\begin{itemize}
\item Actively accreting BHs are equally likely to exist in galaxies
  and dark matter halos over a wide range in masses. The BHs in
halos more massive than $10^{13.5}M_\odot$ contribute very little to
the observed quasar LF at any redshift due to their rarity. The
quasar LF therefore places weak constraints on the quasar duty cycle
in massive halos.
\item The break in the quasar LF is a reflection of the break in the
$M_{\rm gal}-M_h$ relation at $M_h\sim10^{12}M_\odot$ and the observed
evolution of the LF primarily reflects the $(1+z)^2$ scaling of
$L_Q/M_{\rm gal}$ and the change in shape of the $M_{\rm gal}-M_h$ relation.
The bright-end slope of the quasar LF appears shallower at high $z$
both because the data are probing closer to the (brighter) break in
the LF and because the $M_{\rm BH}-M_h$ relation becomes steeper at higher
mass and redshift.
\item Our model naturally reproduces the global rise and fall of the
quasar number density over the interval $0.5<z<6$. This follows
simply from the evolution in the $L_Q-M_h$ relation and does not
require strong evolution in the quasar lifetime at $z<3$. The
bright end of the model quasar LF is strongly suppressed at $z<1.5$,
due to the slow growth of massive galaxies, and this is responsible
for much of the drop in quasar number density to low redshift.
\item The apparent preference for quasars to live in halos of
$10^{12}M_\odot$ arises from the shape of the $M_{\rm gal}-M_h$ relation,
which reflects the well-known fact that galaxy formation is most
efficient near $10^{12}M_\odot$, in conjunction with the steepness
of the halo mass function at high mass.
\item There is some tension between our model and the amplitude of
clustering observed at $z\sim 4$; the latter, taken at face value,
suggests that quasars have a duty cycle approaching unity and almost
no scatter in $L_Q-M_h$ while the power-law fall-off of the bright
end of the luminosity function suggests otherwise. Future
clustering measurements in this redshift range will be crucial tests
of the model.
\item The nearly constant inferred quasar lifetimes as a function of
luminosity and redshift (at $z<3$) should provide valuable
constraints on the triggering mechanisms for quasars.
\end{itemize}
Measurements of quasar demographics at higher redshifts and lower
luminosities will help to further constrain and test our model. In
particular, stronger constraints on the quasar LF at $z>4$, on quasar
clustering as a function of luminosity and redshift, and on the
$M_{\rm BH}-M_{\rm gal}$ relation as a function of redshift, will provide very
strong constraints on the model parameters. Moreover, with such
observational constraints in hand, we will be able to directly
constrain the mean Eddington ratio as a function of redshift and the
scatter as a function of redshift, providing further insight into the
link between quasars, black holes, galaxies, and dark matter halos.
\acknowledgments
We thank Nic Ross and Yue Shen for providing their data in electronic
form, Adam Myers, Matt McQuinn, and Yue Shen for comments on an
earlier draft, and Tom Targett for his literature compilation of data
that went into Figure 4. The referee is thanked for comments that
improved the quality of the manuscript. M.W. was supported by the NSF
and NASA. This work made extensive use of the NASA Astrophysics Data
System and of the {\tt astro-ph} preprint archive at {\tt arXiv.org}.
The analysis made use of the computing resources of the National
Energy Research Scientific Computing Center.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 9,142 |
namespace MusicStoreB2C
{
public class AppSettings
{
public string SiteTitle { get; set; }
// Cache database query results by default.
public bool CacheDbResults { get; set; } = true;
}
} | {
"redpajama_set_name": "RedPajamaGithub"
} | 5,304 |
SpaceX CRS-21, also known as SpX-21, was the twenty-first mission of the Dragon cargo spacecraft to the International Space Station, launched on 6 December 2020. It was the first SpaceX launch under the company's second Commercial Resupply Services contract (CRS-2) with NASA, and the first carried out with the new-generation Dragon 2 cargo spacecraft.
Payload and its subsequent use
The spacecraft delivered 2972 kg of cargo to the ISS, of which 1882 kg was in the pressurized compartment:
food and crew supplies – 364 kg
materials for scientific research – 953 kg
spacewalk equipment – 120 kg
station hardware and parts – 317 kg
computers and accessories – 46 kg
Russian cargo – 24 kg.
Scientific equipment for experiments to be conducted:
Bishop Airlock – a commercial airlock module developed by Nanoracks: a bell-shaped metal structure on the station's exterior surface.
BioAsteroid – a facility for an experiment on extracting minerals from meteorites and other space objects.
HemoCue – a device for counting white blood cells under gravity conditions.
The Brain Organoid experiment – an experiment studying the activity of the human brain.
Cardinal Heart – a study of how heart tissue responds to medications under microgravity.
Subsa-Brains will study the impact of micrometeoroids and space debris, the damage they can cause, and the repair of materials by soldering methods.
Three-Dimensional Microbial Monitoring – a project that aims to build a three-dimensional map of the bacteria and other metabolites present in different parts of the ISS and to determine how spaceflight conditions affect the various identified species.
Mission timeline
The launch took place on 6 December 2020 at 16:17:08 (UTC). After launch, the rocket's first stage successfully returned to Earth (landing on a floating barge).
On 7 December at 18:40 (UTC) the spacecraft successfully docked with the ISS. The docking was performed autonomously, monitored by the American astronauts of Expedition 64. This was the first autonomous docking of a SpaceX spacecraft, and the first time two Dragon 2 spacecraft were docked to the ISS simultaneously.
On 12 January 2021 the spacecraft was undocked from the ISS under the control of the Expedition 64 crew. For the first time, the departure was carried out autonomously, without the use of the Canadarm2 robotic arm. On 14 January at 01:26 (UTC) the spacecraft successfully returned to Earth, splashing down in the Gulf of Mexico near Florida. It brought back 2002 kg of cargo with the results of scientific experiments.
See also
International Space Station – the space station to which the cargo is delivered.
Dragon – the spacecraft performing this mission.
Falcon 9 – the launch vehicle that launches the Dragon cargo spacecraft.
SpaceX – the company that built and operates the Dragon spacecraft and the Falcon 9 launch vehicle.
Gallery
Notes
Sources and links
SpaceX CRS-20 Mission
Spaceflight in 2020
December 2020
SpaceX
Cargo resupply spacecraft to the ISS | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 9,701 |
<?php
namespace Shaygan\TelegramBotApiBundle\Type;
abstract class Type implements TypeInterface
{
/**
* @param \stdClass|null $obj
*/
public function __construct(\stdClass $obj = null)
{
if ($obj instanceof \stdClass) {
$this->loadResult($obj);
}
}
/**
* @param \stdClass $obj
*/
public function loadResult(\stdClass $obj)
{
foreach ($obj as $key => $value) {
$this->$key = $value;
}
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 510 |
/***********************************************************************
created: Sat Mar 12 2005
author: Paul D Turner
*************************************************************************/
/***************************************************************************
* Copyright (C) 2004 - 2006 Paul D Turner & The CEGUI Development Team
*
* Permission is hereby granted, free of charge, to any person obtaining
* a copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sublicense, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
* IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
***************************************************************************/
#include <XmlHandler.h>
#include <System.h>
namespace duilib2
{
XmlHandler::XmlHandler()
{
}
XmlHandler::~XmlHandler()
{
}
void XmlHandler::handleContainer(const RawDataContainer& source)
{
System::getSingleton().getXmlParser()->parseXml(*this, source);
}
// The default handlers below are no-ops; subclasses override only the callbacks they need.
void XmlHandler::elementStart(const String& /*element*/, const XmlAttributes& /*attributes*/)
{
}
void XmlHandler::elementEnd(const String& /*element*/)
{
}
void XmlHandler::text(const String& /*text*/)
{
}
} // namespace duilib2
| {
"redpajama_set_name": "RedPajamaGithub"
} | 76 |
WGA should have told them publicly to bag it! Then, quietly, get interested parties together to work on this. Then go to the government quietly and lay out the necessary activities that need to be regulated federally.
Seriously, though, just go back to the Alar scare. Raley's is promoting Nutriclean Certified Produce. Safeway and others began to advertise that their produce was being inspected by Primus, as if to say that this was better for their customers. And I promise you that this is going to come back again, only this time, it will be that "company A's buyers" are imposing more stringent standards than "company B's". | {
"redpajama_set_name": "RedPajamaC4"
} | 4,271 |
require 'test_helper'
require 'action_controller'
parameters_classes = [Delocalize::Parameters]
if defined?(ActionController::Parameters)
# FIXME Can this happen automatically, e.g. by loading the Railtie?
ActionController::Parameters.send(:include, Delocalize::ParameterDelocalizing)
parameters_classes << ActionController::Parameters
end
puts "Testing parameter classes: #{parameters_classes.inspect}"
parameters_classes.each do |parameters_class|
describe parameters_class do
before do
I18n.locale = I18n.default_locale
Time.zone = 'Berlin' # make sure everything works as expected with TimeWithZone
end
it "delocalizes top level params based on the given options" do
params = parameters_class.new(:released_on => '21. Mai 1986', :available_until => '25. Dezember 2013, 23:59 Uhr', :price => '1.299,99')
delocalized_params = params.delocalize(:released_on => :date, :available_until => :time, :price => :number)
delocalized_params[:released_on].must_equal Date.civil(1986, 5, 21)
delocalized_params[:available_until].must_equal Time.zone.local(2013, 12, 25, 23, 59)
delocalized_params[:price].must_equal '1299.99'
end
it "delocalizes nested params based on the given options" do
params = parameters_class.new(:product => { :released_on => '21. Mai 1986', :available_until => '25. Dezember 2013, 23:59 Uhr', :price => '1.299,99' })
delocalized_params = params.delocalize(:product => { :released_on => :date, :available_until => :time, :price => :number })
delocalized_params[:product][:released_on].must_equal Date.civil(1986, 5, 21)
delocalized_params[:product][:available_until].must_equal Time.zone.local(2013, 12, 25, 23, 59)
delocalized_params[:product][:price].must_equal '1299.99'
end
it "delocalizes field-for type params based on the given options" do
params = parameters_class.new(
:product => {
variant_attributes: {
"0" => { :released_on => '21. Mai 1986', :available_until => '25. Dezember 2013, 23:59 Uhr', :price => '1.299,99' },
"1" => { :released_on => '1. Juni 2001', :available_until => '12. November 2014, 00:00 Uhr', :price => '1.099,01' },
}
}
)
delocalized_params = params.delocalize(:product => { :variant_attributes => { :released_on => :date, :available_until => :time, :price => :number } })
delocalized_params[:product][:variant_attributes]['0'][:released_on].must_equal Date.civil(1986, 5, 21)
delocalized_params[:product][:variant_attributes]['0'][:available_until].must_equal Time.zone.local(2013, 12, 25, 23, 59)
delocalized_params[:product][:variant_attributes]['0'][:price].must_equal '1299.99'
delocalized_params[:product][:variant_attributes]['1'][:released_on].must_equal Date.civil(2001, 6, 1)
delocalized_params[:product][:variant_attributes]['1'][:available_until].must_equal Time.zone.local(2014, 11, 12, 00, 00)
delocalized_params[:product][:variant_attributes]['1'][:price].must_equal '1099.01'
end
it "delocalizes nested params on the key itself based on the given options" do
params = parameters_class.new(:product => { :released_on => '21. Mai 1986', :available_until => '25. Dezember 2013, 23:59 Uhr', :price => '1.299,99' })
product_params = params[:product].delocalize(:released_on => :date, :available_until => :time, :price => :number)
product_params[:released_on].must_equal Date.civil(1986, 5, 21)
product_params[:available_until].must_equal Time.zone.local(2013, 12, 25, 23, 59)
product_params[:price].must_equal '1299.99'
end
it "delocalizes deeply nested params for one-to-one based on the given options" do
params = parameters_class.new(:parent => { :child => { :child_date => '21. Mai 1986', :child_time => '25. Dezember 2013, 23:59 Uhr', :child_number => '1.299,99' } })
delocalized_params = params.delocalize(:parent => { :child => { :child_date => :date, :child_time => :time, :child_number => :number } })
delocalized_params[:parent][:child][:child_date].must_equal Date.civil(1986, 5, 21)
delocalized_params[:parent][:child][:child_time].must_equal Time.zone.local(2013, 12, 25, 23, 59)
delocalized_params[:parent][:child][:child_number].must_equal '1299.99'
end
it "delocalizes deeply nested params for one-to-one on the key itself based on the given options" do
params = parameters_class.new(:parent => { :child => { :child_date => '21. Mai 1986', :child_time => '25. Dezember 2013, 23:59 Uhr', :child_number => '1.299,99' } })
parent_params = params[:parent].delocalize(:child => { :child_date => :date, :child_time => :time, :child_number => :number })
parent_params[:child][:child_date].must_equal Date.civil(1986, 5, 21)
parent_params[:child][:child_time].must_equal Time.zone.local(2013, 12, 25, 23, 59)
parent_params[:child][:child_number].must_equal '1299.99'
end
it "delocalizes all the things at all the levels of all the types" do
delocalize_options = {
:top_level_date => :date,
:top_level_time => :time,
:top_level_number => :number,
:parent => {
:parent_date => :date,
:parent_time => :time,
:parent_number => :number,
:child => {
:child_date => :date,
:child_time => :time,
:child_number => :number
}
}
}
params = parameters_class.new(
:top_level_date => '21. Mai 1986',
:top_level_time => '25. Dezember 2013, 23:59 Uhr',
:top_level_number => '1.299,99',
:parent => {
:parent_date => '21. Mai 2004',
:parent_time => '24. Dezember 2013, 23:59 Uhr',
:parent_number => '999,99',
:child => {
:child_date => '21. Mai 2011',
:child_time => '31. Dezember 2013, 23:59 Uhr',
:child_number => '9.999'
}
}
)
delocalized_params = params.delocalize(delocalize_options)
delocalized_params[:top_level_date].must_equal Date.civil(1986, 5, 21)
delocalized_params[:top_level_time].must_equal Time.zone.local(2013, 12, 25, 23, 59)
delocalized_params[:top_level_number].must_equal '1299.99'
delocalized_params[:parent][:parent_date].must_equal Date.civil(2004, 5, 21)
delocalized_params[:parent][:parent_time].must_equal Time.zone.local(2013, 12, 24, 23, 59)
delocalized_params[:parent][:parent_number].must_equal '999.99'
delocalized_params[:parent][:child][:child_date].must_equal Date.civil(2011, 5, 21)
delocalized_params[:parent][:child][:child_time].must_equal Time.zone.local(2013, 12, 31, 23, 59)
delocalized_params[:parent][:child][:child_number].must_equal '9999'
end
# TODO Figure out deeply nested params for one-to-many relations.
# The problem is that one-to-many relations may be given as a hash or an array. Delocalize should
# be able to handle both cases just fine.
it "fails for a non-existent type" do
params = parameters_class.new(:available_until => '25. Dezember 2013, 23:59 Uhr')
->{ params.delocalize(:available_until => :datetime) }.must_raise(Delocalize::ParserNotFound)
end
it "keeps unconfigured parameters as they are while still delocalizing others" do
params = parameters_class.new(:released_on => '1986-05-21', :price => '1.299,99')
delocalized_params = params.delocalize(:price => :number)
delocalized_params[:released_on].must_equal '1986-05-21'
delocalized_params[:price].must_equal '1299.99'
end
it "doesn't raise when nested params given and which aren't defined in options" do
params = parameters_class.new(:parent => { :parent_date => '21. Mai 2004' })
## Should not throw an error:
params.delocalize({})
end
it "delocalizes arrays" do
params = parameters_class.new(:location => ['13,456', '51,234'], :interval => ['25. Dezember 2013', '31. Januar 2014'])
delocalized_params = params.delocalize(:location => [:number], interval: [:date])
delocalized_params[:location].must_equal ['13.456', '51.234']
delocalized_params[:interval].must_equal [Date.civil(2013, 12, 25), Date.civil(2014, 1, 31)]
end
it "keeps invalid dates in the params hash" do
params = parameters_class.new(:first_date => "asdf", :second_date => "02.0.2017", :third_date => "02.123.2017")
delocalized_params = params.delocalize(:first_date => :date, :second_date => :date, :third_date => :date)
delocalized_params[:first_date].must_equal "asdf"
delocalized_params[:second_date].must_equal "02.0.2017"
delocalized_params[:third_date].must_equal "02.123.2017"
end
end
end
| {
"redpajama_set_name": "RedPajamaGithub"
} | 5,790 |
Q: Ansible: How to filter dict2items and run playbook only for the matched values I have a dict playbook which looks like this:
x_php_versions_installed:
ea-php71:
- ea-php71-php-bcmath
- ea-php71-php-xmlrpc
- ea-php71-php-zip
- pecl-memcached
- pecl-imagick
ea-php72:
- ea-php72-php-cli
- ea-php72-php-common
- ea-php72-php-curl
- pecl-imagick
I would like to filter them so that only the item.value entries containing the 'ea' string are printed, and nothing else. My task looks like this:
- name: Write out only the ea packages
debug:
msg: '{{ item.value }}'
when: item.value | selectattr(item.value, 'contains', 'ea')
loop: '{{ x_php_versions_installed | dict2items }}'
But it does not work, because it will list all of the packages, not only the ea ones. The expected answer should look like this:
...
"msg": [
"ea-php71-php-bcmath",
"ea-php71-php-xmlrpc",
"ea-php71-php-zip"
]
...
"msg": [
"ea-php72-php-cli",
"ea-php72-php-common",
"ea-php72-php-curl"
]
...
Another possibility is to filter out the 'pecl' string; it will give me the same result and it also works fine.
A: Q: "Filter item.value which contains ea string."
A: The task below does the job
- debug:
msg: "{{ item.value|select('match','^ea-(.*)$')|list }}"
loop: "{{ x_php_versions_installed|dict2items }}"
gives (abridged)
msg:
- ea-php71-php-bcmath
- ea-php71-php-xmlrpc
- ea-php71-php-zip
msg:
- ea-php72-php-cli
- ea-php72-php-common
- ea-php72-php-curl
Note: The test match by default "succeeds if it finds the pattern at the beginning of the string". The task below gives the same result
- debug:
msg: "{{ item.value|select('match', 'ea-')|list }}"
loop: "{{ x_php_versions_installed|dict2items }}"
Q: "Filter out the pecl string."
A: Change the filter to reject and fit the regex. For example, the task below gives the same result
- debug:
msg: "{{ item.value|reject('match','^pecl-(.*)$')|list }}"
loop: "{{ x_php_versions_installed|dict2items }}"
Notes:
*
*Select the lists without iteration. Declare the variables
x_php_versions_installed_keys: "{{ x_php_versions_installed.keys()|list }}"
x_php_versions_installed_ea_vals: "{{ x_php_versions_installed|dict2items|
map(attribute='value')|
map('select', 'match', 'ea-')|list }}"
x_php_versions_installed_ea: "{{ dict(x_php_versions_installed_keys|
zip(x_php_versions_installed_ea_vals)) }}"
gives
x_php_versions_installed_ea:
ea-php71:
- ea-php71-php-bcmath
- ea-php71-php-xmlrpc
- ea-php71-php-zip
ea-php72:
- ea-php72-php-cli
- ea-php72-php-common
- ea-php72-php-curl
*
*Example of a complete playbook for testing
- hosts: localhost
vars:
x_php_versions_installed:
ea-php71:
- ea-php71-php-bcmath
- ea-php71-php-xmlrpc
- ea-php71-php-zip
- pecl-memcached
- pecl-imagick
ea-php72:
- ea-php72-php-cli
- ea-php72-php-common
- ea-php72-php-curl
- pecl-imagick
x_php_versions_installed_keys: "{{ x_php_versions_installed.keys()|list }}"
x_php_versions_installed_ea_vals: "{{ x_php_versions_installed|dict2items|
map(attribute='value')|
map('select', 'match', 'ea-')|list }}"
x_php_versions_installed_ea: "{{ dict(x_php_versions_installed_keys|
zip(x_php_versions_installed_ea_vals)) }}"
tasks:
- debug:
msg: "{{ item.value|select('match','^ea-(.*)$')|list }}"
loop: "{{ x_php_versions_installed|dict2items }}"
- debug:
msg: "{{ item.value|select('match', 'ea-')|list }}"
loop: "{{ x_php_versions_installed|dict2items }}"
- debug:
msg: "{{ item.value|reject('match','^pecl-(.*)$')|list }}"
loop: "{{ x_php_versions_installed|dict2items }}"
- debug:
msg: "{{ item.value|reject('match','pecl-')|list }}"
loop: "{{ x_php_versions_installed|dict2items }}"
- debug:
var: x_php_versions_installed_ea
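To make explicit what the `dict2items`/`map`/`select`/`zip` pipeline in the variable declarations above computes, here is an equivalent pure-Python transformation (illustration only; in a playbook, Ansible evaluates the Jinja2 filters themselves):

```python
# Pure-Python equivalent of the Jinja2 pipeline: dict2items,
# map(attribute='value'), select('match', 'ea-'), zip, dict.
x_php_versions_installed = {
    "ea-php71": ["ea-php71-php-bcmath", "ea-php71-php-xmlrpc",
                 "ea-php71-php-zip", "pecl-memcached", "pecl-imagick"],
    "ea-php72": ["ea-php72-php-cli", "ea-php72-php-common",
                 "ea-php72-php-curl", "pecl-imagick"],
}

# Keep only the packages whose name starts with "ea-" in each list.
x_php_versions_installed_ea = {
    key: [pkg for pkg in packages if pkg.startswith("ea-")]
    for key, packages in x_php_versions_installed.items()
}
```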
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 6,922 |
#include "CreateAnythingFromStringAction.h"
#include "AnythingUtils.h"
#include "Renderer.h"
#include "StringStream.h"
//---- CreateAnythingFromStringAction ---------------------------------------------------------------
RegisterAction(CreateAnythingFromStringAction);
CreateAnythingFromStringAction::CreateAnythingFromStringAction(const char *name) : Action(name) { }
CreateAnythingFromStringAction::~CreateAnythingFromStringAction() { }
bool CreateAnythingFromStringAction::DoExecAction(String &transitionToken, Context &ctx, const ROAnything &config)
{
StartTrace(CreateAnythingFromStringAction.DoExecAction);
TraceAny(config, "config:");
ROAnything roString;
String string;
if (config.LookupPath(roString, "String")) {
Renderer::RenderOnString(string, ctx, roString);
} else {
SystemLog::Warning("CreateAnythingFromStringAction::DoExecAction: String slot not defined in config!");
return false;
}
Trace("resulting string before creating the any:[" << string << "]");
Anything newAny;
if (string.Length()) {
IStringStream is(string);
newAny.Import(is);
}
TraceAny(newAny, "newAny:");
ROAnything destConfig;
if (!config.LookupPath(destConfig, "Destination")) {
SystemLog::Warning("CreateAnythingFromStringAction::DoExecAction: Destination slot not defined in config!");
return false;
}
StorePutter::Operate(newAny, ctx, destConfig);
return true;
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 8,045 |
{"url":"https:\/\/bayesf22-notebook.classes.andrewheiss.com\/bayes-rules\/12-chapter.html","text":"Published\n\nOctober 7, 2022\n\n(Original chapter)\n\nlibrary(bayesrules)\nlibrary(tidyverse)\nlibrary(brms)\nlibrary(cmdstanr)\nlibrary(rstanarm)\nlibrary(marginaleffects)\nlibrary(broom)\nlibrary(broom.mixed)\nlibrary(tidybayes)\nlibrary(ggdist)\nlibrary(patchwork)\nlibrary(ggh4x) # For coord_axes_inside() and nested facets\nlibrary(geomtextpath)\nlibrary(ggrepel)\n\n# Plot stuff\nclrs <- MetBrewer::met.brewer(\"Lakota\", 6)\ntheme_set(theme_bw())\n\n# Tell bayesplot to use the Lakota palette for things like pp_check()\n# bayesplot::color_scheme_set(clrs)\n\n# Tell bayesplot to use the viridis rocket palette for things like pp_check()\nviridisLite::viridis(6, option = \"rocket\", end = 0.85, direction = -1) |>\n# Take off the trailing \"FF\" in the hex codes\nmap_chr(~str_sub(., 1, 7)) |>\nbayesplot::color_scheme_set()\n\n# Seed stuff\nset.seed(1234)\nBAYES_SEED <- 1234\n\ndata(equality_index, package = \"bayesrules\")\n\nequality <- equality_index |>\n# Omit California because it has so many laws already\nfilter(state != \"california\")\n\n## The general setup\n\nWe want to model the number of LGBTQ+ anti-discrimination laws in states based on how urban a state is and its historical partisan voting patterns. Here\u2019s the general relationship. 
A regular straight OLS line doesn't fit the data well, but because the outcome is a count, and because the general relationship is curvy, Poisson regression will work.

```r
ggplot(equality, aes(x = percent_urban, y = laws)) +
  geom_point(aes(fill = historical), pch = 21, size = 4, color = "white") +
  geom_smooth(aes(color = "Poisson regression"), se = FALSE, method = "glm",
              method.args = list(family = "poisson")) +
  geom_smooth(aes(color = "Normal regression"), se = FALSE, method = "lm") +
  scale_fill_manual(values = c(clrs[6], clrs[3], clrs[2])) +
  scale_color_manual(values = c(clrs[5], clrs[4])) +
  labs(x = "Percent urban", y = "Count of laws", color = NULL, fill = "Party") +
  theme(legend.position = "bottom")
```

## 12.1: Building the Poisson regression model

### Prelude II: How to interpret Poisson coefficients

Before specifying priors, it's helpful to know what these coefficients actually mean. Here's a basic frequentist model, with coefficients both logged and exponentiated:

```r
model_basic <- glm(laws ~ percent_urban + historical,
                   data = equality, family = poisson)

tidy(model_basic)
## # A tibble: 4 × 5
##   term            estimate std.error statistic  p.value
##   <chr>              <dbl>     <dbl>     <dbl>    <dbl>
## 1 (Intercept)       1.72     0.305        5.65 1.62e- 8
## 2 percent_urban     0.0163   0.00357      4.56 5.15e- 6
## 3 historicalgop    -1.51     0.135      -11.2  4.39e-29
## 4 historicalswing  -0.609    0.105       -5.78 7.52e- 9

tidy(model_basic, exponentiate = TRUE)
## # A tibble: 4 × 5
##   term            estimate std.error statistic  p.value
##   <chr>              <dbl>     <dbl>     <dbl>    <dbl>
## 1 (Intercept)        5.60     0.305        5.65 1.62e- 8
## 2 percent_urban      1.02     0.00357      4.56 5.15e- 6
## 3 historicalgop      0.220    0.135      -11.2  4.39e-29
## 4 historicalswing    0.544    0.105       -5.78 7.52e- 9
```

- For the intercept $$\beta_0$$, this is the intercept on the logged scale when percent urban is 0 in historically Democratic states (since that's the omitted base case).
  We can backtransform this to the response/count scale by exponentiating it: $$e^{1.7225} = 5.599$$. That means that in a historically Democratic non-urban state, we'd expect to see 5.6 anti-discrimination laws.

  But the most un-urban Democratic states are Maine and Vermont, each at 38% urban, so the intercept isn't super important here.

- For the percent urban coefficient $$\beta_1$$, this is the slope of the line on the log scale: we can expect the logged number of laws to increase by 0.0163 for every additional percentage point of urban-ness. To make that more interpretable we can exponentiate it ($$e^{0.0163} = 1.0164$$), which means that a 1 percentage point increase in urban-ness is associated with 1.0164 times as many anti-discrimination laws (a 1.64% increase).

- For the party/historical coefficients $$\beta_2$$ and $$\beta_3$$, these are the shifts in the logged Democratic intercept (again because it's the omitted base case). We'd thus expect the logged number of laws in GOP states to be 1.5 lower on average. That's hard to make sense of on the log scale, but if we exponentiate it ($$e^{-1.5145} = 0.2199$$), we find that GOP states should have only 22% as many anti-discrimination laws as a typical Democratic state.

Or even better, we can look at the average marginal effects for these coefficients and get an overall average slope and change in intercept across the whole range of the fitted line.
Here, the average slope is 0.17 more laws (not logged laws) per percentage point of urban-ness, and GOP states average 14.7 fewer laws than Democratic states (that's huge!).

```r
mfx_basic <- marginaleffects(model_basic)
tidy(mfx_basic)
##       type          term    contrast    estimate  std.error  statistic
## 1 response percent_urban       dY/dX   0.1718771 0.03894868   4.412913
## 2 response    historical   gop - dem -14.7199505 1.33083188 -11.060714
## 3 response    historical swing - dem  -8.6083989 1.47239777  -5.846517
##        p.value     conf.low   conf.high
## 1 1.019892e-05   0.09553912   0.2482152
## 2 1.945431e-28 -17.32833305 -12.1115680
## 3 5.019718e-09 -11.49424546  -5.7225522
```

…or we can look at the average marginal effects at user-specified or representative values, like in prototypical urban and rural Democratic and Republican states:

```r
mfx_basic_typical <- model_basic |>
  marginaleffects(newdata = datagrid(percent_urban = c(40, 90),
                                     historical = c("dem", "gop")),
                  variables = "percent_urban",
                  by = c("percent_urban", "historical"))
tidy(mfx_basic_typical)
##       type          term    contrast percent_urban historical   estimate
## 1 response percent_urban mean(dY/dX)            40        dem 0.17440682
## 2 response percent_urban mean(dY/dX)            40        gop 0.03835624
## 3 response percent_urban mean(dY/dX)            90        dem 0.39318726
## 4 response percent_urban mean(dY/dX)            90        gop 0.08647131
##     std.error statistic      p.value   conf.low  conf.high
## 1 0.015033937 11.600875 4.078865e-31 0.14494084 0.20387279
## 2 0.006152249  6.234507 4.532025e-10 0.02629806 0.05041443
## 3 0.098768546  3.980896 6.865611e-05 0.19960447 0.58677006
## 4 0.027393506  3.156635 1.596007e-03 0.03278103 0.14016159
```

This is neat! For Democratic states the backtransformed slope/effect is fairly large: a one percentage point increase (or rather, an infinitesimally small increase, since we're working with instantaneous partial derivatives here) is associated with 0.17 more anti-discrimination laws in rural states and 0.39 in urban states.
In Republican states, the effect is small, with just 0.04 and 0.09 more laws in rural and urban states.

### Prelude III: Poisson assumptions

Poisson models have a few important assumptions:

1. Structure of the data: Conditioned on predictors $$X$$, the observed data $$Y_i$$ for each case $$i$$ is independent of other cases like case $$j$$
2. Structure of $$Y$$: The outcome is a discrete count of events
3. Structure of the relationship: The logged average $$Y$$ value can be written as a linear combination of the predictors: $$\log(\lambda_i) = \beta_0 + \beta_1 X_{i1} + \dots$$
4. Structure of the variability in $$Y$$: The mean and variance of a Poisson distribution are the same, so there should be more spread around the fitted line for higher values of $$Y$$

The first three are all straightforward and standard for GLM-type models: $$Y$$ needs to be independent, $$Y$$ needs to be a count, and $$\log(Y)$$ has to be modelable with a linear model.

The fourth is unique to Poisson models, though.
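Before checking this fourth assumption against the data, the property itself is easy to see in simulated draws (a quick sketch; the choice of λ = 10 here is arbitrary):

```r
# Equal-dispersion property of the Poisson distribution: draws from
# Poisson(lambda) have a mean AND a variance that are both close to lambda
set.seed(1234)
draws <- rpois(100000, lambda = 10)
mean(draws)  # ≈ 10
var(draws)   # ≈ 10
```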
In the Poisson distribution, the mean and the variance are the same thing, both when looking at $$Y$$ by itself and when conditioning it on other things:

$$
\begin{aligned}
E(Y) &= \operatorname{Var}(Y) = \lambda \text{, and} \\
E(Y \mid X) &= \operatorname{Var}(Y \mid X) = \lambda
\end{aligned}
$$

We can check that with the count of laws, both overall:

$$E(\text{Laws}) = \operatorname{Var}(\text{Laws})$$

```r
equality |>
  summarise(mean = mean(laws),
            variance = sd(laws))
## # A tibble: 1 × 2
##    mean variance
##   <dbl>    <dbl>
## 1  10.6     10.3
```

And across different levels of urban-ness:

$$E(\text{Laws} \mid \text{Percent urban}) = \operatorname{Var}(\text{Laws} \mid \text{Percent urban})$$

```r
equality_across_urban <- equality |>
  mutate(urban_bins = santoku::chop_quantiles(percent_urban,
                                              c(0.25, 0.5, 0.75))) |>
  group_by(urban_bins) |>
  summarise(mean = mean(laws),
            variance = sd(laws)) |>
  mutate(percent_urban = quantile(equality$percent_urban,
                                  c(0, 0.25, 0.5, 0.75) + 0.125),
         .after = urban_bins)
equality_across_urban
## # A tibble: 4 × 4
##   urban_bins  percent_urban  mean variance
##   <fct>               <dbl> <dbl>    <dbl>
## 1 [0%, 25%)            56.7  6.17     6.22
## 2 [25%, 50%)           70.2  3.5      3.03
## 3 [50%, 75%]           77.9 12        9.37
## 4 (75%, 100%]          90.6 20.5     11.5
```

That's magical. In general, the assumptions hold pretty well. It gets a little off (underdispersed) for higher values of percent urban, but overall, the mean and variance are the same!

```r
equality |>
  ggplot(aes(x = percent_urban, y = laws)) +
  geom_point(size = 1, color = "grey60") +
  geom_smooth(se = FALSE, method = "glm",
              method.args = list(family = "poisson"),
              size = 0.5, color = "grey40") +
  geom_pointrange(data = equality_across_urban,
                  aes(y = mean, ymin = mean - variance, ymax = mean + variance),
                  color = clrs[4])
## Warning: Using `size` aesthetic for lines was deprecated in ggplot2 3.4.0.
## ℹ Please use `linewidth` instead.
```

### Defining the priors

Okay cool.
Now that we've checked the Poisson assumptions and these coefficients make sense, we can set good logical priors for the different parameters in our Poisson model.

For our priors, we'll say that we think the number of anti-discrimination laws in a typical state is 7. The log of that is 2ish ($$\log(7) \approx 1.95$$). We'll also say that this logged intercept could range ±1 around that mean, so 1–3. In the unlogged world, that means a typical state would have between 3 and 20 laws ($$e^1 \approx 3; e^3 \approx 20$$). Our prior for $$\beta_0$$ is thus normal(2, 0.5).

In the book they specify a vague normal(0, 2.5) prior for all the other coefficients and rely on rstanarm's autoscaling to make them reflect the data better. Here for fun I'll use brms to be more specific about the priors.

For urban-ness ($$\beta_1$$), I think there's definitely a positive relationship, but it's not going to be massive. A 10 percentage point increase in urban-ness in a typical state will probably add a couple more laws. The percent change from going from 7 (our prior intercept) to 9 is 0.3 ($$\frac{9 - 7}{7} = 0.286$$). Scaling that down to the result of a 1 percentage point increase gives us a change of 0.0286, or, thought of multiplicatively, 1.0286. The log of 1.0286 is 0.0282, so we're looking for coefficients around that.

```r
(pct_change_10 <- (9 - 7) / 7)
## [1] 0.2857143
(pct_change_1 <- pct_change_10 * 0.1)
## [1] 0.02857143
log(1 + pct_change_1)
## [1] 0.02817088
```

To get a sense for the range around that mean, let's pretend a typical state goes from 7 to 40 laws as it becomes a little bit more urban. That's a huge (and probably unlikely) jump! What does that look like in logged coefficients?

```r
(pct_change_10 <- (40 - 7) / 7)
## [1] 4.714286
(pct_change_1 <- pct_change_10 * 0.1)
## [1] 0.4714286
log(1 + pct_change_1)
## [1] 0.3862337
```

An effect that big would have a coefficient of 0.386.
So in general, the range of plausible coefficients doesn't ever get too high. A coefficient of 2, for example, would imply 7.4 times as many anti-discrimination laws (an increase of about 640%!) from just a 1 percentage point increase in urban-ness. That's wild.

```r
exp(2)
## [1] 7.389056
```

So we'll set the prior average at 0, with a small range around it so that it goes from -2 to 2. For kicks and giggles, we'll use a t-distribution instead of a normal distribution, since the t-distribution has fatter tails and makes large coefficients more possible (maybe some states do see huge jumps? idk). You can see the fatter tails here with the blue t-distribution. Our official prior for $$\beta_1$$ is thus student_t(2, 0, 1).

```r
# R's built-in dt() function for t-distributions doesn't use mu and sigma, but
# extraDistr::dlst() does. We'll set df arbitrarily to 2 here since that's what
# McElreath did in his 7th video on robust regression :shrug:
ggplot() +
  geom_function(fun = ~dlst(., df = 2, mu = 0, sigma = 1),
                size = 1.5, color = clrs[1]) +
  geom_function(fun = ~dnorm(., 0, 1), color = clrs[2], size = 0.5) +
  xlim(c(-4, 4))
```

For the party-specific changes in intercept ($$\beta_2$$ and $$\beta_3$$), we can conceptualize these as the number of GOP state laws as a percentage of Democratic state laws. For example, we've already said that the typical zero-urban Democratic state has around 7 laws. Based on background knowledge of how GOP states have dealt with LGBTQ+ issues, I'm guessing that there's a big difference in those states. What if a GOP state has just one expected law? That would be 14% ($$\frac{1}{7} \approx 0.143$$) of a typical Democratic state. With 6 laws, it would have 86% as many laws ($$\frac{6}{7} \approx 0.857$$), and so on.
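Those hypothetical ratios, and the log-scale coefficients each one implies, can be tabulated in one go (a small sketch using the example counts from the text, with 7 laws as the Democratic baseline):

```r
# GOP-to-Democratic law ratios (1, 4, 6, or 8 laws vs. 7), and the
# log-scale coefficient each ratio corresponds to
ratios <- c(1, 4, 6, 8) / 7
round(ratios, 3)
## [1] 0.143 0.571 0.857 1.143
round(log(ratios), 3)
## [1] -1.946 -0.560 -0.154  0.134
```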
We'll assume that GOP states will generally have fewer laws than Democratic states, meaning the ratio would be less than 1 (a GOP state with 8 laws next to a Democratic state's 7 would instead have a ratio of $$\frac{8}{7} \approx 1.14$$). Let's say that on average GOP states will have 60% of the laws a Democratic state would: compared to a Democratic state with 7 laws, a GOP state would have about 4 ($$7 \times 0.6 = 4.2$$). The ratio could be as low as 10%, and it could maybe exceed 100% sometimes, like 110% or 150%, but it can never be below 0. Something like this half-t-distribution:

```r
tibble(x = seq(0.1, 5, by = 0.01)) |>
  mutate(y = dlst(x, df = 2, mu = 0.6, sigma = 2)) |>
  ggplot(aes(x = x, y = y)) +
  geom_line(size = 1, color = clrs[3]) +
  geom_vline(xintercept = 1, color = "grey50", linetype = "21") +
  annotate(geom = "label", x = 0.4, y = 0.05,
           label = "Fewer laws relative\nto Democratic states", size = 3) +
  annotate(geom = "label", x = 1.6, y = 0.05,
           label = "More laws relative\nto Democratic states", size = 3) +
  scale_x_continuous(labels = scales::percent_format(), breaks = seq(0, 5, 0.5)) +
  labs(x = "% of laws in identical Democratic state")
```

That's cool, but we can't use that distribution in Stan because we're actually modeling the logged ratio. To get an idea of the general shape of the logged distribution we can log the t-distribution:

```r
tibble(x = seq(0.1, 6, by = 0.01)) |>
  mutate(y = dlst(x, df = 2, mu = 0.6, sigma = 2)) |>
  mutate(x = log(x)) |>
  ggplot(aes(x = x, y = y)) +
  geom_line(size = 1, color = clrs[3]) +
  geom_vline(xintercept = 0, color = "grey50", linetype = "21") +
  annotate(geom = "label", x = -0.5, y = 0.05,
           label = "Fewer laws relative\nto Democratic states", size = 3) +
  annotate(geom = "label", x = 0.5, y = 0.05,
           label = "More laws relative\nto Democratic states", size = 3)
```

It's doing some weird things on the left side of the plot because of how logs work near zero.
The closer we get to 0, the more negative the logged value becomes:

```r
log(0.1)
## [1] -2.302585
log(0.01)
## [1] -4.60517
log(0.00001)
## [1] -11.51293
log(1e-10)
## [1] -23.02585
```

It's super unlikely that we'll ever see a GOP state with 0.00000001% of the laws of a Democratic state, so a value like -23 on the logged scale is super implausible. A GOP state with just 1% of the laws of a Democratic state would have a logged value of -4.61; anything lower than that is extreme. Our average ratio of 0.6 is -0.511 on the log scale. We'll use a t-distribution again (for fat tails) with a sigma of 2, which creates this kind of distribution with most values below 0 (so the unlogged ratio is less than 100%):

```r
tibble(x = seq(-3, 2, by = 0.01)) |>
  mutate(y = dlst(x, df = 2, mu = log(0.6), sigma = 2)) |>
  ggplot(aes(x = x, y = y)) +
  geom_line(size = 1, color = clrs[3]) +
  geom_vline(xintercept = 0, color = "grey50", linetype = "21") +
  annotate(geom = "label", x = -0.7, y = 0.09,
           label = "Fewer laws relative\nto Democratic states", size = 3) +
  annotate(geom = "label", x = 0.7, y = 0.09,
           label = "More laws relative\nto Democratic states", size = 3)
```

This still doesn't make a ton of sense with logged values, so we can exponentiate it just to see what it looks like on the original scale of ratios:

```r
tibble(x = seq(-3, 1, by = 0.01)) |>
  mutate(y = dlst(x, df = 2, mu = log(0.6), sigma = 2)) |>
  mutate(x = exp(x)) |>
  ggplot(aes(x = x, y = y)) +
  geom_line(size = 1, color = clrs[3]) +
  geom_vline(xintercept = 1, color = "grey50", linetype = "21") +
  annotate(geom = "label", x = 0.7, y = 0.09,
           label = "Fewer laws relative\nto Democratic states", size = 3) +
  annotate(geom = "label", x = 1.3, y = 0.09,
           label = "More laws relative\nto Democratic states", size = 3) +
  scale_x_continuous(labels = scales::percent_format(), breaks = seq(0, 5, 0.5)) +
  labs(x = "% of laws in identical Democratic state")
```

That's not identical to the half-t-distribution we made up earlier, and it makes tiny ratios like 1% very unlikely, but the bulk of the distribution is still around 60% as expected, so we'll go with it. Our final prior for $$\beta_2$$ on the log scale is thus student_t(2, -0.5, 2) (rounding $$\log(0.6) \approx -0.51$$ to -0.5).

Historical swing states behave a little differently. Some of them might have more laws than a typical Democratic state (like $$\frac{8}{7}$$, or 1.14, or 114%); some might have fewer (like $$\frac{6}{7}$$, or 0.86, or 86%). In this case we don't know much about the direction of the difference, so we'll say that the average ratio is 100% ± some amount:

```r
tibble(x = seq(-2, 1.5, by = 0.01)) |>
  mutate(y = dlst(x, df = 2, mu = 0, sigma = 2)) |>
  mutate(x = exp(x)) |>
  ggplot(aes(x = x, y = y)) +
  geom_line(size = 1, color = clrs[2]) +
  geom_vline(xintercept = 1, color = "grey50", linetype = "21") +
  annotate(geom = "label", x = 0.6, y = 0.11,
           label = "Fewer laws\nrelative to\nDemocratic\nstates", size = 3) +
  annotate(geom = "label", x = 1.4, y = 0.11,
           label = "More laws\nrelative to\nDemocratic\nstates", size = 3) +
  scale_x_continuous(labels = scales::percent_format(), breaks = seq(0, 5, 0.5)) +
  labs(x = "% of laws in identical Democratic state")
```

On a logged scale this is nice and symmetrical around 0:

```r
tibble(x = seq(-2, 2, by = 0.01)) |>
  mutate(y = dlst(x, df = 2, mu = 0, sigma = 2)) |>
  ggplot(aes(x = x, y = y)) +
  geom_line(size = 1, color = clrs[2]) +
  geom_vline(xintercept = 0, color = "grey50", linetype = "21") +
  annotate(geom = "label", x = -0.6, y = 0.11,
           label = "Fewer laws relative\nto Democratic states", size = 3) +
  annotate(geom = "label", x = 0.6, y = 0.11,
           label = "More laws relative\nto Democratic states", size = 3)
```

A student_t(2, 0, 2) distribution looks reasonable and vague enough, so our final prior for $$\beta_3$$ is student_t(2, 0, 2).
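To sanity-check the four priors together numerically before formalizing the model, we can draw from each and exponentiate (a rough sketch; `extraDistr::rlst()` draws from the same location-scale t-distribution plotted above):

```r
# Draw from each prior and exponentiate to see the implied
# count-scale quantities
library(extraDistr)
set.seed(1234)

prior_check <- tibble(
  b0_intercept = rnorm(1000, 2, 0.5),
  b1_urban     = rlst(1000, df = 2, mu = 0, sigma = 1),
  b2_gop       = rlst(1000, df = 2, mu = -0.5, sigma = 2),
  b3_swing     = rlst(1000, df = 2, mu = 0, sigma = 2)
)

# Medians should sit near exp(2) ≈ 7.4 laws in a typical Democratic state
# and a GOP ratio near exp(-0.5) ≈ 0.61
prior_check |>
  summarise(laws_typical = median(exp(b0_intercept)),
            gop_ratio    = median(exp(b2_gop)))
```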
### Finally, the formal model

PHEW. OKAY. So with all of that, here's our official model and priors:

$$
\begin{aligned}
\text{Laws}_i &\sim \operatorname{Poisson}(\lambda_i) \\
\log(\lambda_i) &= \beta_0 + \beta_1\ \text{Percent urban}_i + \beta_2\ \text{GOP}_i + \beta_3\ \text{Swing}_i \\
\\
\beta_0 &\sim \mathcal{N}(2, 0.5) \\
\beta_1 &\sim \operatorname{Student t}(\nu = 2, \mu = 0, \sigma = 1) \\
\beta_2 &\sim \operatorname{Student t}(\nu = 2, \mu = -0.5, \sigma = 2) \\
\beta_3 &\sim \operatorname{Student t}(\nu = 2, \mu = 0, \sigma = 2)
\end{aligned}
$$

How reasonable are all these priors when they're working together? Let's simulate it!

```r
priors <- c(prior(normal(2, 0.5), class = Intercept),
            prior(student_t(2, 0, 1), class = b, coef = "percent_urban"),
            prior(student_t(2, -0.5, 2), class = b, coef = "historicalgop"),
            prior(student_t(2, 0, 2), class = b, coef = "historicalswing"))

model_equality_prior_brms <- brm(
  bf(laws ~ percent_urban + historical),
  data = equality,
  family = poisson(),
  prior = priors,
  sample_prior = "only",
  chains = 4, iter = 4000, seed = BAYES_SEED,
  backend = "cmdstanr", refresh = 0
)
## Start sampling
```

It's all over the place, with different slopes across different historical parties, which is good:

```r
prior_draws_brms <- equality |>
  group_by(historical) |>
  summarize(min = min(percent_urban), max = max(percent_urban)) |>
  mutate(percent_urban = map2(min, max, ~seq(.x, .y, 1))) |>
  unnest(percent_urban) |>
  add_epred_draws(model_equality_prior_brms, ndraws = 100)

prior_draws_brms |>
  ggplot(aes(x = percent_urban, y = .epred)) +
  geom_line(aes(group = paste(historical, .draw), color = historical),
            alpha = 0.5, size = 0.5) +
  coord_cartesian(ylim = c(0, 100)) +
  scale_color_manual(values = c(clrs[6], clrs[3], clrs[2])) +
  labs(x = "Percent urban", y = "Predicted number of laws", color = "Party") +
  theme(legend.position = "bottom")
```

We can't specify individual parameter priors with rstanarm (as far as I can tell?), so we'll just do what the book does and use normal(0, 2.5) with magical autoscaling:

```r
equality_model_prior <- stan_glm(
  laws ~ percent_urban + historical,
  data = equality,
  family = poisson,
  prior_intercept = normal(2, 0.5),
  prior = normal(0, 2.5, autoscale = TRUE),
  chains = 4, iter = 4000, seed = 84735, refresh = 0,
  prior_PD = TRUE
)
```

What priors did rstanarm decide were good?

```r
prior_summary(equality_model_prior)
## Priors for model 'equality_model_prior'
## ------
## Intercept (after predictors centered)
##  ~ normal(location = 2, scale = 0.5)
##
## Coefficients
##   Specified prior:
##     ~ normal(location = [0,0,0], scale = [2.5,2.5,2.5])
##   Adjusted prior:
##     ~ normal(location = [0,0,0], scale = [0.17,4.97,5.60])
## ------
## See help('prior_summary.stanreg') for more details
```

It decided on $$\mathcal{N}(0, 0.17)$$, $$\mathcal{N}(0, 4.97)$$, and $$\mathcal{N}(0, 5.6)$$, which is mostly a lot wider than what I decided on above :shrug:. Those wider priors allow a larger range of possible values than the narrower priors from earlier:

```r
prior_draws_rstanarm <- equality |>
  group_by(historical) |>
  summarize(min = min(percent_urban), max = max(percent_urban)) |>
  mutate(percent_urban = map2(min, max, ~seq(.x, .y, 1))) |>
  unnest(percent_urban) |>
  add_epred_draws(equality_model_prior, ndraws = 100)

prior_draws_rstanarm |>
  ggplot(aes(x = percent_urban, y = .epred)) +
  geom_line(aes(group = paste(historical, .draw), color = historical),
            alpha = 0.5, size = 0.5) +
  coord_cartesian(ylim = c(0, 100)) +
  scale_color_manual(values = c(clrs[6], clrs[3], clrs[2])) +
  labs(x = "Percent urban", y = "Predicted number of laws", color = "Party") +
  theme(legend.position = "bottom")
```

## 12.2: Simulating the posterior

With these informative-ish priors, we can finally fit the actual model and play with the posterior.
### Run the model

FOR FUN AND EXCITEMENT AND LEARNING, I wrote the model in Stan here, but I'm not going to work with its posterior samples or anything for the rest of the notebook. I just wanted to try writing a non-OLS model in Stan. It is definitely not optimized or efficient or anything, but it works and it's neat.

```r
priors <- c(prior(normal(2, 0.5), class = Intercept),
            prior(student_t(2, 0, 1), class = b, coef = "percent_urban"),
            prior(student_t(2, -0.5, 2), class = b, coef = "historicalgop"),
            prior(student_t(2, 0, 2), class = b, coef = "historicalswing"))

model_equality_brms <- brm(
  bf(laws ~ percent_urban + historical),
  data = equality,
  family = poisson(),
  prior = priors,
  chains = 4, iter = 4000, seed = BAYES_SEED,
  backend = "cmdstanr", refresh = 0
)
## Start sampling

equality_model <- stan_glm(
  laws ~ percent_urban + historical,
  data = equality,
  family = poisson,
  prior_intercept = normal(2, 0.5),
  prior = normal(0, 2.5, autoscale = TRUE),
  chains = 4, iter = 4000, seed = 84735, refresh = 0
)
```

There are different ways to write GLMs in Stan. First, we can use the more traditional mathy approach: calculate $$\lambda$$ as a function of the intercept and all the slopes multiplied by all the Xs, exponentiate that $$\lambda$$, then feed the unlogged $$\lambda$$ to poisson(). This is precisely what the mathematical model says to do, but it involves manual matrix multiplication.
12-stan/equality-manual.stan

```stan
data {
  int<lower=0> n;  // Number of rows
  int<lower=0> k;  // Number of predictors
  matrix[n,k] X;   // Predictors
  array[n] int Y;  // Outcome variable
}

parameters {
  real alpha;
  vector[k] beta;
}

transformed parameters {
  array[n] real log_lambda;
  array[n] real<lower=0> lambda;

  for (i in 1:n) {
    // We can be super explicit about the whole equation, expanding it to
    // beta1*x1 + beta2*x2 + ..., or alternatively, we can use dot_product() to
    // multiply all the betas and Xs at once
    log_lambda[i] = alpha + beta[1] * X[i,1] + beta[2] * X[i,2] + beta[3] * X[i,3];
    // log_lambda[i] = alpha + dot_product(X[i], beta);
    lambda[i] = exp(log_lambda[i]);
  }
}

model {
  alpha ~ normal(2, 0.5);
  beta[1] ~ student_t(2, 0, 1);
  beta[2] ~ student_t(2, -0.5, 2);
  beta[3] ~ student_t(2, 0, 2);

  Y ~ poisson(lambda);
}

generated quantities {
  array[n] int Y_rep;
  vector[n] log_lik;

  for (i in 1:n) {
    log_lik[i] = poisson_lpmf(Y[i] | lambda[i]);
    Y_rep[i] = poisson_rng(lambda[i]);
  }
}
```

```r
equality_stan_manual <- cmdstan_model("12-stan/equality-manual.stan")

# Build a matrix of predictors for Stan
X <- model.matrix(~ 1 + percent_urban + historical, data = equality)[,-1]

equality_samples_manual <- equality_stan_manual$sample(
  data = list(n = nrow(equality),
              Y = equality$laws,
              X = X,
              k = ncol(X)),
  parallel_chains = 4, iter_warmup = 5000, iter_sampling = 5000,
  refresh = 0, seed = BAYES_SEED
)
## Running MCMC with 4 parallel chains...
##
## Chain 1 finished in 1.4 seconds.
## Chain 3 finished in 1.5 seconds.
## Chain 2 finished in 1.9 seconds.
## Chain 4 finished in 1.9 seconds.
##
## All 4 chains finished successfully.
## Mean chain execution time: 1.7 seconds.
## Total execution time: 2.0 seconds.
```
I'm not going to work with these Stan models in the rest of the notebook because it's a hassle, but just to check that they worked, here are the coefficients, the LOO stats, and pp_check():

```r
equality_samples_manual$print(
  variables = c("alpha", "beta[1]", "beta[2]", "beta[3]"),
  "mean", "median", "sd", ~quantile(.x, probs = c(0.025, 0.975))
)
##  variable  mean median   sd  2.5% 97.5%
##   alpha    1.78   1.78 0.26  1.27  2.28
##   beta[1]  0.02   0.02 0.00  0.01  0.02
##   beta[2] -1.53  -1.52 0.13 -1.79 -1.27
##   beta[3] -0.61  -0.61 0.10 -0.82 -0.42

equality_samples_manual$loo()
##
## Computed from 20000 by 49 log-likelihood matrix
##
##          Estimate   SE
## elpd_loo   -194.2 20.2
## p_loo        17.7  4.6
## looic       388.5 40.3
## ------
## Monte Carlo SE of elpd_loo is 0.1.
##
## Pareto k diagnostic values:
##                          Count Pct.    Min. n_eff
## (-Inf, 0.5]   (good)     47    95.9%   1013
##  (0.5, 0.7]   (ok)        2     4.1%   127
##    (0.7, 1]   (bad)       0     0.0%   <NA>
##    (1, Inf)   (very bad)  0     0.0%   <NA>
##
## All Pareto k estimates are ok (k < 0.7).
## See help('pareto-k-diagnostic') for details.

equality_samples_manual |>
  spread_draws(Y_rep[i]) |>
  slice_sample(n = 25) |>
  mutate(id = 1:n()) |>
  ggplot(aes(x = Y_rep)) +
  geom_density(aes(group = id), color = "lightblue", size = 0.25) +
  geom_density(data = equality, aes(x = laws), color = "darkblue", size = 1)
```

Instead of manually doing the matrix multiplication, Stan has shortcut functions specifically for GLMs. The poisson_log_glm() function, for instance, takes a matrix of predictors, the intercept, and the coefficients, and deals with all the math and multiplication automatically.
12-stan/equality.stan

```stan
data {
  int<lower=0> n;  // Number of rows
  int<lower=0> k;  // Number of predictors
  matrix[n,k] X;   // Predictors
  array[n] int Y;  // Outcome variable
}

parameters {
  real alpha;
  vector[k] beta;
}

model {
  // Priors
  alpha ~ normal(2, 0.5);
  beta[1] ~ student_t(2, 0, 1);
  beta[2] ~ student_t(2, -0.5, 2);
  beta[3] ~ student_t(2, 0, 2);

  // Model
  Y ~ poisson_log_glm(X, alpha, beta);
}

generated quantities {
  array[n] int Y_rep;
  vector[n] log_lik;
  vector[n] lambda_hat = alpha + X * beta;

  for (i in 1:n) {
    // We can use the shortcut poisson_log_glm_lpmf, which works just like
    // poisson_log_glm from earlier
    log_lik[i] = poisson_log_glm_lpmf({Y[i]} | X[i,], alpha, beta);
    // Or we can use poisson_log_lpmf and feed it lambda_hat
    // log_lik[i] = poisson_log_lpmf(Y[i] | lambda_hat[i]);

    // Posterior predictive distribution
    Y_rep[i] = poisson_log_rng(lambda_hat[i]);
  }
}
```

```r
equality_stan <- cmdstan_model("12-stan/equality.stan")

X <- model.matrix(~ 1 + percent_urban + historical, data = equality)[,-1]

equality_samples <- equality_stan$sample(
  data = list(n = nrow(equality),
              Y = equality$laws,
              X = X,
              k = ncol(X)),
  parallel_chains = 4, iter_warmup = 5000, iter_sampling = 5000,
  refresh = 0, seed = BAYES_SEED
)
## Running MCMC with 4 parallel chains...
##
## Chain 1 finished in 0.7 seconds.
## Chain 3 finished in 0.7 seconds.
## Chain 2 finished in 0.9 seconds.
## Chain 4 finished in 0.9 seconds.
##
## All 4 chains finished successfully.
## Mean chain execution time: 0.8 seconds.
## Total execution time: 1.0 seconds.

equality_samples$print(
  variables = c("alpha", "beta[1]", "beta[2]", "beta[3]"),
  "mean", "median", "sd", ~quantile(.x, probs = c(0.025, 0.975))
)
##  variable  mean median   sd  2.5% 97.5%
##   alpha    1.78   1.79 0.26  1.28  2.27
##   beta[1]  0.02   0.02 0.00  0.01  0.02
##   beta[2] -1.52  -1.52 0.13 -1.79 -1.27
##   beta[3] -0.61  -0.61 0.10 -0.82 -0.41

equality_samples$loo()
## Warning: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
##
## Computed from 20000 by 49 log-likelihood matrix
##
##          Estimate   SE
## elpd_loo   -194.5 20.3
## p_loo        18.0  4.8
## looic       389.0 40.7
## ------
## Monte Carlo SE of elpd_loo is NA.
##
## Pareto k diagnostic values:
##                          Count Pct.    Min. n_eff
## (-Inf, 0.5]   (good)     48    98.0%   963
##  (0.5, 0.7]   (ok)        0     0.0%   <NA>
##    (0.7, 1]   (bad)       1     2.0%   38
##    (1, Inf)   (very bad)  0     0.0%   <NA>
## See help('pareto-k-diagnostic') for details.

equality_samples |>
  spread_draws(Y_rep[i]) |>
  slice_sample(n = 25) |>
  mutate(id = 1:n()) |>
  ggplot(aes(x = Y_rep)) +
  geom_density(aes(group = id), color = "lightblue", size = 0.25) +
  geom_density(data = equality, aes(x = laws), color = "darkblue", size = 1)
```

### Regular diagnostics

Before looking at the coefficients/parameters and predictions, let's check the diagnostics:

#### Trace plots

FUZZY.

```r
model_equality_brms |>
  gather_draws(`^b_.*`, regex = TRUE) |>
  ggplot(aes(x = .iteration, y = .value, color = factor(.chain))) +
  geom_line(size = 0.1) +
  scale_color_viridis_d(option = "rocket", end = 0.85) +
  facet_wrap(vars(.variable), scales = "free_y")
```

#### Trank plots

Nice and random.

```r
model_equality_brms |>
  gather_draws(`^b_.*`, regex = TRUE) |>
  group_by(.variable) |>
  mutate(draw_rank = rank(.value)) |>
  ggplot(aes(x = draw_rank, color = factor(.chain))) +
  stat_bin(geom = "step", binwidth = 250,
           position = position_identity(), boundary = 0) +
  scale_color_viridis_d(option = "rocket", end = 0.85) +
  facet_wrap(vars(.variable), scales = "free_y") +
  theme(axis.text.y = element_blank(),
        axis.title.y = element_blank(),
        axis.ticks.y = element_blank())
```

#### Posterior predictive plots

It seems to be overpredicting values < 10, but it does follow the general shape of the data, so that's reassuring.
```r
pp_check(model_equality_brms, ndraws = 50)
```

#### LOO, PSIS, and WAIC

We don't have too many issues with influential points with overly high Pareto k values, and loo() is generally happy:

```r
loo(model_equality_brms)
##
## Computed from 8000 by 49 log-likelihood matrix
##
##          Estimate   SE
## elpd_loo   -195.7 20.1
## p_loo        19.7  5.1
## looic       391.4 40.3
## ------
## Monte Carlo SE of elpd_loo is 0.2.
##
## Pareto k diagnostic values:
##                          Count Pct.    Min. n_eff
## (-Inf, 0.5]   (good)     46    93.9%   978
##  (0.5, 0.7]   (ok)        3     6.1%   126
##    (0.7, 1]   (bad)       0     0.0%   <NA>
##    (1, Inf)   (very bad)  0     0.0%   <NA>
##
## All Pareto k estimates are ok (k < 0.7).
## See help('pareto-k-diagnostic') for details.
```

For fun, we can recreate Figure 7.10 from Rethinking to see which points are causing some outlier weirdness:

```r
model_equality_brms <- add_criterion(model_equality_brms,
                                     criterion = c("loo", "waic"))
## Warning:
## 13 (26.5%) p_waic estimates greater than 0.4. We recommend trying loo instead.

brms_diagnostics <- tibble(
  psis = model_equality_brms$criteria$loo$diagnostics$pareto_k,
  p_waic = model_equality_brms$criteria$waic$pointwise[, "p_waic"],
  state = pull(equality, state)) |>
  mutate(highlight = psis > 0.5 | p_waic > 1)

brms_diagnostics |>
  ggplot(aes(x = psis, y = p_waic)) +
  geom_point(aes(color = highlight)) +
  geom_text_repel(data = filter(brms_diagnostics, highlight),
                  aes(label = state), seed = 1234, direction = "y") +
  geom_vline(xintercept = 0.5, linetype = "32") +
  scale_color_manual(values = c("grey40", clrs[4]), guide = "none") +
  labs(x = "PSIS Pareto k", y = "WAIC penalty")
```

#### Trace plots

Still fuzzy here too:

```r
equality_model |>
  gather_draws(`(Intercept)`, percent_urban, historicalgop, historicalswing) |>
  ggplot(aes(x = .iteration, y = .value, color = factor(.chain))) +
  geom_line(size = 0.1) +
  scale_color_viridis_d(option = "rocket", end = 0.85) +
  facet_wrap(vars(.variable), scales = "free_y")
```

#### Trank plots

Great.

```r
equality_model |>
  gather_draws(`(Intercept)`, percent_urban, historicalgop, historicalswing) |>
  group_by(.variable) |>
  mutate(draw_rank = rank(.value)) |>
  ggplot(aes(x = draw_rank, color = factor(.chain))) +
  stat_bin(geom = "step", binwidth = 250,
           position = position_identity(), boundary = 0) +
  scale_color_viridis_d(option = "rocket", end = 0.85) +
  facet_wrap(vars(.variable), scales = "free_y") +
  theme(axis.text.y = element_blank(),
        axis.title.y = element_blank(),
        axis.ticks.y = element_blank())
```

#### Posterior predictive plots

Lovely.

```r
pp_check(equality_model, n = 50)
```

#### LOO, PSIS, and WAIC

Interestingly, rstanarm finds that 3 observations have bad Pareto k scores!

```r
rstanarm_loo <- loo(equality_model)
## Warning: Found 3 observation(s) with a pareto_k > 0.7. We recommend calling 'loo' again with argument 'k_threshold = 0.7' in order to calculate the ELPD without the assumption that these observations are negligible. This will refit the model 3 times to compute the ELPDs for the problematic observations directly.

rstanarm_loo
##
## Computed from 8000 by 49 log-likelihood matrix
##
##          Estimate   SE
## elpd_loo   -196.1 20.2
## p_loo        20.2  5.3
## looic       392.2 40.3
## ------
## Monte Carlo SE of elpd_loo is NA.
##
## Pareto k diagnostic values:
##                          Count Pct.    Min. n_eff
## (-Inf, 0.5]   (good)     46    93.9%   851
##  (0.5, 0.7]   (ok)        0     0.0%   <NA>
##    (0.7, 1]   (bad)       3     6.1%   32
##    (1, Inf)   (very bad)  0     0.0%   <NA>
## See help('pareto-k-diagnostic') for details.
```

For whatever reason, Maine and Vermont are super outliers now in the rstanarm model :shrug:

```r
rstanarm_diagnostics <- tibble(
  psis = rstanarm_loo$pointwise[, "influence_pareto_k"],
  p_waic = waic(equality_model)$pointwise[, "p_waic"],
  state = pull(equality, state)) |>
  mutate(highlight = psis > 0.5 | p_waic > 1)
## Warning:
## 13 (26.5%) p_waic estimates greater than 0.4. We recommend trying loo instead.

rstanarm_diagnostics |>
  ggplot(aes(x = psis, y = p_waic)) +
  geom_point(aes(color = highlight)) +
  geom_text_repel(data = filter(rstanarm_diagnostics, highlight),
                  aes(label = state), seed = 1234, direction = "y") +
  geom_vline(xintercept = 0.5, linetype = "32") +
  scale_color_manual(values = c("grey40", clrs[4]), guide = "none") +
  labs(x = "PSIS Pareto k", y = "WAIC penalty")
```

### ELPD

For fun, we can compare the ELPD for the two models (more specific priors in brms; autoscaled priors in rstanarm) and see if one model performs better than the other. They're basically identical.

```r
tribble(
  ~model, ~stats,
  "Default auto-scaled priors (rstanarm)",
  as_tibble(rstanarm_loo$estimates, rownames = "statistic"),
  "Careful priors (brms)",
  as_tibble(model_equality_brms$criteria$loo$estimates, rownames = "statistic")
) |>
  unnest(stats) |>
  filter(statistic == "elpd_loo") |>
  ggplot(aes(x = Estimate, y = model, color = model)) +
  geom_pointrange(aes(xmin = Estimate - 2 * SE, xmax = Estimate + 2 * SE)) +
  scale_y_discrete(labels = scales::label_wrap(15)) +
  scale_color_manual(values = c(clrs[5], clrs[1]), guide = "none") +
  labs(x = "ELPD", y = NULL)
```

## 12.3: Interpreting the posterior

### Coefficients / parameters

So what do these coefficients all actually mean?
We can look at the fitted draws to see the predicted count of laws across a range of urban-ness and state political party\n\nequality %>%\nggplot(aes(x = percent_urban, y = laws, color = historical)) +\ngeom_point(data = equality, size = 1) +\ngeom_line(aes(y = .epred, group = paste(historical, .draw)),\nsize = 0.5, alpha = 0.3) +\nscale_color_manual(values = c(clrs[6], clrs[3], clrs[2])) +\nlabs(x = \"Percent urban\", y = \"Count of laws\", color = \"Party\") +\ntheme(legend.position = \"bottom\")\n\nequality %>%\nggplot(aes(x = percent_urban, y = laws, color = historical)) +\ngeom_point(data = equality, size = 1) +\ngeom_line(aes(y = .epred, group = paste(historical, .draw)),\nsize = 0.5, alpha = 0.3) +\nscale_color_manual(values = c(clrs[6], clrs[3], clrs[2])) +\nlabs(x = \"Percent urban\", y = \"Count of laws\", color = \"Party\") +\ntheme(legend.position = \"bottom\")\n\nLike we thought with our priors, Democratic states have more laws on average than GOP or swing states, and swing states have more than GOP states. The Democratic-GOP gap is substantial. Based just on the plot of predictions \u2191 there, there\u2019s like a 15\u201320 law gap! Also, the count of laws is higher in urban states, also as expected.\n\nWe can look at the posterior distributions of the parameters\/coefficients to get a more precise picture:\n\nLog-scale coefficients:\n\n# There's a weird bug in broom.mixed or brms or somewhere that makes brms\n# Poisson models lose the term column here??? idk why??? tidy() works fine with\n# the rstanarm model, and parameters::parameters(model_equality_brms) shows the\n# terms fine. 
So here I just add them in manually with get_variables()\ncoefs_brms <- tidy(model_equality_brms) |>\nselect(-c(effect, component, group)) |>\nmutate(term = get_variables(model_equality_brms)[1:4])\ncoefs_brms\n## # A tibble: 4 \u00d7 5\n## term estimate std.error conf.low conf.high\n## <chr> <dbl> <dbl> <dbl> <dbl>\n## 1 b_Intercept 1.70 0.303 1.09 2.28\n## 2 b_percent_urban 0.0164 0.00355 0.00967 0.0235\n## 3 b_historicalgop -1.51 0.135 -1.78 -1.25\n## 4 b_historicalswing -0.609 0.104 -0.813 -0.405\n\nmodel_equality_brms |>\ngather_draws(^b_.*, regex = TRUE) |>\nmutate(.variable = factor(.variable,\nlevels = c(\"b_Intercept\", \"b_percent_urban\",\n\"b_historicalgop\", \"b_historicalswing\"),\nordered = TRUE)) |>\nggplot(aes(x = .value, fill = .variable)) +\nstat_halfeye(normalize = \"xy\") +\nscale_fill_manual(values = c(clrs[5], clrs[4], clrs[3], clrs[2]), guide = \"none\") +\nfacet_wrap(vars(.variable), scales = \"free_x\")\n\nUnlogged coefficients:\n\ncoefs_brms |>\nmutate(across(c(estimate, conf.low, conf.high), ~exp(.)))\n## # A tibble: 4 \u00d7 5\n## term estimate std.error conf.low conf.high\n## <chr> <dbl> <dbl> <dbl> <dbl>\n## 1 b_Intercept 5.49 0.303 2.96 9.78\n## 2 b_percent_urban 1.02 0.00355 1.01 1.02\n## 3 b_historicalgop 0.220 0.135 0.169 0.288\n## 4 b_historicalswing 0.544 0.104 0.444 0.667\n\nmodel_equality_brms |>\ngather_draws(^b_.*, regex = TRUE) |>\nmutate(.value = exp(.value)) |>\nmutate(.variable = factor(.variable,\nlevels = c(\"b_Intercept\", \"b_percent_urban\",\n\"b_historicalgop\", \"b_historicalswing\"),\nordered = TRUE)) |>\nggplot(aes(x = .value, fill = .variable)) +\nstat_halfeye(normalize = \"xy\") +\nscale_fill_manual(values = c(clrs[5], clrs[4], clrs[3], clrs[2]), guide = \"none\") +\nfacet_wrap(vars(.variable), scales = \"free_x\")\n## Warning: Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: 
linewidth.\n\nLog-scale coefficients:\n\ncoefs_rstanarm <- tidy(equality_model, conf.int = TRUE)\ncoefs_rstanarm\n## # A tibble: 4 \u00d7 5\n## term estimate std.error conf.low conf.high\n## <chr> <dbl> <dbl> <dbl> <dbl>\n## 1 (Intercept) 1.71 0.307 1.19 2.19\n## 2 percent_urban 0.0165 0.00361 0.0108 0.0224\n## 3 historicalgop -1.51 0.137 -1.74 -1.29\n## 4 historicalswing -0.608 0.103 -0.783 -0.439\n\nequality_model |>\ngather_draws((Intercept), percent_urban, historicalgop, historicalswing) |>\nmutate(.variable = factor(.variable,\nlevels = c(\"(Intercept)\", \"percent_urban\",\n\"historicalgop\", \"historicalswing\"),\nordered = TRUE)) |>\nggplot(aes(x = .value, fill = .variable)) +\nstat_halfeye(normalize = \"xy\") +\nscale_fill_manual(values = c(clrs[5], clrs[4], clrs[3], clrs[2]), guide = \"none\") +\nfacet_wrap(vars(.variable), scales = \"free_x\")\n\nUnlogged coefficients:\n\ncoefs_rstanarm |>\nmutate(across(c(estimate, conf.low, conf.high), ~exp(.)))\n## # A tibble: 4 \u00d7 5\n## term estimate std.error conf.low conf.high\n## <chr> <dbl> <dbl> <dbl> <dbl>\n## 1 (Intercept) 5.50 0.307 3.28 8.95\n## 2 percent_urban 1.02 0.00361 1.01 1.02\n## 3 historicalgop 0.220 0.137 0.175 0.274\n## 4 historicalswing 0.544 0.103 0.457 0.645\n\nequality_model |>\ngather_draws((Intercept), percent_urban, historicalgop, historicalswing) |>\nmutate(.value = exp(.value)) |>\nmutate(.variable = factor(.variable,\nlevels = c(\"(Intercept)\", \"percent_urban\",\n\"historicalgop\", \"historicalswing\"),\nordered = TRUE)) |>\nggplot(aes(x = .value, fill = .variable)) +\nstat_halfeye(normalize = \"xy\") +\nscale_fill_manual(values = c(clrs[5], clrs[4], clrs[3], clrs[2]), guide = \"none\") +\nfacet_wrap(vars(.variable), scales = \"free_x\")\n## Warning: Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n\n#### Interpretation\n\nInterpretation 
time!\n\n\u2022 The intercept ($$\\beta_0$$): On the logged scale, this is the intercept when percent urban is 0 in historically Democratic states. On its own it\u2019s meaningless; exponentiating it gives us an expected count of laws in a completely rural Democratic state. The mean unlogged posterior value here is 5.5 laws, with a 95% credible interval ranging from 3 to 9.8.\n\n\u2022 The percent urban coefficient ($$\\beta_1$$): This is the slope of the line on the logged scale. We should expect the logged number of laws in a state to increase by that amount for each additional percentage point of urban-ness. The mean posterior value is 0.0164, with a 95% credible interval ranging from 0.0097 to 0.0235. That seems really tiny. We can make it more interpretable by exponentiating it, where a 1 percentage point increase in urban-ness is associated with an average of 1.0166 times (or 1.66%) more laws, with a 95% credible interval ranging from 1.0097 to 1.0238 (0.97% to 2.38%).\n\nWe can make this a little more interpretable if we think in larger changes. In Bayes Rules! they say to imagine that the urban population in one state is 25 percentage points higher than another state. If that\u2019s the case, we would expect $$e^{25 \\times 0.0164} \\approx 1.51$$ or 51% more laws, or 1.5 times the number of laws. If that rural state had 10 laws, we\u2019d expect 15 laws in the more urban state.\n\n\u2022 The historically GOP coefficient ($$\\beta_2$$): This is the shift in the logged intercept for historically Republican states. The logged number of laws is lower by 1.5 on average, but that doesn\u2019t make sense on its own. After exponentiation, we can think about ratios of GOP laws to Democratic laws. Here, that ratio has a posterior mean of 0.22 with a 95% credible interval of 0.169 to 0.288, meaning that Republican states have 22% (or 16.9%\u201328.8%) of the count of laws in similar Democratic states. That\u2019s a sizable difference! 
Compared to a Democratic state with 20 laws, a Republican state is predicted to only have 3\u20135 laws (20 \u00d7 0.169, 20 \u00d7 0.288). Oof.\n\n\u2022 The historically swing state coefficient ($$\\beta_3$$): This is the shift in the logged intercept for historically swing states. We interpret it the same way as the GOP shift. On the log scale it makes little sense; exponentiated, it has a posterior mean of 0.544 with a 95% credible interval of 0.444 to 0.667. That means that swing states have between 44.4%\u201366.67% of the laws of a Democratic state. Compared to a Democratic state with 20 laws, a swing state is predicted to have 9\u201313 laws (20 \u00d7 0.444, 20 \u00d7 0.667).\n\n### Marginal effects\n\nUnlogging these coefficients makes interpretation a lot easier, but it would be nice to work with counts directly too. We already did that in each of those interpretation paragraphs, translating the percent-level effects to counts in hypothetical situations (rural state with 10 laws, Democratic state with 20 laws, etc.). We can be more systematic about these conversions to counts by calculating marginal effects.\n\nBecause this model is curvy, the slope of the fitted line changes depending on the value of percent urban. Additionally, the size of the party-specific intercept shift changes across different values of the count of laws. So instead, we can look at the overall average slope and change in intercept across the whole range of values. And because we\u2019re working with posterior distributions, we actually get posterior distributions of marginal effects too!\n\nIf we look at overall marginal effect, we find that the posterior mean of the partial derivative for percent_urban on the count (or response) scale is 0.173, with a 95% credible interval of 0.1 to 0.25. That means that on an average, increasing urban-ness by one percentage point is associated with 0.1\u20130.25 additional laws, on average. 
The group contrasts are also helpful (and already on the count scale!). The average overall difference between Republican and Democratic states has a posterior mean of -14.64 laws, with a 95% credible interval of 12\u201317 laws, while the difference between swing states and Democratic states has a posterior mean of -8.5 laws (5.73\u201311.5).\n\nmfx_brms <- marginaleffects(model_equality_brms, type = \"response\")\ntidy(mfx_brms)\n## type term contrast estimate conf.low conf.high\n## 1 response percent_urban dY\/dX 0.173300 0.10019 0.2504355\n## 2 response historical gop - dem -14.635854 -17.31778 -12.0961604\n## 3 response historical swing - dem -8.561223 -11.50006 -5.7257059\n\nThat\u2019s the marginal effect for the average of the whole range of urban-ness, but it\u2019s maybe even more useful to look at the marginal effect at each possible level of percent urban.\n\nThis ends up being really neat! For the percent urban effect, it is small in rural states across all parties\u2014down at 50% urban, a 1 percentage point increase in urban-ness is associated with 0.2 additional laws for Democratic states, 0.1ish for swing states, and nearly 0 for Republican states.\n\nmfx_brms_typical <- model_equality_brms |>\nmarginaleffects(newdata = datagrid(percent_urban = seq(40, 90, by = 5),\nhistorical = c(\"dem\", \"gop\", \"swing\")),\nvariables = c(\"percent_urban\", \"historical\")) |>\nposteriordraws()\n\nmfx_brms_typical |>\nfilter(term != \"historical\" | !(historical %in% c(\"gop\", \"swing\"))) |>\nmutate(historical = ifelse(term == \"historical\", \"contrast\", as.character(historical))) |>\nmutate(term = fct_inorder(term)) |>\nggplot(aes(x = percent_urban, y = draw, color = historical, fill = historical)) +\nstat_lineribbon(alpha = 0.25) +\nscale_color_manual(values = c(clrs[6], clrs[3], clrs[2], clrs[1]),\nbreaks = c(\"dem\", \"gop\", \"swing\"), na.value = clrs[1]) +\nscale_fill_manual(values = c(clrs[6], clrs[3], clrs[2], clrs[1]),\nbreaks = c(\"dem\", 
\"gop\", \"swing\"), na.value = clrs[1]) +\nfacet_nested_wrap(vars(term, contrast), scales = \"free_y\") +\nlabs(color = NULL, fill = NULL, x = \"Percent urban\",\ny = \"Marginal effect or \u2206 in group means\\nCount of laws\") +\ntheme(legend.position = \"bottom\")\n## Warning: Using the size aesthietic with geom_ribbon was deprecated in ggplot2 3.4.0.\n## \u2139 Please use the linewidth aesthetic instead.\n## Warning: Unknown or uninitialised column: linewidth.\n## Warning: Using the size aesthietic with geom_line was deprecated in ggplot2 3.4.0.\n## \u2139 Please use the linewidth aesthetic instead.\n## Warning: Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n\n\u2191 that\u2019s all fancy with nested facets and manual ggplot work; we can also make these same plots with the plot_cme() function in marginaleffects:\n\nplot_cme(model_equality_brms,\neffect = \"percent_urban\",\ncondition = c(\"percent_urban\", \"historical\"))\n\nplot_cme(model_equality_brms,\neffect = \"historical\",\ncondition = \"percent_urban\")\n\n## 12.4: Posterior prediction\n\nWe can check how well this model predicts data by comparing the actual data of some state to the posterior predictive distribution for that state. 
In the book they use Minnesota, historically Democratic, fairly urban, with just 4 laws:\n\nequality |> filter(state == \"minnesota\")\n## # A tibble: 1 \u00d7 6\n## state region gop_2016 laws historical percent_urban\n## <fct> <fct> <dbl> <dbl> <fct> <dbl>\n## 1 minnesota midwest 44.9 4 dem 73.3\n\nWhat does the model predict? We can use posterior_predict() to find that out. It will return actual integer counts (since posterior predictions are on the original scale of the data):\n\nmn_pred_brms <- model_equality_brms |>\npredicted_draws(newdata = filter(equality, state == \"minnesota\"))\n\nmn_pred_brms |>\nggplot(aes(x = .prediction)) +\nstat_histinterval(slab_fill = clrs[4], slab_color = \"white\", outline_bars = TRUE,\nslab_size = 0.5) +\ngeom_vline(xintercept = 4)\n\nJust for fun, the book creates the posterior predictive distribution by hand by making the linear predictor $$\\lambda$$ for Minnesota, then using rpois() to simulate the number of laws based on that $$\\lambda$$. It\u2019s the same!\n\nmodel_equality_brms |>\nspread_draws(^b_.*, regex = TRUE) |>\nmutate(log_lambda = b_Intercept + b_percent_urban*73.3 +\nb_historicalgop*0 + b_historicalswing*0,\nlambda = exp(log_lambda),\ny_new = rpois(n(), lambda = lambda)) |>\nggplot(aes(x = y_new)) +\nstat_histinterval(slab_fill = clrs[4], slab_color = \"white\", outline_bars = TRUE,\nslab_size = 0.5) +\ngeom_vline(xintercept = 4)\n\nmn_pred_rstanarm <- equality_model |>\npredicted_draws(newdata = filter(equality, state == \"minnesota\"))\n\nmn_pred_rstanarm |>\nggplot(aes(x = .prediction)) +\nstat_histinterval(slab_fill = clrs[4], slab_color = \"white\", outline_bars = TRUE,\nslab_size = 0.5) +\ngeom_vline(xintercept = 4)\n\n## 12.5: Model evaluation\n\nIt\u2019s good.\n\n### 2. How wrong is the model?\n\npp_check() earlier showed that it overpredict values < 10, but it does follow the general shape of the data:\n\npp_check(model_equality_brms, ndraws = 50)\n\n### 3. 
How accurate are the model\u2019s predictions?\n\nWe checked that with LOO, ELPD, and PSIS stuff earlier. It\u2019s all pretty good.\n\n## 12.6: Negative binomial regression for overdispersed counts\n\nThis Poisson stuff is neat as long as the assumptions hold, in particular the requirement that the mean and variance of $$Y$$ are the same. If that\u2019s not the case, things break.\n\nFor instance, we can model the number of books people read per year based on whether they would like to be wise but unhappy, or happy but unwise.\n\ndata(pulse_of_the_nation, package = \"bayesrules\")\n\npulse <- pulse_of_the_nation |>\nfilter(books < 100)\n\nThe number of books people read looks Poisson-y, but with a lot more super low values than we might expect from a regular Poisson distribution\n\npulse |>\nggplot(aes(x = books)) +\ngeom_histogram(binwidth = 5, boundary = 0, color = \"white\", size = 0.25)\n\nThe mean and variance aren\u2019t the same:\n\npulse |>\nsummarise(mean = mean(books),\nvariance = sd(books))\n## # A tibble: 1 \u00d7 2\n## mean variance\n## <dbl> <dbl>\n## 1 10.9 14.1\n\nAnd they\u2019re not the same across a range of ages and wise\/unwise responses:\n\npulse |>\nmutate(age_bins = santoku::chop_quantiles(age,\nc(0.25, 0.5, 0.75))) |>\ngroup_by(age_bins, wise_unwise) |>\nsummarise(mean = mean(books),\nvariance = sd(books))\n## # A tibble: 8 \u00d7 4\n## # Groups: age_bins [4]\n## age_bins wise_unwise mean variance\n## <fct> <fct> <dbl> <dbl>\n## 1 [0%, 25%) Happy but Unwise 9.10 12.0\n## 2 [0%, 25%) Wise but Unhappy 14.8 15.2\n## 3 [25%, 50%) Happy but Unwise 8.34 10.3\n## 4 [25%, 50%) Wise but Unhappy 11.4 15.3\n## 5 [50%, 75%] Happy but Unwise 9.09 13.5\n## 6 [50%, 75%] Wise but Unhappy 12.3 15.1\n## 7 (75%, 100%] Happy but Unwise 11.7 15.5\n## 8 (75%, 100%] Wise but Unhappy 10.9 14.4\n\nThis means that $$Y$$ here is overdispersed, or that it has too much variability.\n\nWe can throw it at a Poisson model and it\u2019ll fit just 
fine:\n\nmodel_pulse_poisson <- brm(\nbf(books ~ age + wise_unwise),\ndata = pulse,\nfamily = poisson(),\nchains = 4, iter = 4000, seed = BAYES_SEED,\nbackend = \"cmdstanr\", refresh = 0\n)\n## Start sampling\n\nBut if we look at a posterior predictive check, we have serious problems\n\n# oh no\npp_check(model_pulse_poisson)\n\nWith negative binomial models, we get to estimate two parameters: $$\\mu$$, which is like Poisson\u2019s $$\\lambda$$, and $$r$$, which is a non-negative \u201creciprocal dispersion\u201d thing:\n\n\\begin{aligned} Y_u &\\sim \\operatorname{NegBin}(\\mu_i, r) \\\\ \\log(\\mu_i) &= \\beta_0 + \\beta_1 X_{i1} + \\beta_2 X_{i2} + \\dots \\\\ \\\\ \\beta_0, \\beta_1, \\beta_2, \\beta_{\\dots} &\\sim \\text{Some prior} \\\\ r &\\sim \\text{Some prior} > 0 \\end{aligned}\n\nmodel_pulse_negbinom <- brm(\nbf(books ~ age + wise_unwise),\ndata = pulse,\nfamily = negbinomial(),\nchains = 4, iter = 4000, seed = BAYES_SEED,\nbackend = \"cmdstanr\", refresh = 0\n)\n## Start sampling\n\nThis fits almost perfectly now!\n\n# oh no\npp_check(model_pulse_negbinom)\n\nThe coefficients are interpreted just like Poisson ones, since negative binomial uses a log link. 
Here are the different scales everything:\n\n\u2022 posterior_predict(), or $$Y$$: Integers of counts of outcome\n\u2022 posterior_linpred(), or $$\\log{\\mu}$$: Logged predicted outcome\n\u2022 posterior_epred() or posterior_linpred(transform = TRUE), or $$\\mu$$: Unlogged (exponentiated) predicted outcome\n\nHere are the coefficients, both logged and unlogged:\n\ncoefs_negbinom <- tidy(model_pulse_negbinom) |>\nselect(-c(effect, component, group)) |>\nmutate(term = get_variables(model_pulse_negbinom)[1:3])\n\ncoefs_negbinom |>\nmutate(term = fct_inorder(term),\nscale = \"Logged\", .before = 1) |>\nbind_rows(\ncoefs_negbinom |>\nmutate(across(c(estimate, conf.low, conf.high), ~exp(.)),\nscale = \"Unlogged\")\n) |>\narrange(term)\n## # A tibble: 6 \u00d7 6\n## scale term estimate std.error conf.low conf.high\n## <chr> <chr> <dbl> <dbl> <dbl> <dbl>\n## 1 Logged b_age 0.000326 0.00240 -0.00443 0.00500\n## 2 Unlogged b_age 1.00 0.00240 0.996 1.01\n## 3 Logged b_Intercept 2.24 0.134 1.98 2.50\n## 4 Unlogged b_Intercept 9.36 0.134 7.24 12.2\n## 5 Logged b_wise_unwiseWisebutUnhappy 0.266 0.0800 0.113 0.424\n## 6 Unlogged b_wise_unwiseWisebutUnhappy 1.30 0.0800 1.12 1.53\n\nWe\u2019ll just plot the unlogged ones because thinking about logs is weird.\n\nmodel_pulse_negbinom |>\ngather_draws(^b_.*, regex = TRUE) |>\nmutate(.value = exp(.value)) |>\nmutate(.variable = factor(.variable,\nlevels = c(\"b_Intercept\", \"b_age\",\n\"b_wise_unwiseWisebutUnhappy\"),\nordered = TRUE)) |>\nggplot(aes(x = .value, fill = .variable)) +\nstat_halfeye(normalize = \"xy\") +\nscale_fill_manual(values = c(clrs[1], clrs[2], clrs[3]), guide = \"none\") +\nfacet_wrap(vars(.variable), scales = \"free_x\")\n## Warning: Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n\nGeneral interpretation, just to get used to working with these coefficients:\n\n\u2022 The unlogged intercept shows the expected count of 
books read when age is 0 and people want to be unwise but happy. The mean posterior value here is 9.3, with a 95% credible interval of 7.2\u201312.2.\n\u2022 The unlogged coefficient for age shows the multiplicative change in the number of books read. A one-year increase in age is associated with a posterior average 0.03% increase in books read (or 1.0003 times), with a 95% credible interval of 0.9956\u20131.005. That\u2019s hardly anything. There\u2019s mostly likely not any sort of relationship here.\n\u2022 The unlogged coefficient for wise but unhappy shows the percent or ratio of books read compared to the happy but unwise comparison group. The posterior mean is 1.3 with a 95% credible interval of 1.12\u20131.53, meaning that people who want to be wise but unhappy read 1.3 times (or 130%) as many books as their unwise happy counterparts. If a happy unwise person reads 10 books a year, a comparable unhappy wise person would be expected to read 13 books (or rather, somewhere between 11.2 and 15.3 books).\n\nHere\u2019s what all these moving parts look like simultaneously:\n\npulse %>%\nggplot(aes(x = age, y = books, color = wise_unwise)) +\ngeom_point(data = pulse, size = 0.5, alpha = 0.8) +\ngeom_line(aes(y = .epred, group = paste(wise_unwise, .draw)),\nsize = 0.5, alpha = 0.3) +\nscale_color_manual(values = c(clrs[4], clrs[3])) +\nlabs(x = \"Age\", y = \"Count of books\", color = NULL) +\ntheme(legend.position = \"bottom\")\n\nThe slope is flat, so age doesn\u2019t matter, but there is a recognizable gap in the happy\/wise question\u2014those who want to be wise read more books.\n\nFor fun, we can also look at the marginal effect of the happy\/wise variable so we don\u2019t have to work with percentages like 130%.\n\nOn average, the marginal effect of wanting to be wise over wanting to be happy is associated with almost 3 more books read per year (1.2\u20134.7 in the 95% credible interval):\n\nmfx_neg_binom <- marginaleffects(model_pulse_negbinom, type = 
\"response\")\ntidy(mfx_neg_binom)\n## type term contrast estimate\n## 1 response age dY\/dX 0.003665476\n## 2 response wise_unwise Wise but Unhappy - Happy but Unwise 2.884521215\n## conf.low conf.high\n## 1 -0.04831247 0.05471462\n## 2 1.21990466 4.68279566\nmfx_neg_binom |>\nposteriordraws() |>\nfilter(contrast != \"dY\/dX\") |>\nggplot(aes(x = draw)) +\nstat_halfeye(fill = clrs[3])\n\nWe can also look at the gap across the full range of ages, though it\u2019s not that exciting or interesting since the age slope is completeley flat. The 3-book boost happens at every possible age.\n\nmfx_pulse_ages <- model_pulse_negbinom |>\nmarginaleffects(newdata = datagrid(age = seq(18, 99, by = 1),\nwise_unwise = c(\"Wise but Unhappy\")),\nvariables = c(\"wise_unwise\")) |>\nposteriordraws()\n\nmfx_pulse_ages |>\nggplot(aes(x = age, y = draw, color = wise_unwise, fill = wise_unwise)) +\nstat_lineribbon(alpha = 0.25) +\nscale_color_manual(values = c(clrs[4], clrs[3])) +\nscale_fill_manual(values = c(clrs[4], clrs[3])) +\nlabs(color = NULL, fill = NULL, x = \"Age\",\ny = \"\u2206 in group means\") +\ntheme(legend.position = \"bottom\")\n## Warning: Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.\n## Unknown or uninitialised column: linewidth.","date":"2023-03-21 13:51:26","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 2, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.638798177242279, \"perplexity\": 11772.836733733966}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, 
\"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2023-14\/segments\/1679296943698.79\/warc\/CC-MAIN-20230321131205-20230321161205-00514.warc.gz\"}"} | null | null |
\section{Introduction}
The representation of a particle as an
idealized point has long been used in physics. In fact, this
representation is central to classical mechanics and serves us
well even in quantum mechanics. In this paper we adopt a viewpoint
in which the finite extent or fuzziness of a particle is taken
into consideration, thereby treating the particle as an extended
object. Such a treatment becomes important and necessary when the
confines of the quantum system in which the particle is placed
become comparable to the finite extent of the particle. The
finite extent or fuzziness of a particle is quantified via its
Compton wavelength which can be defined as the lower limit on how
well a particle can be localized. In nonrelativistic quantum
mechanics, the lower limit is zero since we admit position
eigenkets $|x\rangle$. But in reality, as we try to locate the
particle with greater accuracy we use more energetic probes, say
photons to be specific. To locate a particle to some $\Delta x$ we
need a photon of momentum
\begin{equation}
\Delta p \approx \frac{\hbar}{\Delta x}.
\end{equation}
The corresponding energy of the photon is
\begin{equation}
\Delta E \approx \frac{\hbar c}{\Delta x}.
\end{equation}
If this energy exceeds twice the rest energy of the particle,
relativity allows the production of a particle-antiparticle
pair in the measurement process. So we demand
\begin{equation}
\frac{\hbar c}{\Delta x} \leq 2mc^{2}
\quad \mbox{or} \quad
\Delta x \geq \frac{\hbar}{2mc} \approx \frac{\hbar}{mc}.
\end{equation}
Any attempt to further localize the particle will lead to
pair creation and we will have three (or more) particles
instead of the one we started to locate. Therefore, the
Compton wavelength of a particle measures the distance over
which quantum effects can persist. The point particle
approximation used in nonrelativistic quantum mechanics
suffices to describe the dynamics since the confines of
the quantum systems under consideration are much larger
than the finite extent of the confined particles. For example, in
the analysis of the hydrogen atom, the fuzziness or the
size of the electron is a factor of $\alpha$ smaller than the
size of the atom $a_{0}$:
\begin{equation}
\frac{\hbar/mc}{a_{0}} = \alpha \approx \frac{1}{137}.
\end{equation}
Thus, in the case of the hydrogen atom and in general, for
the quantum theory of atoms, the quantum mechanics of point
particles gives an accurate description.
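As a quick numerical sanity check on the estimate above (a sketch using hard-coded CODATA values rather than any constants library), the electron's reduced Compton wavelength and its ratio to the Bohr radius can be computed directly:

```python
# Reduced Compton wavelength of the electron, hbar / (m c),
# compared against the Bohr radius a_0.  CODATA values hard-coded.
hbar = 1.054571817e-34    # J s
m_e  = 9.1093837015e-31   # kg
c    = 2.99792458e8       # m / s
a_0  = 5.29177210903e-11  # m

lambda_c = hbar / (m_e * c)   # reduced Compton wavelength, ~3.86e-13 m
print(lambda_c)
print(a_0 / lambda_c)         # ~137, i.e. 1 / alpha
```

The ratio reproduces $1/\alpha \approx 137$, as claimed in the text.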
In this paper we develop the Hilbert space representation
theory of the quantum mechanics of extended objects. We use
this representation to demonstrate the quantization of spacetime
following which we analyze two paradigm examples: the fuzzy
harmonic oscillator and the Yukawa potential. In the
second example, the quantum mechanics of extended objects
enables us to predict the phenomenological coupling
constant of the $\omega$ meson as well as the radius of
the repulsive nucleon core.
\section{Quantum Mechanics of Extended Objects}
We have established the necessity for taking into consideration
the nonzero size of a particle. In order to incorporate the
fuzziness or size of a particle into our dynamics, we
introduce the following representation for position and
momentum in one dimension in units where $\hbar = c = 1$.
For position space,
\begin{eqnarray}
X_{f} & = & (Xe^{-P^{2}/m^{2}})
\rightarrow (xe^{-P^{2}/m^{2}}) \nonumber \\
P & \rightarrow & {-i}\frac{d}{dx} \\
\left[ X_f, P \right] & = & i e^{-P^{2}/m^{2}}, \nonumber
\end{eqnarray}
and for momentum space,
\begin{eqnarray}
X_{f} & = & e^{-P^{2}/2m^{2}}Xe^{-P^{2}/2m^{2}}
\rightarrow i e^{-P^{2}/2m^{2}}\frac{d}{dp}e^{-P^{2}/2m^{2}}
\nonumber \\
P & \rightarrow & p \\
\left[X_{f},P \right] & = & i e^{-p^{2}/m^{2}}, \nonumber
\end{eqnarray}
where $(AB) \equiv {(AB + BA)}/{2}$. Symmetrization has also
been employed in the momentum space representation in order to
preserve the Hermiticity of the noncommuting fuzzy position
operator $X_{f}$. In contradistinction to the quantum
mechanics of point particles where the position operator
has a smooth coordinate representation consisting of a sequence
of points, the fuzzy position operator is convolved with a
Gaussian in momentum space which has as its width the Compton
wavelength ${1}/{m}$. The convolution with the Gaussian
has the effect of smearing out these points and in the limit
as the Compton wavelength vanishes we recover the standard
operator assignments of ordinary quantum mechanics. For
simplicity, consider the effect of the fuzzy
position operator $X_{f}$ on an acceptable wavefunction in position space,
that is, one which is square integrable and has the right behavior
at infinity:
\begin{eqnarray}
X_{f}\psi(x) &=& (xe^{-P^{2}/m^{2}}) \psi (x) \nonumber \\
&=& \frac{m}{4\sqrt{\pi}}\,\left[
\int_{-\infty}^{\infty} d\lambda\, x\,
e^{iP\lambda - m^{2}\lambda^{2}/4}\psi(x)\,+
\, \int_{-\infty}^{\infty} d\lambda\,
e^{iP\lambda - m^{2}\lambda^{2}/4}[x\psi(x)] \right] \nonumber \\
& = & \frac{m}{2\sqrt{\pi}} \int_{-\infty}^{\infty}\,
d\lambda\left(x + \frac{\lambda}{2}\right) \psi(x + \lambda)\,
e^{-m^{2}\lambda^{2}/4}.
\end{eqnarray}
The translation of $\psi(x)$ by $\lambda$ and the subsequent
integration over all possible values of $\lambda$ weighted by
a Gaussian measure has the effect of smearing out the position.
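The operator $e^{-P^{2}/m^{2}}$ thus acts as a normalized Gaussian
(Weierstrass) smoothing kernel,
$e^{-P^{2}/m^{2}}\psi(x) = \frac{m}{2\sqrt{\pi}}\int_{-\infty}^{\infty}
d\lambda\, e^{-m^{2}\lambda^{2}/4}\,\psi(x+\lambda)$. A numerical sketch of
this identity for a Gaussian test function (the values of $m$ and the choice
of $\psi$ below are illustrative, not taken from the text):

```python
import numpy as np
from scipy.integrate import quad

m = 2.0  # illustrative mass, in units where hbar = c = 1

def psi(x):
    # Gaussian test wavefunction
    return np.exp(-x**2)

def smeared(x):
    # (m / 2 sqrt(pi)) * Int dl exp(-m^2 l^2 / 4) psi(x + l)
    val, _ = quad(lambda l: np.exp(-m**2 * l**2 / 4) * psi(x + l),
                  -np.inf, np.inf)
    return m / (2 * np.sqrt(np.pi)) * val

def closed_form(x):
    # Convolution of two Gaussians: the variances add
    s = 1.0 + 4.0 / m**2
    return np.exp(-x**2 / s) / np.sqrt(s)

for x in [0.0, 0.7, 1.5]:
    assert abs(smeared(x) - closed_form(x)) < 1e-10
```

The closed form follows from the usual rule that convolving two Gaussians
adds their variances, which makes the smearing interpretation explicit.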
The commutation relation obeyed by $X_{f}$ and $P$ is manifestly
noncanonical and does not depend on the representation. A direct
consequence of this commutation relation is the uncertainty relation
\begin{equation}
\label{f-eqn}
\Delta X_{f}\Delta P\, \geq\, \frac{1}{2}|\langle e^{-P^{2}/m^{2}}
\rangle|.
\end{equation}
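The noncanonical commutator $[X_{f},P] = i\,e^{-P^{2}/m^{2}}$ can be checked
symbolically in the momentum representation by acting on an arbitrary
function $f(p)$; a minimal sympy sketch:

```python
import sympy as sp

p, m = sp.symbols('p m', positive=True)
f = sp.Function('f')(p)
g = sp.exp(-p**2 / (2 * m**2))  # half-width Gaussian factor

def Xf(h):
    # Fuzzy position in the momentum basis: X_f h = i g d/dp (g h)
    return sp.I * g * sp.diff(g * h, p)

# [X_f, P] f = X_f(p f) - p X_f(f)
comm = sp.simplify(Xf(p * f) - p * Xf(f))
assert sp.simplify(comm - sp.I * sp.exp(-p**2 / m**2) * f) == 0
```

The first-derivative terms cancel and only the multiplicative Gaussian
factor survives, independent of $f$.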
Now, for any two observables $A$ and
$B$ which satisfy
$\left[A,B\right]|\psi\rangle = 0$ for some
nontrivial $|\psi\rangle$,
with uncertainties $\Delta A$ and
$\Delta B$ such that
$|{\Delta A}/{\langle A\rangle}|\ll 1$ and
$|{\Delta B}/{\langle B\rangle}|\ll 1$, we have the relation
\begin{equation}
\label{ab-eqn}
\Delta ((AB)) = \langle A\rangle \Delta B + \langle B\rangle \Delta A,
\end{equation}
where again $(AB) \equiv {(AB + BA)}/{2}$. The special case
$\left[A,B\right] = 0$ corresponds to compatible variables.
We observe that whenever simultaneous eigenkets exist
\begin{eqnarray}
\langle AB\rangle &=& \int da\, db\, P(ab)\,ab = \int da\, db\,
P(a)P(b)\, ab \nonumber \\
&=& \langle A\rangle \langle B\rangle
\end{eqnarray}
where $P(ab) = |\langle ab|\psi \rangle|^{2}$ and the proof
of Eq.~(\ref{ab-eqn}) follows.
In our case,
\begin{equation}
\left[X,e^{-P^{2}/m^{2}}\right]|\psi \rangle = 0
\mbox{ only if }|\psi\rangle = {\rm constant}.
\end{equation}
Hence, there
exists at least one nontrivial simultaneous eigenket for which
$[X,e^{-P^{2}/m^{2}}]$ has a zero eigenvalue.
We can always choose this eigenket to establish the validity
of Eq.~(\ref{ab-eqn}) for our operators $X$ and
$e^{-P^{2}/m^{2}}$ along the lines shown above.
As a consequence, we obtain the modified uncertainty
principle (reinserting $\hbar$ for clarity)
\begin{equation}
\Delta X\Delta P \,\geq\, \frac{\hbar}{2}
\,+\, \frac{2\langle X\rangle \langle P\rangle }{m^{2}}(\Delta P)^{2}.
\end{equation}
The uncertainty product goes up because of the fuzziness
we have introduced in the position. Consequently, there exists
a minimal uncertainty in position given by
\begin{equation}
\Delta X_{0} = \frac{2}{m}\sqrt{\langle X\rangle\langle P\rangle\hbar}.
\end{equation}
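The minimal uncertainty follows by minimizing the right-hand side of the
modified uncertainty relation over $\Delta P$; a symbolic sketch (with
$\hbar = 1$ and the product $\langle X\rangle\langle P\rangle$ abbreviated
by a single positive symbol $c$):

```python
import sympy as sp

dP, m, c = sp.symbols('Delta_P m c', positive=True)

# Lower bound on Delta X as a function of Delta P (hbar = 1;
# c stands for <X><P>, assumed positive)
dX = 1 / (2 * dP) + 2 * c * dP / m**2

# Stationary point of the bound
crit = sp.solve(sp.diff(dX, dP), dP)
dP_star = [s for s in crit if s.is_positive][0]
dX0 = sp.simplify(dX.subs(dP, dP_star))

# Minimal position uncertainty: Delta X_0 = (2/m) sqrt(c)
assert sp.simplify(dX0 - 2 * sp.sqrt(c) / m) == 0
```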
The existence of minimal uncertainties and their consequences for
structure were first examined by Kempf, albeit, in a different
context \cite{kempf1,kempf2}. We note that the product
$\langle X\rangle \langle P\rangle$ is in general nonnegative. It
can be made negative by moving the center of coordinates but this
would imply that the Hamiltonian of the underlying system is
translationally invariant such as the free particle or the
particle in a box (for bound systems $\langle P\rangle = 0$). For
all such systems the Hamiltonian does not depend on the position
(or fuzzy position) and incorporating the fuzziness of the
particle into our quantum description is irrelevant to the
dynamics. Hence, the Compton wavelength can be set to zero in
such cases which is the correspondence limit with ordinary quantum
mechanics. If we view the uncertainty product as a measure of the
cell volume of phase space we observe that quantized phase
acquires an added fuzziness and the cell volume no longer has a
uniform value equal to the Planck constant. Fuzzy phase space has
a direct implication for the quantization of spacetime as we will
demonstrate in section \ref{quant}.
In view of the special theory of relativity, particles are
actually located at spacetime points. The introduction of
smearing in the spatial direction demands that we introduce
fuzziness in the time direction, otherwise, the instantaneous
annihilation of a particle of finite extent would violate
causality. As was the case with the fuzzy position the smearing
is achieved by convolving the time coordinate with a Gaussian in
the zeroth component of the momentum operator (the Hamiltonian)
giving rise to
\begin{eqnarray}
T_{f} &=& (Te^{-H^{2}/m^{2}})\rightarrow (te^{-H^{2}/m^{2}})\\
H &\rightarrow& i\frac{d}{dt}.
\end{eqnarray}
We observe that in our representation we choose to view time as
an operator on the same footing as the position operator. This is
in keeping with the modern unified view of spacetime and is
further evidenced when we discuss the nontrivial commutation
relations between the 4-positions. The smeared time operator
$T_{f}$ reverts to its smooth time coordinate representation in
the limit as the characteristic times of the quantum system become
much longer than the flight time of the particle. The time of
flight of a particle is defined as the time it takes to traverse a
distance of the Compton wavelength at the maximally allowable
speed $c$. Due to the fuzziness we have introduced in the time
direction the energy-time uncertainty principle gets modified in a
manner analogous to the phase space uncertainty product giving
rise to
\begin{equation}
\Delta H\Delta T \, \geq \, \frac{\hbar}{2} \,+\, \frac{2\langle
H\rangle\langle T\rangle}{m^{2}}(\Delta H)^{2}.
\end{equation}
This relation implies a minimal uncertainty in time given by
\begin{equation}
\Delta T_{0} = \frac{2}{m}\sqrt{\langle H\rangle \langle T \rangle\hbar}
\end{equation}
which is expected since the time operator has been smeared out.
The product $\langle H\rangle\langle T\rangle$ is in general
non-negative. It can be made negative by moving the center of the
time coordinate but this would imply that the Hamiltonian of the
underlying system obeys time translational invariance. For all
such systems the Hamiltonian is time independent and incorporating
the time smearing into our quantum description is irrelevant to
the dynamics. Hence, the Compton wavelength can be set to zero in
such cases which is the correspondence limit with ordinary quantum
mechanics. Thus, by introducing these self-adjoint operator
representations for position and time we are able to quantify and
characterize the finite extent of a particle. We now proceed to
formulate the Hilbert space representation theory of these
operators.
\section{Hilbert Space Representation}
The fuzzy position operator $X_{f}$ and the momentum operator P
satisfy the uncertainty relation Eq.~(\ref {f-eqn}). This
relation does not imply a minimal uncertainty in the fuzzy
position or the momentum. As a consequence, the eigenstates of
the self-adjoint fuzzy position and momentum operators can be
approximated to arbitrary precision by sequences
$|\psi_{n}\rangle$ of physical states of increasing localization
in position or momentum space:
\begin{equation}
\lim_{n \rightarrow \infty}\Delta X_{f_{|\psi_{n}\rangle}} = 0 \quad
\mbox{or} \quad
\lim_{n \rightarrow \infty}\Delta P_{|\psi_{n}\rangle} = 0.
\end{equation}
Hence, the fuzzy position and momentum operators admit a
continuous position or momentum space representation in the
Hilbert space. Since the momentum operator is identical to the
one used in ordinary quantum mechanics it has the usual orthogonal
plane wave eigenstates. The eigenvalue problem of the fuzzy
position operator
\begin{equation}
X_{f}\psi = \lambda\psi
\end{equation}
can be written in the momentum basis (which we choose for
convenience) as
\begin{equation}
e^{-p^{2}/2m^{2}}\frac{d}{dp}(e^{-p^{2}/2m^{2}}\psi) = -i\lambda\psi.
\end{equation}
Defining the function $\phi = e^{-p^{2}/2m^{2}}\psi$ and
introducing the measure transformation $dr = e^{p^{2}/m^{2}}dp$ we
obtain the eigensolutions as
\begin{equation}
\psi(p) = \frac{1}{\sqrt{2\pi}}\,e^{p^{2}/2m^{2}\,+\,i\lambda r},
\end{equation}
where freedom in scale has been used to normalize the solution.
The eigenfunctions are orthogonal with respect to the transformed
measure $L^{2}(e^{-p^{2}/m^{2}}dr)$ because
\begin{equation}
\langle \psi_{\lambda}(p)|\psi_{\lambda'}(p)\rangle = \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{i(\lambda - \lambda')r}dr = \delta(\lambda - \lambda').
\end{equation}
The inner product
$\langle\psi_{\lambda}(p)|\psi_{\lambda'}(p)\rangle$ is divergent
in the space $L^{2}(dp)$ but is equal to the Dirac delta function
in the space $L^{2}(e^{-p^{2}/m^{2}}dr)$. As $p$ ranges from
$-\infty$ to $\infty$ the volume element $dp$, under the measure
transformation, is squeezed into a Gaussian width times the line
element $dr$, and consequently the orthogonality of the fuzzy
position eigenstates is preserved. We note that had we tried to
construct the formal position eigenstates (eigenstates of $X$) we
would have had to sacrifice orthogonality due to the appearance of
the minimal uncertainty in position. The eigenfunctions of the
fuzzy position operator in the position representation will be
Fourier transforms of the eigensolutions in the momentum
representation since the Fourier transform of an $L^{2}$ function
will be an $L^{2}$ function in the same measure.
\section{Translational and Rotational Invariance}
We will now examine the behavior of the quantum mechanics of
extended objects under translations and rotations and solve the
eigenvalue problem of fuzzy angular momentum.
\subsection{Translational Invariance}
Under a translation of the coordinate $x \rightarrow x + \epsilon$
we have the fuzzy translation
\begin{eqnarray}
\langle X_{f}\rangle &\rightarrow & \langle X_{f}\rangle +
\epsilon\langle e^{-P^{2}/m^{2}}\rangle , \nonumber \\ \langle
P\rangle & \rightarrow & \langle P\rangle.
\end{eqnarray}
In the passive transformation picture
\begin{eqnarray}
\label{t-eqn}
T^{\dagger}(\epsilon)X_{f}T(\epsilon) & = &
X_{f} + \epsilon \,e^{-P^{2}/m^{2}}, \nonumber \\
T^{\dagger}(\epsilon)PT(\epsilon) &=& P,
\end{eqnarray}
where $T(\epsilon)$ is the translation operator which translates
the state $|\psi\rangle$. Expanding $T(\epsilon)$ to first order
and feeding into Eq.~(\ref{t-eqn}) we obtain
\begin{equation}
[X_{f},G] = ie^{-P^{2}/m^{2}},
\end{equation}
where $G$ is the generator of infinitesimal translations. Thus,
the momentum is still the generator of fuzzy spatial translations
and analogously, we find that the Hamiltonian is the generator of
fuzzy time translations. Since these are the same generators as
found in ordinary quantum mechanics, we can conclude by similar
reasoning and by Ehrenfest's theorem that fuzzy space (time)
translational invariance will ensure the time independence of the
momentum (Hamiltonian).
\subsection{Rotational Invariance}
Let us denote the operator that rotates two-dimensional vectors by
$R(\phi_{0}\hat{k})$ for a rotation by $\phi_{0}$ about the
z-axis. Let $U[R]$ be the operator associated with this rotation.
For an infinitesimal rotation $\epsilon_{z}\hat{k}$ we set
\begin{equation}
U[R] = I - i\epsilon_{z}L_{f_{z}},
\end{equation}
where $L_{f_{z}}$ is the generator of fuzzy rotations. We can
determine $L_{f_{z}} = X_{f}P_{y} - Y_{f}P_{x}$ by feeding this
$U[R]$ into the passive transformation equations for an
infinitesimal rotation:
\begin{equation}
U^{\dagger}[R]X_{f}U[R] = X_{f} - Y_{f}\epsilon_{z},
\end{equation}
and so on. $L_{f_{z}}$ is conserved in a problem with rotational
invariance: if
\begin{equation}
U^{\dagger}[R]H(X_{f},P_{x};Y_{f},P_{y})U[R] = H(X_{f},P_{x};Y_{f},P_{y})
\end{equation}
it follows (by choosing an infinitesimal rotation) that
\begin{equation}
[L_{f_{z}},H] = 0 \quad \mbox{or}\quad \langle {\dot L_{f_{z}}}\rangle = 0
\end{equation}
by Ehrenfest's theorem.
\subsection{The eigenvalue problem of $L_{f_{z}}$}
In the momentum basis the two dimensional fuzzy angular momentum
operator can be written as
\begin{equation}
L_{f_{z}} \rightarrow
e^{-p^{2}/2m^{2}}\left(i\frac{\partial}{\partial p_{x}}e^{-p^{2}/2m^{2}}p_{y}
- i\frac{\partial}{\partial p_{y}}e^{-p^{2}/2m^{2}}p_{x}\right),
\end{equation}
where $p^{2} = p_{x}^{2} + p_{y}^{2}$. This is the correct
generalization of the smeared position operator to higher
dimensions (in this case two) as can be seen by letting $X_{f}$
act on a wavefunction in two dimensions. We can further simplify
the derivatives in $L_{f_{z}}$ and switch to polar coordinates to
obtain
\begin{equation}
L_{f_{z}} \rightarrow
-ie^{-p^{2}/2m^{2}}\frac{\partial}{\partial p_{\phi}}e^{-p^{2}/2m^{2}}.
\end{equation}
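Because the Gaussian factors are purely radial in momentum, both the
Cartesian and the polar forms reduce to
$-i\,e^{-p^{2}/m^{2}}\partial_{p_{\phi}}$, where
$\partial_{p_{\phi}} = p_{x}\partial_{p_{y}} - p_{y}\partial_{p_{x}}$. A
sympy sketch verifying this on an arbitrary $f(p_{x},p_{y})$:

```python
import sympy as sp

px, py, m = sp.symbols('p_x p_y m', positive=True)
f = sp.Function('f')(px, py)
g = sp.exp(-(px**2 + py**2) / (2 * m**2))  # radial Gaussian factor

# Cartesian form: L_fz = X_f P_y - Y_f P_x in the momentum basis
Lf = sp.I * g * (sp.diff(g * py * f, px) - sp.diff(g * px * f, py))

# Polar form: -i e^{-p^2/m^2} d/d(p_phi), with
# d/d(p_phi) = p_x d/dp_y - p_y d/dp_x
Lf_polar = -sp.I * g**2 * (px * sp.diff(f, py) - py * sp.diff(f, px))

assert sp.simplify(Lf - Lf_polar) == 0
```

The terms in which the derivatives hit the Gaussians cancel pairwise, which
is why the angular generator simply picks up a multiplicative smearing
factor.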
The eigenvalue problem of $L_{f_{z}}$,
\begin{equation}
L_{f_{z}}\psi(p_{\rho},p_{\phi}) = l_{f_{z}}\psi(p_{\rho},p_{\phi}),
\end{equation}
can be written in the momentum basis as
\begin{equation}
-ie^{-p^{2}/2m^{2}}\frac{\partial}{\partial p_{\phi}}
(\psi e^{-p^{2}/2m^{2}}) = l_{f_{z}}\psi.
\end{equation}
Defining $\phi = \psi e^{-p^{2}/2m^{2}}$ and using the transformed measure,
\begin{equation}
dp_{\phi} = \frac{1}{2\pi}\left[\frac{\sqrt{\pi}m}{2i}\,
\mathrm{erf}(2\pi i)\right] e^{-p_{\phi}^{2}/m^{2}}\, dr
\end{equation} we arrive at
\begin{equation}
\psi(p_{\rho},p_{\phi}) \,\sim\,
e^{il_{f_{z}}e^{p_{\rho}^{2}/m^{2}}r\, +\, p^{2}/2m^{2}},
\end{equation}
where the numerical factor in the measure transformation has been
chosen so that as $p_{\phi}$ ranges from 0 to $2\pi$,
$r$ also ranges from 0 to $2\pi$. The eigenfunctions are orthogonal
with respect to the transformed measure
$L^{2}(e^{-p_{\phi}^{2}/m^{2}}p_{\rho}dp_{\rho}dr)$ where the
numerical factor has been suppressed. We observe that
$l_{f_{z}}$ seems to be arbitrary and even complex since the range
of $r$ is restricted. The fact that complex eigenvalues enter the
solution signals that we are overlooking the Hermiticity
constraint. Imposing this condition we have
\begin{equation}
\langle \psi_{1}|L_{f_{z}}|\psi_{2}\rangle = \langle
\psi_{2}|L_{f_{z}}|\psi_{1}\rangle^{*},
\end{equation}
which becomes in the momentum basis
\begin{equation}
\int_{0}^{\infty}\int_{0}^{2\pi}
\phi_{1}^{*}\left(-i\frac{\partial}{\partial p_{\phi}}\right)\phi_{2}\,
p_{\rho}dp_{\rho}dp_{\phi} =
\left[\int_{0}^{\infty}\int_{0}^{2\pi}
\phi_{2}^{*}\left(-i\frac{\partial}{\partial p_{\phi}}\right)\phi_{1}\,
p_{\rho}dp_{\rho}dp_{\phi}\right]^{*},
\end{equation}
where $\phi = \psi e^{-p^{2}/2m^{2}}$. If this requirement is to
be satisfied by all $\phi_{1}$ and $\phi_{2}$, one can show (by
integrating by parts) that it is enough if each
$\phi(p_{\rho},p_{\phi})$ obeys
\begin{equation}
\phi(p_{\rho},0) = \phi(p_{\rho},2\pi).
\end{equation}
If we impose this constraint on the $L_{f_{z}}$ eigenfunctions we
find that the eigenvalues $l_{f_{z}}$ have to obey the following
relation
\begin{equation}
l_{f_{z}} = e^{-p_{\rho}^{2}/m^{2}}k,
\end{equation}
where $k$ is an integer. The fuzzy angular momentum is equal to
an integral multiple of $\hbar$ times a smearing factor. This is
an example of smeared or fuzzy quantization and as the Compton
wavelength vanishes we regain the usual relation for ordinary
quantized angular momentum.
\section{Quantization of Spacetime}
\label{quant} The raised phase space uncertainty product which we
have discussed before implies that phase space acquires an added
fuzziness due to the smearing of the position operator. By
considering the algebra of smooth functions over fuzzy phase space
generated by fuzzy positions and momenta, and by using the
Gel'fand and Naimark reconstruction theorem one can recover all
information about the underlying space. However, since we already
know the mathematical form of the fuzzy position operator, we use
a simpler approach and directly construct the nontrivial
commutators between the fuzzy positions. In the momentum basis
the commutator between fuzzy positions in 4-dimensional spacetime
is
\begin{equation}
\left[X_{f_{\mu}},X_{f_{\nu}}\right] =
-e^{-p^{2}/2m^{2}}(\partial_{p_{\mu}}e^{-p^{2}/m^{2}}\partial_{p_{\nu}}
-
\partial_{p_{\nu}}e^{-p^{2}/m^{2}}\partial_{p_{\mu}})e^{-p^{2}/2m^{2}}.
\end{equation}
The derivative terms can be further simplified and introducing
$X_{\mu} \rightarrow i\partial_{p_{\mu}}$ and $P \rightarrow p$ we
obtain
\begin{equation}
\label{xf-eqn}
\left[X_{f_{\mu}},X_{f_{\nu}}\right] =
\frac{i}{m^{2}}e^{-P^{2}/2m^{2}}(P_{\nu}X_{\mu} -
P_{\mu}X_{\nu})e^{-P^{2}/2m^{2}}.
\end{equation}
The nontrivial commutation relation between the fuzzy positions implies
that fuzzy spacetime is quantized. When the confines are much larger
than the Compton wavelength, that is, when we are viewing a larger patch
of spacetime, ${p^{2}}/{m^{2}} \ll 1$, and
the Gaussian (smearing) factors in Eq.~(\ref{xf-eqn}) become negligible.
In this limit $X_{f_{\mu}} \rightarrow X_{\mu}$, and we obtain
\begin{equation}
\label{q-eqn}
[X_{f_{\mu}},X_{f_{\nu}}] \rightarrow
[X_{\mu},X_{\nu}] = \frac{i}{m^{2}}(P_{\nu}X_{\mu} -
P_{\mu}X_{\nu}).
\end{equation}
Thus, as long as the Compton wavelength is nonzero, the ordinary
4-positions also exhibit a nontrivial commutation relation given
by Eq.~(\ref{q-eqn}). This result is identical to the one obtained
by Snyder in 1947~\cite{snyder}. In his paper Snyder demonstrates
that the assumption of Lorentz covariance does not exclude a
quantized spacetime which he develops by defining the 4-positions
in terms of the homogeneous (projective) coordinates of a de Sitter
space. In the limit as the natural unit of length (the Compton
wavelength) vanishes our quantized spacetime changes to the
ordinary continuous spacetime and the commutators revert to their
standard values. Therefore, our formulation of the quantum
mechanics of extended objects implies that spacetime is quantized
and that it has a Lorentz covariant structure.
\section{Fuzzy (extended object) Harmonic Oscillator}
Before we study the quantum mechanical fuzzy harmonic oscillator
let us understand the classical analog of such an oscillator.
Classically, we can model an extended object as a point mass
connected to a nonlinear spring of stiffness constant, say
$k_{1}$. When this spring-mass system is connected to another
linear spring of stiffness constant, say $k_{2}$ we essentially
have a classical, one dimensional, extended object oscillator.
When the wavelength of oscillation is small compared to the size
of the extended object (in this case the length of the nonlinear
spring of stiffness constant $k_{1}$) the oscillator will exhibit
harmonic behavior since the small oscillations do not disturb the
configuration of the extended object. As the wavelength of
oscillation becomes comparable to the size of the extended object,
anharmonic vibrations set in. Again, as the wavelength of
oscillation becomes much larger than the size of the extended
object, the point particle approximation becomes tenable and
harmonic vibrations are recovered. We would expect the quantum
version of the extended object oscillator to exhibit similar
behavior albeit with quantized energy levels. In the first
regime, when the wavelength of oscillation is small compared to
the size of the extended object, since small oscillations do not
disturb the configuration of the extended object to any
appreciable extent we will obtain the usual quantized energy
levels of the simple harmonic oscillator. It is in the second and
third regimes where we would need to apply the quantum mechanics
of extended objects. The Hamiltonian for a one dimensional fuzzy
harmonic oscillator can be written as
\begin{equation}
H = \frac{P^{2}}{2m} + \frac{1}{2}m\omega^{2}X_{f}^{2}.
\end{equation}
Introducing the operator representation for the fuzzy position and
momentum in the momentum basis and simplifying terms, we obtain
\begin{equation}
\label{de-eqn} \frac{1}{2}m\omega^{2}
\left[\frac{d^{2}\phi}{dp^{2}} - (\frac{p^{2}}{m^{4}} -
\frac{1}{m^{2}})\phi\right]
= (\frac{p^{2}}{2m} - E)e^{2p^{2}/m^{2}}\phi,
\end{equation}
where $\phi = e^{-p^{2}/m^{2}} \psi$, $H\psi = E\psi$, and $\phi$
lies in $L^2(dp)$. When the wavelength of oscillation (the
confines) is large compared to the size of the extended object,
${p^{2}}/{m^{2}} \ll 1$, and we can approximate
$e^{2p^{2}/m^{2}} \approx 1 + {2p^{2}}/{m^{2}}$. In this
approximation Eq.~(\ref{de-eqn}) can be rewritten as:
\begin{equation}
\frac{d^{2}\phi}{dp^{2}} + 2m({\tilde E} -\frac{1}{2}m\Omega^{2}p^{2})\phi = 0,
\end{equation}
where
\begin{eqnarray}
2m{\tilde E} &=& \frac{2E}{m\omega^{2}} + \frac{1}{m^{2}}, \\
m^{2}\Omega^{2} &=&\frac{-4E}{m^{3}\omega^{2}} +
\frac{1}{m^{4}} + \frac{1}{m^{2}\omega^{2}}.
\end{eqnarray}
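With these definitions the reduction can be verified symbolically; the sympy
sketch below substitutes $e^{2p^{2}/m^{2}} \to 1 + 2p^{2}/m^{2}$ into the
exact equation and confirms that it matches the oscillator form up to the
single $p^{4}$ term dropped at this order:

```python
import sympy as sp

p, m, w, E = sp.symbols('p m omega E', positive=True)
phi = sp.Function('phi')(p)

# Exact equation with e^{2p^2/m^2} replaced by 1 + 2p^2/m^2
lhs = sp.Rational(1, 2) * m * w**2 * (sp.diff(phi, p, 2)
        - (p**2 / m**4 - 1 / m**2) * phi)
rhs = (p**2 / (2 * m) - E) * (1 + 2 * p**2 / m**2) * phi

# Dummy energy and frequency as defined above
Et = (2 * E / (m * w**2) + 1 / m**2) / (2 * m)
Om2 = (-4 * E / (m**3 * w**2) + 1 / m**4 + 1 / (m**2 * w**2)) / m**2

# Oscillator form, rescaled by the same overall factor (1/2) m omega^2
osc = sp.Rational(1, 2) * m * w**2 * (sp.diff(phi, p, 2)
        + 2 * m * (Et - sp.Rational(1, 2) * m * Om2 * p**2) * phi)

# The two agree up to the dropped p^4 term
residual = sp.expand(lhs - rhs - osc)
assert sp.simplify(residual + p**4 / m**3 * phi) == 0
```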
This is simply the differential equation for a simple harmonic
oscillator in terms of the dummy energy ${\tilde E}$ and frequency
$\Omega$. For well behaved solutions we require the quantization
condition
\begin{equation}
{\tilde E}_{n} = (n + \frac{1}{2})\Omega,\; n = 0,1,2,\ldots.
\end{equation}
Re-expressing this relation in terms of the physical energy $E$
and frequency $\omega$ and retaining terms up to $o(\hbar^{2})$,
we obtain
\begin{equation}
\label{es-eqn}
E_{n} = (n + \frac{1}{2})\omega - \frac{\omega^{2}}{2m},\; n = 0,1,2,\ldots.
\end{equation}
As we would expect, the fuzzy particle exhibits harmonic behavior
when the wavelength of oscillation is large compared to the size
of the particle. In this approximation, the eigenvalue spectrum
of the fuzzy harmonic oscillator is equivalent to the spectrum of
a displaced simple harmonic oscillator. The shift in the energy
spectrum can be understood by observing that in the classical
spring-mass model, the extended object (the nonlinear spring)
would undergo compression due to the oscillations of the linear
spring thereby displacing the equilibrium position. The quantum
counterpart exhibits the same behavior and when $\omega \ll m$ in
Eq.~(\ref{es-eqn}), that is, when the point particle approximation
becomes tenable we obtain the eigenspectrum of the simple harmonic
oscillator. In the classical analog this would mean that, at
sufficiently large oscillation wavelengths the compression of the
nonlinear spring becomes insignificant. Retaining terms up to
$o(\hbar^{2})$, the eigenfunctions of the harmonic oscillator in
this approximation are given by:
\begin{equation}
\psi(p)\,\sim\, e^{(p^{2}/m^{2})(1 -
\frac{m}{2\omega})}H_{n}\!\left[\frac{p}{\sqrt{m\omega}}\right],
\end{equation}
where $H_{n}$ are the Hermite polynomials. Since $\psi$ lies in
$L^2(e^{-2p^2/m^2}dp)$, the eigenfunctions will be
normalizable.
By inserting these approximate solutions into the exact differential
equation Eq.~(\ref{de-eqn}) we find that the residual contains no
derivative terms, and hence the approximate solutions remain close to
the exact ones.
If we include higher values of momenta in our approximation and write
$
e^{2p^{2}/m^{2}} \approx 1 + {2p^{2}}/{m^{2}} + {2p^{4}}/{m^{4}},
$
we obtain the differential equation
\begin{equation}
\frac{d^{2}\phi}{dp^{2}} + 2m(\frac{\alpha}{2m} -
\frac{\beta}{2m}p^{2} - \frac{\gamma}{2m}p^{4})\phi = 0,
\end{equation}
where
\begin{eqnarray}
\alpha &=& \frac{2E}{m\omega^{2}} + \frac{1}{m^{2}}, \\
\beta &=&
\frac{-4E}{m^{3}\omega^{2}} + \frac{1}{m^{4}} +
\frac{1}{m^{2}\omega^{2}}, \\ \gamma &=& \frac{2}{m^{4}\omega^{2}}
- \frac{4E}{m^{5}\omega^{2}}.
\end{eqnarray}
This is the differential equation for an anharmonic oscillator.
As we would expect when higher momentum values become important or
equivalently as the wavelength of oscillation becomes comparable
to the size of the fuzzy particle, anharmonic vibrations set in.
We can compute the eigenspectrum of the anharmonic oscillator
using perturbation theory. We note that the perturbation
expansion breaks down for some large enough $n$. Retaining terms
up to $o(\hbar^{2})$ the eigenspectrum is found to be
\begin{equation}
E_{n} = (n + \frac{1}{2})\omega - \frac{\omega^{2}}{2m} +
\frac{3\omega^{2}}{4m}(1 + 2n + 2n^{2}), \; n = 0,1,2,\ldots.
\end{equation}
Figure 1 shows a plot of the first two anharmonic oscillator
eigenfunctions. For comparison the first two harmonic oscillator
eigenfunctions are also shown. The anharmonic oscillator
eigenfunctions have a steeper slope because the particle is placed
in a stronger potential as compared to the harmonic oscillator
potential. If we include even higher values of momenta in our
approximation we find that the anharmonicity increases and in the
limit of large quantum numbers our quantum descriptions pass
smoothly to their classical counterparts. Therefore, the quantum
mechanics of extended objects provides a description of the fuzzy
harmonic oscillator which augments our classical intuition. Such
a description could be useful when we study harmonic excitations
of quasiparticles which cannot be localized to arbitrary
precision. The quantum mechanics of extended objects can also be
used to describe compound particles such as baryons or mesons in
situations where their nonzero size matters but the details of the
internal structure do not contribute. One such situation is the
description of the nucleon-nucleon interaction at very short
distances which we proceed to examine.
\section{The Yukawa Potential}
At present the physics of the nucleon-nucleon interaction can be
divided into three major regions\cite{weise}
\begin{enumerate}
\item The {\it long-distance} region $r \geq 2$ fm $\approx
1.5m_{\pi}^{-1}$ where one-pion exchange dominates and the
quantitative behavior of the potential is very well established;
\item The {\it intermediate} region $0.8$ fm $\leq r \leq 2$ fm
where the dynamical contributions from two-pion exchange
(effective boson exchange) compete with or exceed the one-pion
exchange potential;
\item The {\it inner} region $r \leq 0.8$ fm
has a complicated dynamics not readily accessible to a
quantitative theoretical description. This region is expected to
be influenced by heavy mesons and/or by quark/gluon degrees of
freedom. It is usually approached in a phenomenological way.
\end{enumerate}
Moreover, the inner region contains a repulsive hard core of
radius $0.6$ fm which was first proposed by Jastrow in 1951 in
order to fit nucleon-nucleon scattering data\cite{jastrow}. The
presence of a repulsive nucleon core is necessary to explain the
saturation of nuclear forces. This short range and repulsive
nucleon force is believed to be mediated by an $\omega$ meson of
mass $782$ MeV and the intermediate range attractive nucleon force
is mediated by a $\sigma$ meson (effective boson) of mass $550$
MeV\cite{walecka}. Once the masses are fixed, the coupling
constants which measure the strength of the coupling between a
meson and a baryon are chosen to reproduce nucleon-nucleon
scattering phase shifts and deuteron properties. These
phenomenological coupling constants\cite{walecka} are found to be
${g_{\omega}^{2}}/{4\pi} = 10.83$ and ${g_{\sigma}^{2}}/{4\pi} =
7.303$. It is our objective to theoretically determine the radius
of the repulsive nucleon core and to reproduce the
phenomenological $\omega$ meson coupling constant using the
quantum mechanics of extended objects which becomes relevant to
the dynamics in the inner region due to the finite extent of the
nucleon.
In order to reproduce consistent results we will focus attention
on the bound state nucleon-nucleon interaction, namely, the
deuteron. The deuterium nucleus ($A = 2, Z = N = 1$) is a bound
state of the neutron-proton system, into which it may be
disintegrated by irradiation with $\gamma$ rays of energy above
the binding energy\cite{sachs} of $2.226$ MeV. The ground state
of the deuteron is a triplet $S$ state and it has no excited
states. The force between the proton and the neutron can be
described in good approximation by a potential energy function of
the form
\begin{equation}
V(r) = -V_{0}\frac{e^{-r/r_{0}}}{r/r_{0}}.
\end{equation}
This is the well known Yukawa potential and is central to the
mesonic theory of nuclear forces. The range of the force $r_{0}$
is equal to ${1}/{\mu}$, where $\mu$ is the mass of the associated
meson and the strength $V_{0}$, or depth of the potential well is
connected with the strength of the coupling between the meson and
the nucleon field. In the center-of-mass coordinates the
Hamiltonian for the $S$ state of the deuteron is
\begin{equation}
H = \frac{p^{2}}{2m} +V(r),
\end{equation}
where $m$ is the reduced mass of the deuteron and $r$ determines
the neutron-proton separation. For ease of comparison with the
quantum mechanics of extended objects in which the momentum basis
is more convenient, we can transcribe the Hamiltonian to the
momentum basis by virtue of the exchange transformation
\begin{equation}
r \rightarrow pr_{0}^{2}, \quad {\rm and } \quad p \rightarrow -r/r_{0}^{2}.
\end{equation}
The exchange transformation is a canonical transformation and does
not affect the dynamics\cite{goldstein}. The Hamiltonian in the
momentum basis is
\begin{equation}
H = \frac{r^{2}}{2mr_{0}^{4}} + V(p),
\end{equation}
where ${\bf r} \rightarrow i\nabla_{\bf p}$ is the position operator and
$V(p) = -V_{0}{e^{-pr_{0}}}/{pr_{0}}$. The binding energy
$E_{0} = -2.226$ MeV can be estimated by means of the variational principle
using the simple trial wavefunction
\begin{equation}
\psi(p) = e^{-\alpha pr_{0}},
\end{equation}
in which we treat $\alpha$ as a variable parameter. Our choice of
the trial wavefunction is motivated by the fact that we expect the
ground state wavefunction to have no angular momentum, no nodes,
and for $p\psi(p)$ to vanish as $p \rightarrow \infty$ as required
for bound states. The variational method determines the energy as
\begin{equation}
E = \frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle}.
\end{equation}
The energy $E$ serves as an upper bound on the ground state energy $E_{0}$. If we substitute $E_{0} = -2.226$ MeV for $E$ we can perform an approximate calculation of the relation between $V_{0}$ and $r_{0}$ (range-depth relation) that must hold if the potential function $V(p)$ is to give the value $E_{0} = -2.226$ MeV for the binding energy. Figure $2$ shows a plot of the range-depth relation for the Yukawa potential (deuteron) as determined by this method. By comparing the values of $V_{0}$ for various values of $r_{0}$ with the results of an exact calculation using numerical integration we are able to estimate the accuracy of our approximate result. The approximate result is within a few percent of the exact result and the error decreases with increasing $r_{0}$~\cite{sachs}. Therefore, our choice of the trial wavefunction is justified.
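This variational estimate is straightforward to reproduce numerically.
Working in position space (equivalent, by the exchange transformation, to
the momentum-space calculation) with the trial wavefunction $e^{-\alpha r}$,
the standard expectation values are $\langle T\rangle =
\hbar^{2}\alpha^{2}/2m$ and $\langle V\rangle =
-4V_{0}r_{0}\alpha^{3}/(2\alpha + 1/r_{0})^{2}$. The sketch below is an
illustrative reconstruction, not the authors' code; it finds the depth
$V_{0}$ whose variational bound equals $-2.226$ MeV for $r_{0} = 1.43$ fm:

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

HBARC = 197.327          # MeV fm
MRED = 938.92 / 2        # approximate reduced mass of the deuteron, MeV
R0 = 1.43                # fm, one-pion range
E0 = -2.226              # MeV, deuteron binding energy

def energy(alpha, V0):
    # E(alpha) = hbar^2 alpha^2 / 2m - 4 V0 r0 alpha^3 / (2 alpha + 1/r0)^2
    kinetic = HBARC**2 * alpha**2 / (2 * MRED)
    potential = -4 * V0 * R0 * alpha**3 / (2 * alpha + 1 / R0)**2
    return kinetic + potential

def ground_state(V0):
    # Variational upper bound: minimize over the parameter alpha (1/fm)
    res = minimize_scalar(lambda a: energy(a, V0), bounds=(1e-3, 5.0),
                          method='bounded')
    return res.fun

# Depth V0 for which the variational bound equals the binding energy
V0 = brentq(lambda v: ground_state(v) - E0, 20.0, 100.0)
print(f"V0 = {V0:.1f} MeV")
assert 40.0 < V0 < 60.0
```

Repeating the root-find over a grid of $r_{0}$ values traces out the
range-depth curve of figure 2.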
Let us now analyze the same potential problem using the quantum
mechanics of extended objects. In the momentum basis the fuzzy
Hamiltonian for the $S$ state of the deuteron is
\begin{equation}
H = \frac{r_{f}^{2}}{2mr_{0}^{4}} + V(p),
\end{equation}
where
\begin{equation}
{\bf r}_f \rightarrow i\, e^{-p^{2}/2m^{2}}\nabla_{\bf p}e^{-p^{2}/2m^{2}}
\end{equation}
is the fuzzy position operator which now determines the
neutron-proton separation. Figure $3$ shows a plot of the $S$
state eigenfunctions as a function of momentum for $r_{0} = 1.43$
fm, which corresponds to a $\pi$ meson of mass $139.6$ MeV, and for
$r_{0} = 0.3596$ fm, which corresponds to a $\sigma$ meson of mass
$550$ MeV. The eigenfunctions obtained from ordinary quantum
mechanics are also shown for comparison. The eigenfunctions
obtained from the quantum mechanics of extended objects are pushed
out in comparison to the usual eigenfunctions implying that there
is a repulsive component to the potential which has the effect of
pushing out the eigenfunctions as at the edge of an infinite well
(compare with figure 1). By examining the plots of $\phi(p) =
e^{-p^{2}/m^{2}}\psi(p)$ (figure 4 shows one such plot for $r_{0}
= 1.43$ fm) where $\psi(p)$ are the eigenfunctions obtained from
the quantum mechanics of extended objects, we observe that
$\phi(p)$ lies in $L^{2}(d^{3}p)$. Therefore, the eigenfunctions
obtained from the extended object analysis are normalizable with
respect to $L^{2}(e^{-2p^{2}/m^{2}}d^{3}p)$. This motivates us to
choose as our trial wavefunction
\begin{equation}
\label{trial}
\psi(p) = e^{p^{2}/m^{2} - \alpha pr_{0}}.
\end{equation}
The normalizability criterion in this measure ensures that
\begin{equation}
e^{-p^{2}/m^{2}}p\psi(p) \rightarrow 0 \mbox{ as } p \rightarrow \infty
\end{equation}
as required for bound states (and as is the case with our trial
wavefunction). Furthermore, when the confines are large
($p^{2}/m^{2} \ll 1$), $\psi(p)$ in Eq.~(\ref{trial}) passes
smoothly into the trial wavefunction we had used when we applied
ordinary quantum mechanics and which had yielded an accurate
range-depth relation. Hence, our choice of the trial wavefunction
is justified and with the given volume element we can determine
the approximate range-depth relation that must hold if the
potential function $V(p)$ is to give the value $E_{0} = -2.226$
MeV for the binding energy. Numerical calculations performed in
Mathematica reveal the range-depth relation shown in figure 5. The
strength of the potential or depth of the well $V'_{0}$ in figure
5 is lower than the strength of the potential $V_{0}$ obtained
from ordinary quantum mechanics (figure 2) particularly for
smaller values of $r_{0}$. The existence of a repulsive component
to the potential which we have already observed from a plot of the
eigenfunctions shown in figure 3 is verified. Moreover, the depth
of the well $V'_{0}$ in figure 5 is negative for $r_{0} \leq
0.563$ fm. This implies the existence of a repulsive nucleon core
with a radius $r_{c} = 0.563$ fm, which is consistent with the
phenomenologically obtained value of $0.6$ fm.
Let us model the effective nucleon-nucleon interaction by a
potential of the form
\begin{equation}
\label{nn-eqn}
V(r) = -V_{0}\frac{e^{-r/r_{0}}}{r/r_{0}} +
V_{1}\frac{e^{-r/r_{1}}}{r/r_{1}},
\end{equation}
where $r_{0} = 0.3596$ fm corresponding to $\sigma$ meson exchange
(attraction) and $r_{1} = 0.2529$ fm corresponding to $\omega$
meson exchange (repulsion). This potential describes the main
qualitative features of the nucleon-nucleon interaction: a short
range repulsion between baryons coming from $\omega$ exchange and
an intermediate range attraction coming from $\sigma$
exchange\cite{walecka}. The repulsive component of the effective
nucleon-nucleon interaction must be held accountable for the drop
in the well depth from $V_{0}$ to $V'_{0}$, which is observed at
$r_{0} = 0.3596$ fm. Since the $\omega$ exchange occurs at a
range of $r_{1} = 0.2529$ fm we require that
\begin{equation}
V(r = r_{1}) = -V'_{0}\,\frac{e^{-r_{1}/r_{0}}}{r_{1}/r_{0}}.
\end{equation}
The quantities $V_{0} = 660.77$ MeV and $V'_{0} = -81.0$ MeV can
be computed numerically or can be read from figures 2 and 5. A
simple calculation yields the strength of the repulsive potential
as $V_{1} = 1419.07$ MeV. Figure 6 shows a plot of the effective
nucleon-nucleon interaction. The potential is attractive at large
distances and repulsive for small $r$. In terms of the coupling
constants we can rewrite the effective nucleon-nucleon interaction
as
\begin{equation}
V(r) = \frac{-g_{\sigma}^{2}}{4\pi}\,\frac{e^{-r/r_{0}}}{r} +
\frac{g_{\omega}^{2}}{4\pi}\,\frac{e^{-r/r_{1}}}{r}.
\end{equation}
Comparison with Eq.~(\ref{nn-eqn}) yields ${g_{\sigma}^{2}}/{4\pi}
= 1.20$ and ${g_{\omega}^{2}}/{4\pi} = 1.815$. Note that we are
working in units with $\hbar = c = 1$. These theoretically
obtained values of the coupling constants will differ from the
phenomenological coupling constants because in our simple Yukawa
model of the effective nucleon-nucleon interaction we have
neglected important tensor interactions and spin-orbit terms which
contribute to the form of the potential\cite{weise}. However, the
ratio of the theoretical coupling constants
${g_{\omega}^{2}}/{g_{\sigma}^{2}} = 1.512$ which compares the
relative strength of the repulsive coupling and the attractive
coupling must be equal to the ratio of the phenomenologically
determined coupling constants
${g_{\omega_{p}}^{2}}/{g_{\sigma_{p}}^{2}}$ in order for our
simple Yukawa model to successfully describe the effective
nucleon-nucleon interaction and to ensure the stability of the
deuteron. Using the value ${g_{\sigma_{p}}^{2}}/{4\pi} = 7.303$
and multiplying by the ratio 1.512 we obtain the value of the
phenomenological coupling constant of the $\omega$ meson as
${g_{\omega_{p}}^{2}}/{4\pi} = 11.03$. This value of the coupling
constant differs by $1.85$ percent from the value obtained from
fitting the nucleon-nucleon scattering phase shifts and deuteron
properties which is equal to $10.83$. Therefore, the quantum
mechanics of extended objects leads us to values of the $\omega$
meson coupling constant and of the repulsive core radius which are
consistent with the phenomenologically obtained values.
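The chain of numbers above (the matching condition for $V_{1}$, the two coupling constants, their ratio, and the inferred phenomenological $\omega$ coupling) can be checked with a short script; the only assumption is $\hbar c = 197.327$ MeV$\,$fm for converting MeV$\cdot$fm into the dimensionless couplings.

```python
import math

# Reproducing the numbers quoted in the text, assuming hbar*c = 197.327 MeV*fm
# (consistent with the hbar = c = 1 convention used above).
V0, V0_prime = 660.77, -81.0   # well depths in MeV, read from figures 2 and 5
r0, r1 = 0.3596, 0.2529        # sigma and omega ranges in fm
HBARC = 197.327                # MeV*fm

# Matching condition V(r = r1) = -V'_0 exp(-r1/r0)/(r1/r0), solved for V1:
x = r1 / r0
V1 = math.e * (V0 - V0_prime) * math.exp(-x) / x   # ~1419 MeV

g_sigma2 = V0 * r0 / HBARC     # g_sigma^2 / 4*pi ~ 1.20
g_omega2 = V1 * r1 / HBARC     # g_omega^2 / 4*pi ~ 1.815
ratio = g_omega2 / g_sigma2    # ~1.51
g_omega_phen = 7.303 * ratio   # phenomenological omega coupling ~ 11.03
```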
\section{Conclusion}
In this paper we have developed the Hilbert space representation
theory of the quantum mechanics of extended objects and applied it
to the fuzzy harmonic oscillator and the Yukawa potential. The
results of the fuzzy harmonic oscillator are consistent with our
classical intuition and in the case of the Yukawa potential we
obtain accurate theoretical predictions of the hitherto
phenomenologically obtained nucleon core radius and the $\omega$
meson coupling constant. In an age of increasing miniaturization,
it is conceivable that as the confines of various quantum systems
become comparable to the finite extent of the confined particles,
the quantum mechanics of extended objects will play an important
role in determining the dynamics. Furthermore, the infinite
dimensional generalization of the quantum mechanics of extended
objects, namely, the quantum field theory of extended objects
needs to be understood. Since the ubiquitous and troublesome
vertex in quantum field theory is effectively smeared out in such
a treatment, it is possible that the problem of nonrenormalizable
quantum field theories can be rendered tractable. The author is
pursuing investigations in this direction.
\vspace{1in}
\centerline{Acknowledgements}
I would like to thank E.C.G.~Sudarshan and L.~Sadun for insightful
discussions. I would also like to thank R. Zgadzaj for helping
me with the numerical calculations in Mathematica.
\section*{Nomenclature}
The main notation used throughout the text is stated below for quick reference. Other symbols are defined as required.
\subsection*{Sets and Indices}
\begin{ldescription}{$xxxx$}
\item [$\mathcal{B}$] Set of energy blocks, indexed by $b$.
\item [$\mathcal{B}^{c/d}$] Set of energy blocks associated with the charging/discharging power, indexed by $b$.
\item [$\mathcal{T}$] Set of time periods, indexed by $t$ and $\tau$.
\item [$\Omega^{X}$] Set of time periods belonging to the set $X = \{tr, v, test\}$ where $tr$, $v$, $test$ refer to the training, validation, and test set, in that order.
\end{ldescription}
\subsection*{Parameters}
\begin{ldescription}{$xxxxxxx$}
\item [$\underline{E}_{b,t}, \overline{E}_{b,t}$] Width for the aggregate discharging/charging power block $b$ in time period $t$ [kW].
\item [$H$] Feasibility penalty parameter.
\item [$K_{t,\tau}$] Value of the kernel on two feature vectors at time periods $t$ and $\tau$.
\item [$M$] Regularization hyper-parameter.
\item [$N_B$] Number of energy blocks.
\item [$\lambda_{t}$] Electricity price in time period $t$ [\euro/kWh].
\item [$\boldsymbol{z}_{t}$] Vector of regressors in period $t$.
\item [$\gamma$] Hyper-parameter related to the Gaussian kernel.
\end{ldescription}
\subsection*{Decision Variables}
\begin{ldescription}{$xxxxx$}
\item [$m_{b,t}$] Marginal utility of block $b$ of the aggregate power in time period $t$ [\euro/kWh].
\item [$p_{b,t}$] Power in block $b$ and time period $t$ [kW].
\item [$\underline{P}_t, \overline{P}_t$] Lower and upper bound for the aggregate power in time period $t$ [kW].
\item [$\underline{\alpha}_{t}, \overline{\alpha}_{t}$] Coefficient relative to the kernel regression of the lower/upper power bounds in period $t \in \Omega^{tr}$ [kW].
\item [$\epsilon_t$] Duality gap in time period $t \in \Omega^{tr}$ [\euro].
\item [$\underline{\mu}, \overline{\mu}$] Intercept for the lower/upper power bounds [kW].
\item [$\nu_b$] Intercept for the marginal utility of block $b$ [\euro/kWh].
\item [$\underline{\xi}^{+}_t$, $\underline{\xi}^{-}_t$] Slack variables associated with the lower power bound in time period $t$ [kW].
\item [$\overline{\xi}^{+}_t$, $\overline{\xi}^{-}_t$] Slack variables associated with the upper power bound in time period $t$ [kW].
\item [$\rho_{t}$] Coefficient relative to the kernel regression of the marginal utility in time period $t \in \Omega^{tr}$ [\euro/kWh].
\end{ldescription}
\section{Introduction}
According to the White Paper on transport of the \cite{COM2011_whitepaper}, one of the main goals to achieve a sustainable transport system is to \textit{halve the use of 'conventionally fuelled' cars in urban transport by 2030; phase them out in cities by 2050; achieve essentially CO$_2$-free city logistics in major urban centres by 2030}. This will spur the use of electric vehicles (EVs) across Europe \citep{COM2011_whitepaper}. Although nowadays the penetration of EVs in the European market is slow albeit steady, the estimated electricity demand from all EVs worldwide was 54 TWh in 2017 \citep{bunsen2018global}. Thus, the growing electrification of the road transport will impact the power system operation and planning of the future and, as a consequence, new actors and facilities will come into play, e.g. aggregator agents \citep{bandpey2018two}, or battery swap stations \citep{yang2015battery}.
Within the context of the restructured power industry, the aggregator agents face several challenges: (i) the forecast of the charging power of the fleet of EVs in the short-term, and (ii) the determination of a bid curve to participate in the electricity market to maximize their profits when the fleet of EVs is large enough. \textcolor{black}{Nowadays, the EVs may be prepared with bi-directional vehicle-to-grid (V2G) capabilities, which means that the EVs can extract power from and inject power into the electrical grid while parked \citep{kempton2005vehicle}. This is possible as long as the EVs are equipped with the necessary smart metering-and-control infrastructure as well as a suitable connection to the electrical grid. In this case,} the aggregator will also need to forecast the EV-fleet discharging power.
Short-term load forecasting is widely applied in the power sector to predict the electricity demand (and price) for different granularity levels \citep{shahidehpour2003market}. In recent years, EV charging load forecasting tools have been proposed in the technical literature by means of ARIMA-based models \citep{amini2016, korolko2015}; machine-learning techniques \citep{majidpour2016,sun2016,xydas2013}, such as support vector regression; or big data technologies \citep{arias2016}. All these papers neglected the bi-directional V2G capabilities of the EVs. Moreover, the above methodologies aimed to provide a single-purpose application, i.e., the forecasting of the charging power of either an EV or a fleet of EVs. Instead, we propose here a multi-purpose application for the aggregator of EVs in order to not only forecast the EV-fleet power, but also to derive a bid/offer curve according to the rules of the electricity market, e.g. see \cite{omie}.
In this paper, we apply inverse optimization (IO) to forecast the EV-fleet power while deriving a bid/offer curve. The goal of an IO problem is to infer the optimization model parameters given a set of observed decision variables or measurements collected by an observer. For instance, \cite{zhang2010inverse} applied IO for linearly-constrained convex problems in the industrial and managerial areas but its application was limited to single observed decisions. \cite{aswani2018} proposed a statistically consistent methodology for IO when the measurements of the optimal decisions of a convex optimization problem are noisy. In a more general context, when the observer has imperfect information, \cite{esfahani2018data} devised a distributionally robust inverse optimization problem. IO has also been applied for equilibrium problems \citep{bertsimas2015data}, multiobjective convex optimization \citep{roland2016finding}, or robust optimization \citep{chan2019inverse}. However, few papers have implemented IO in the field of power systems \citep{saez2016data, Saez-Gallego2018,lu2018data, ruiz2013,zhou2010}.
\cite{zhou2010} applied IO in the context of generation expansion planning to find an effective incentive policy; \cite{ruiz2013} estimated rival marginal offer prices for a strategic producer in a network-constrained day-ahead market by using IO; \cite{saez2016data} prescribed an IO approach by using bi-level programming to infer the market bid parameters of a pool of price-responsive consumers; in \cite{Saez-Gallego2018}, a novel IO approach was devised to statistically estimate the aggregate load of a pool of price-responsive buildings in the short-term; and, finally, \cite{lu2018data} applied IO to estimate the demand response characteristics of price-responsive consumers, as similarly done in \cite{saez2016data}. Unlike existing works \citep{saez2016data, Saez-Gallego2018,lu2018data, ruiz2013,zhou2010}, we address the EV-fleet power forecasting with an IO approach in which the prediction tool accounts for two distinctive features: (i) the pool of EVs may be equipped with V2G capabilities, and (ii) there may exist a strong nonlinear relationship between the EV-fleet power and the explanatory variables, namely past EVs' charging/discharging patterns and past electricity prices. To capture these nonlinear relations, we endogenously introduce kernels into the proposed IO approach.
Kernels are widespread in the literature on machine learning, as can be seen in \cite{hofmann2008kernel,trevor2009elements,benitez2019cost}, just to name a few; and, in power systems, they were mainly used to predict electricity prices \citep{dudek2018probabilistic,kekatos2013day,kekatos2014electricity}. \cite{kekatos2013day} applied a kernel regression to forecast the electricity prices from the Midwest Independent System Operator day-ahead market in which the kernel itself is constructed by the product of three kernels: one for vectorial data and other two to account for non-vectorial data such as time and nodal information. This approach was generalized to low-rank kernel-based learning models in \cite{kekatos2014electricity}. Finally, \cite{dudek2018probabilistic} devised a probabilistic forecast method built on the Nadaraya-Watson estimator to predict the electricity prices from the Polish balancing and day-ahead markets.
The contributions of this paper are threefold:
\begin{itemize}
\item From a modeling perspective, we provide an IO framework to forecast the aggregate power of a fleet of EVs with V2G capabilities. In addition, the outcome of this framework may be used to bid/offer in the electricity market by using the estimated price-quantity tuples. To the best of the authors' knowledge, this is the first time in the technical literature that IO has been used to forecast the aggregate power of a price-responsive EVs' aggregator and to derive a suitable bid/offer curve for such an aggregator.
\item \textcolor{black}{We approximate the solution of the generalized IO problem by using a data-driven two-step estimation procedure. This procedure requires solving two different convex programming problems, which makes the process of building the forecasting model computationally affordable}. \textcolor{black}{A}s a salient feature of this work, a kernel is endogenously incorporated into the regression functions.
\item We thoroughly analyze the performance of the proposed methodology by using real-life data based on the latest National Household Travel Survey \citep{NHTS} and we compare the results against those provided by two machine-learning techniques, namely support vector regression and kernel-ridge regression. The former has been reported to exhibit the best forecasting performance for the present application in the technical literature \citep{xydas2013,sun2019optimal}.
\end{itemize}
The rest of the document is organized as follows: Section \ref{sec:methodology} provides the IO methodology; Section \ref{sec:benchmark} gives a general overview on the comparison methodologies; in Section \ref{sec:case}, we analyze a case study for a residential aggregator of EVs; conclusions are duly drawn in Section \ref{sec:conclusion}; and, finally, \ref{sec:simulator} presents a mixed-integer linear programming problem to generate synthetic data on the behavior of an EV fleet.
\section{Inverse Optimization Methodology}
\label{sec:methodology}
To put the problem in context, we aim to forecast or learn the EV-fleet power $p_t$ (also known as aggregate power) in time period $t$ of \textcolor{black}{a price-responsive} aggregator, who is also interested in deriving a bid/offer curve to be submitted to the electricity market. The participants of the electricity market, namely consumers and producers, must submit a bid/offer curve consisting of blocks of energy and price. For the consumers, the bid curve should be monotonically non-increasing, whereas, for the producers, the offer curve should be monotonically non-decreasing, e.g. see \cite{omie}. \textcolor{black}{We assume a rational aggregator, which means that the market strategy of the EV fleet fundamentally relies on \emph{arbitrage}, by behaving as a consumer when the electricity price is low and, on the contrary, by acting as a producer when the price is high.}
In order to predict the EV-fleet aggregate power and to derive a bid/offer curve, the aggregator may use past observed data, which are denoted as explanatory variables, features or regressors. \textcolor{black}{As one should expect for a price-responsive aggregator, t}he regressors in time period $t$ can be the lagged electricity price $\lambda^{\prime}_{t-l}$ or aggregate power $p^{\prime}_{t-l}$, $\forall l= 1, 2, ...$. In addition, past EV driving patterns, meteorological data, or categorical data (e.g., time information) can also be used for forecasting purposes.
Within this context, we first introduce the proposed forecasting\footnote{This problem is also known as forward or reconstruction problem in the IO jargon.} model in Section \ref{sec:forecasting_model}. Subsequently, Section \ref{sec:kernels} explains how we can account for past information. Finally, Section \ref{sec:estimation} thoroughly describes the two-step procedure to estimate the required parameters of the forecasting model.
\subsection{Forward Model}
\label{sec:forecasting_model}
The key idea of this work is to forecast the EV-fleet power by using a simple optimization (linear programming) model which may, to some extent, \textit{mimic} its real behavior. In addition, unlike other forecasting techniques, this model is able to derive a bid/offer curve, as required by the rules of electricity markets. Therefore, the formulation of the forward model that, we assume, represents the aggregate response of an EV fleet to the electricity prices at time period $t$, is mathematically expressed as:
\begin{subequations}
\label{ev_agg}
\begin{align}
&\max_{p_{b,t}} \quad \sum_{b \in \mathcal{B}} p_{b,t} \left(m_{b,t} - \lambda_t \right) \label{fo_fwp} \\
& \text{subject to:} \notag\\
& \underline{P}_t \leq \sum_{b \in \mathcal{B}} p_{b,t} \leq \overline{P}_t : (\underline{\beta}_t, \overline{\beta}_t) \label{const1_fwp} \\
& 0 \leq p_{b,t} \leq \overline{E}_{b,t} : (\underline{\phi}^c_{b,t}, \overline{\phi}^c_{b,t}), \quad \forall b \in \mathcal{B}^c \label{const2_fwp} \\
& \underline{E}_{b,t} \leq p_{b,t} \leq 0 : (\underline{\phi}^d_{b,t}, \overline{\phi}^d_{b,t}), \quad \forall b \in \mathcal{B}^d, \label{const3_fwp}
\end{align}
\end{subequations}
\noindent where dual variables are represented in parentheses after a colon in the respective constraints. For the sake of unit consistency, hourly time periods are considered.
The reconstruction problem \eqref{ev_agg} aims to maximize the welfare of the EV aggregator, as given by the objective function \eqref{fo_fwp}. This objective function is made up of the EV fleet's surplus, which is related to the aggregate charging \textcolor{black}{and discharging} power. \textcolor{black}{The aggregate power is positive when the EVs' aggregator is charging, i.e., it behaves as a consumer. Otherwise, the aggregate power takes on negative values when the aggregator is discharging, i.e., it acts as a producer}. We assume step-wise offer/bid price functions as depicted in Fig. \ref{fig:bid}, as is customary in real-world electricity markets, e.g. see \cite{omie}. Constraints \eqref{const1_fwp} represent the lower and upper bounds on the aggregate power. Constraints \eqref{const2_fwp} impose the lower and upper bound on each block $b$ within the set $\mathcal{B}^c$ of charging power blocks. Since the charging power is assumed to be non-negative, then $p_{b,t}$ is bounded between $0$ and a positive power bound $\overline{E}_{b,t}$. Likewise, constraints \eqref{const3_fwp} impose the lower and upper bound on each block $b$ within the set $\mathcal{B}^d$ of the discharging power blocks. We assume that the discharging power is non-positive and thus $p_{b,t}$ is bounded between a negative power bound $\underline{E}_{b,t}$ and $0$. Note that the total power $p_t = \sum_{b} p_{b,t}$.
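Once the parameters are fixed, the forward problem \eqref{ev_agg} is a small linear program. The following sketch solves it with an off-the-shelf LP solver; all block widths, marginal utilities, and prices are hypothetical numbers (in the paper they are estimated from data), with three charging and three discharging blocks as in Fig.~\ref{fig:bid}.

```python
import numpy as np
from scipy.optimize import linprog

# Forward problem (1): maximize sum_b p_b*(m_b - lambda) over the block powers,
# subject to per-block bounds (2)-(3) and the aggregate bound (1b).
def forward_response(price, m_charge, E_charge, m_discharge, E_discharge,
                     P_min, P_max):
    m = np.concatenate([m_charge, m_discharge])
    c = price - m                                 # linprog minimizes, so negate
    bounds = [(0.0, e) for e in E_charge] + [(e, 0.0) for e in E_discharge]
    n = len(m)
    A_ub = np.vstack([np.ones(n), -np.ones(n)])   # aggregate power limits
    b_ub = np.array([P_max, -P_min])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x

# Low price: only the highest-utility charging block is worth filling.
p = forward_response(price=35.0,
                     m_charge=np.array([40.0, 30.0, 20.0]),
                     E_charge=np.array([45.0, 20.0, 20.0]),
                     m_discharge=np.array([50.0, 60.0, 70.0]),
                     E_discharge=np.array([-60.0, -20.0, -20.0]),
                     P_min=-100.0, P_max=100.0)
```

With these numbers the fleet charges $45$ kW at a price of $35$; rerunning with `price=55.0` flips the solution to discharging the first discharge block, so the aggregate power becomes $-60$ kW, illustrating the arbitrage behavior described above.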
\begin{figure}[h] \centering
\begin{tikzpicture}[scale=0.5]
\begin{axis}[
width=1.2\textwidth,
height=8cm,
xmin = -120,
xmax = 100,
ymin = 0,
ymax = 80,
clip marker paths=true,
xlabel = Power $p$ (kW),
ylabel = Price (\euro/kWh),
ytick = {20, 30, 40, 50, 60, 70},
yticklabels={$m_3$, $m_2$, $m_1$, $m_{-1}$, $m_{-2}$, $m_{-3}$},
label style={font=\Large},
tick label style={font=\Large} ]
\addplot[line width=1pt,draw=black] table [x=power, y=price, col sep=comma] {bid_example.csv};
\addplot[black] coordinates {(0,0)(0,100)};
\addplot[dashed, black] coordinates {(-100,0)(-100,70)};
\addplot[dashed, black] coordinates {(-80,0)(-80,60)};
\addplot[dashed, black] coordinates {(-120,70)(-100,70)};
\node[fill=white, font=\Large] at (30, 10) {$\underline{E}_{-3}$};
\draw[<-,-triangle 60] (axis cs:-80, 5) -- (axis cs:-100, 5);
\addplot[dashed, black] coordinates {(-60,0)(-60,50)};
\addplot[dashed, black] coordinates {(-120,60)(-80,60)};
\node[fill=white, font=\Large] at (50, 10) {$\underline{E}_{-2}$};
\draw[<-,-triangle 60] (axis cs:-60, 5) -- (axis cs:-80, 5);
\addplot[dashed, black] coordinates {(-120,50)(-60,50)};
\node[fill=white, font=\Large] at (95, 10) {$\underline{E}_{-1}$};
\draw[<-,-triangle 60] (axis cs: 0, 5) -- (axis cs:-60, 5);
\addplot[dashed, black] coordinates {(45,0)(45,30)};
\addplot[dashed, black] coordinates {(-120,40)(0,40)};
\node[fill=white, font=\Large] at (140, 10) {$\overline{E}_{1}$};
\draw[->,-triangle 60] (axis cs: 0, 5) -- (axis cs:45, 5);
\addplot[dashed, black] coordinates {(65,0)(65,20)};
\addplot[dashed, black] coordinates {(-120,30)(45,30)};
\node[fill=white, font=\Large] at (175, 10) {$\overline{E}_{2}$};
\draw[->,-triangle 60] (axis cs: 45, 5) -- (axis cs:65, 5);
\addplot[dashed, black] coordinates {(85,0)(85,20)};
\addplot[dashed, black] coordinates {(-120,20)(65,20)};
\node[fill=white, font=\Large] at (195, 10) {$\overline{E}_{3}$};
\draw[->,-triangle 60] (axis cs: 65, 5) -- (axis cs:85, 5);
\end{axis}
\end{tikzpicture} \\
\vspace{-0.2cm}
\caption{Three-block stepwise offer (bid) price function of the EVs' aggregator. In this example, the offer (bid) price function is represented to the left (right) of the y-axis, and the sets $\mathcal{B}^d = \{-3, -2, -1\}$ and $\mathcal{B}^c = \{1, 2, 3\}$. } \label{fig:bid}
\end{figure}
As previously stated, we want to {\color{black} anticipate} the EV-fleet power response by solving \eqref{ev_agg}. However, to this end, the set of parameters $\Phi = \{ \underline{E}_{b,t}, \overline{E}_{b,t}, m_{b,t}, \underline{P}_t, $ $ \overline{P}_t \}$ needs to be estimated since they are a priori unknown. \textcolor{black}{These parameters should be functions of time and of any regressor that the forecaster may consider meaningful and explanatory of the EV-fleet's operational behavior and, therefore, are to be inferred from past observations of the} aggregate power $p^{\prime}_t$, \textcolor{black}{the} electricity price $\lambda^{\prime}_t$, and \textcolor{black}{the regressors that are eventually considered}. This fact gives rise to a generalized IO problem, which is highly nonlinear and non-convex. This problem can be naturally formulated as a bilevel optimization problem, which may be computationally nonviable when moderately increasing the sample size. To deal with such complexity, we apply a methodology that builds on the one first proposed in \cite{Saez-Gallego2018}. In that paper, however, the regression function is linear in their features and may be limited to capture nonlinear relations between the EV-fleet power and the regressors. To circumvent such a caveat, and as one of the salient features of this work, we incorporate kernels into the regression functions. Furthermore, the forward model we propose, i.e. problem \eqref{ev_agg}, allows for power intakes and outputs, unlike the one used in \cite{Saez-Gallego2018}. This extra dose of model flexibility is critical to capture the behavior of an EV fleet with V2G capabilities \textcolor{black}{since the aggregator power may be positive when the net power comes from the grid (i.e. the aggregator acts as a consumer) or negative when the net power flows into the grid (i.e. the aggregator behaves as a producer). 
Therefore, the forecasting model is tailored to account for this dual operational mode by introducing differentiated marginal utility blocks both for charging and discharging.}
\subsection{Accounting for Past Information: Kernels}
\label{sec:kernels}
In the realm of machine learning, the kernel functions are rather popular in learning algorithms \citep{hofmann2008kernel} since they are able to capture nonlinear relationships between the dependent and the explanatory variables. Unlike in \cite{Saez-Gallego2018}, where affine functions were used to model the dependence of the parameters of the forward model \eqref{ev_agg} on the regressors, we propose the use of kernel regressions to estimate $\underline{P}_t$, $\overline{P}_t$, and $m_{b,t}$:
\begin{align}
& \underline{P}_t = \underline{\mu} + \sum_{\tau \in \Omega^{tr}} \underline{\alpha}_{\tau} K_{t,\tau}, \quad \forall t \in \mathcal{T} \label{pmin_kernel_regression} \\
& \overline{P}_t = \overline{\mu} + \sum_{\tau \in \Omega^{tr}} \overline{\alpha}_{\tau} K_{t,\tau}, \quad \forall t \in \mathcal{T} \label{pmax_kernel_regression}\\
& m_{b,t} = \nu_b + \sum_{\tau \in \Omega^{tr}} \rho_{\tau} K_{t,\tau}, \quad \forall t \in \mathcal{T}. \label{m_kernel_regression}
\end{align}
Many kernel functions can be used: polynomial, hyperbolic tangent, Gaussian, among others. For the sake of illustration, the Gaussian kernel \citep{trevor2009elements} can be defined as follows:
\begin{align}
& K_{t,\tau} = K \left( \boldsymbol{z}_t, \boldsymbol{z}_{\tau}\right) = e^{-\gamma \lVert \boldsymbol{z}_t - \boldsymbol{z}_{\tau} \rVert_2^2}, \quad \forall t \in \mathcal{T}, \tau \in \Omega^{tr}, \label{kernel_eq1}
\end{align}
\noindent wherein $\gamma$ is a scale parameter inversely proportional to the variance of the Gaussian function; and $\lVert \boldsymbol{z}_t - \boldsymbol{z}_{\tau} \rVert_2^2$ is the squared Euclidean distance between two feature vectors at time periods $t$ and $\tau$. Thus, the Gaussian kernel can be interpreted as a similarity measure between two time periods, i.e., if the two feature vectors are identical, $\boldsymbol{z}_t=\boldsymbol{z}_{\tau}$, then $K_{t,\tau} = 1$; otherwise, its value lies in the open interval $(0, 1)$.
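These properties are immediate to verify numerically; the feature values below are arbitrary stand-ins for, e.g., a lagged price and a lagged power.

```python
import numpy as np

# Gaussian kernel as a similarity measure between two feature vectors:
# K = 1 iff the vectors coincide; otherwise 0 < K < 1.
def gaussian_kernel(z_t, z_tau, gamma):
    d2 = np.sum((np.asarray(z_t, float) - np.asarray(z_tau, float)) ** 2)
    return np.exp(-gamma * d2)

z = np.array([42.0, 0.31])       # hypothetical regressors (lagged price, power)
z_far = np.array([80.0, -0.50])

k_same = gaussian_kernel(z, z, gamma=0.1)                  # identical -> K = 1
k_far = gaussian_kernel(z, z_far, gamma=0.1)               # distinct -> K in (0, 1)
k_far_small_gamma = gaussian_kernel(z, z_far, gamma=1e-6)  # flat kernel -> K near 1
```

The last line previews the role of $\gamma$ discussed next: for very small $\gamma$ even distant feature vectors look alike.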
\textcolor{black}{As previously mentioned, meaningful or explanatory features should be used in the kernel regression function for adequately inferring the estimates. In the proposed IO methodology, the power bounds $\underline{P}_t$ and $\overline{P}_t$ are key to capturing the price-responsiveness of the aggregator since they determine the width and the range of the step-wise price-response function of the EV fleet. For instance, if the electricity price is high, one should expect that the EV fleet will behave as a producer (in V2G mode) and therefore, the power bounds would take on negative values, this way producing a step-wise \emph{offer} curve displaced towards negative power values (i.e., discharge). On the contrary, if the price is low, one should expect the opposite: the EV fleet would act as a consumer, with the power bounds taking positive values and defining a step-wise \emph{bidding} curve displaced towards positive power values (i.e., charge). On the other hand, the marginal utilities aim to capture the price-sensitivity of the EVs aggregate power. Therefore, by making both the power bounds and the marginal utilities dependent on past prices along with past values of the aggregate EV-fleet power and/or other external factors, we can capture the changes in the EV-fleet power due to price variations over time.}
\textcolor{black}{\emph{Illustrative example.} L}et us assume that $\boldsymbol{z}_t$ comprises only one regressor, {\color{black} e.g.} the electricity price in the previous time period, i.e., $\boldsymbol{z}_t$ = $\lambda_{t-1}$. Thus, Fig. \ref{fig:kernel} provides the values of the kernel for each time period $t$ of a day with respect to the second time period $\tau = 2$, i.e., $K_{t, \tau=2}$, for different values of parameter $\gamma$. Moreover, the values of the regressor $\boldsymbol{z}_t$ for the 24 hours are shown in the figure. We can observe that high values of $\gamma$ lead to kernel values close to $1$ only when the two regressors are very close to each other (e.g., see time periods 21--23 for $\gamma=1$); conversely, low values of $\gamma$ lead to kernel values close to $1$ even when the regressors are very different from each other (e.g., see values for all time periods when $\gamma=0.001$). Therefore, we need to carefully tune the hyper-parameter $\gamma$, as described in Section \ref{tuning}.
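To see how coefficients such as $\overline{\alpha}_{\tau}$ in \eqref{pmax_kernel_regression} turn training periods into predictions, the following fits a ridge-penalized kernel regression in closed form. This is \emph{not} the paper's estimator (that is the two-step procedure of the next subsection, which also handles intercepts and feasibility slacks); it is only a sketch, with $M$ playing the role of the regularization hyper-parameter and hypothetical training data.

```python
import numpy as np

# Closed-form kernel ridge fit: alpha = (K + M*I)^{-1} y, prediction = k(z).alpha.
# Intercepts are omitted for brevity; data are hypothetical.
def gram_matrix(Z, gamma):
    d2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2)

def fit_kernel_ridge(Z_train, y_train, gamma, M):
    K = gram_matrix(Z_train, gamma)
    return np.linalg.solve(K + M * np.eye(len(y_train)), y_train)

def predict(alpha, Z_train, z_new, gamma):
    k = np.exp(-gamma * np.sum((Z_train - z_new) ** 2, axis=1))
    return k @ alpha

Z = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])   # one regressor, e.g. lagged price
y = np.array([10.0, 12.0, 20.0, 26.0, 25.0])        # observed upper bound, kW
alpha = fit_kernel_ridge(Z, y, gamma=0.5, M=1e-8)
y_hat = np.array([predict(alpha, Z, z, gamma=0.5) for z in Z])
```

For a vanishing penalty $M$ the fit interpolates the training targets; increasing $M$ trades training accuracy for smoothness, which is exactly why $M$ must be tuned on a validation set like $\gamma$.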
\begin{figure}[h] \centering
\begin{tikzpicture}[scale=0.5]
\begin{axis}[
width=1.2\textwidth,
height=8cm,
xmin = 0.5,
xmax = 25.5,
ymin = 0,
ymax = 110,
ylabel = Regressor $\boldsymbol{z}_t \text{=} \lambda_{t-1}$ (\euro/MWh),
xlabel = Time period $t$ (h),
xtick= {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24},
ytick={0, 20, 40, 60, 80, 100},
major grid style={line width=0,draw=white!50},
label style={font=\Large},
tick label style={font=\large},
ybar interval=0.6]
\addplot[area legend] table [x=t, y=regressor, col sep=comma] {kernel_plot_data_with_regressors.csv};\label{plot_one_k}
\end{axis}
\begin{axis}[
ylabel near ticks,
yticklabel pos=right,
axis x line=none,
width=1.2\textwidth,
height=8cm,
xmin = 0,
xmax = 25,
ymin = 0,
ymax = 1.1,
legend style={at={(0.5,-0.2)},anchor=north, legend columns=5, draw=none,font=\Large},
ylabel = Value of $K_{t,\tau=2}$,
xtick= {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24},
ytick={0, 0.2, 0.4, 0.6, 0.8, 1.0},
label style={font=\Large},
tick label style={font=\large}]
\addlegendimage{/pgfplots/refstyle=plot_one_k}\addlegendentry{Regressor}
\addplot[line width=1pt, mark=*, mark size=3, mark options={solid}, draw=black] table [x=t, y=1, col sep=comma] {kernel_plot_data_with_regressors.csv}; \label{plot_two_k} \addlegendentry{$\gamma=1$}
\addplot[line width=1pt, mark=o, mark size=3, mark options={solid}, draw=black] table [x=t, y=0.1, col sep=comma] {kernel_plot_data_with_regressors.csv};\label{plot_three_k} \addlegendentry{$\gamma=0.1$}
\addplot[line width=1pt, mark=square, mark size=3, mark options={solid}, draw=black] table [x=t, y=0.01, col sep=comma] {kernel_plot_data_with_regressors.csv};\label{plot_four_k} \addlegendentry{$\gamma=0.01$}
\addplot[dashed, line width=1pt, mark=triangle, mark size=3, mark options={solid}, draw=black] table [x=t, y=0.001, col sep=comma] {kernel_plot_data_with_regressors.csv};\label{plot_five_k} \addlegendentry{$\gamma=0.001$}
\end{axis}
\end{tikzpicture} \\
\vspace{-0.2cm}
\caption{Values of the Gaussian kernel for each time period $t$ of a day with respect to period $\tau = 2$, i.e., $K_{t, \tau=2}$, for different values of the parameter $\gamma$ on the right y-axis, and the corresponding regressor values on the left y-axis.} \label{fig:kernel}
\end{figure}
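The kernel values plotted in Fig. \ref{fig:kernel} can be reproduced with a few lines of code. The snippet below is a minimal Python sketch, assuming the standard Gaussian (RBF) form $K_{t,\tau} = \exp(-\gamma \lVert \boldsymbol{z}_t - \boldsymbol{z}_\tau \rVert^2)$; the price values are illustrative, not those of the figure.

```python
import math

def gaussian_kernel(z_t, z_tau, gamma):
    """Gaussian (RBF) kernel between two regressor vectors."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(z_t, z_tau))
    return math.exp(-gamma * sq_dist)

# One regressor per period: the previous-hour price (illustrative values)
prices = [30.0, 32.0, 55.0, 80.0]
tau = 1  # reference period
for gamma in (1.0, 0.1, 0.01, 0.001):
    row = [round(gaussian_kernel([p], [prices[tau]], gamma), 3) for p in prices]
    print(f"gamma={gamma}: {row}")
```

As in the figure, a large $\gamma$ drives $K_{t,\tau}$ towards zero unless the two prices nearly coincide, while a small $\gamma$ keeps all kernel values close to one.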
\subsection{Two-step Estimation Procedure}
\label{sec:estimation}
The thrust of this work is the estimation of the set of parameters $\Phi = \{ \underline{E}_{b,t}, \overline{E}_{b,t}, m_{b,t}, \underline{P}_t, \overline{P}_t \}$ and the corresponding coefficient estimates $\underline{\mu}, \underline{\alpha}_t, \overline{\mu}, \overline{\alpha}_t, \nu_b, \rho_t$ of the regression functions described in \eqref{pmin_kernel_regression}--\eqref{m_kernel_regression}. To do that, we could use bilevel optimization; however, as mentioned previously, it may lead to a prohibitive computational burden even for moderate sample sizes. Therefore, we resort to a two-step procedure based on two convex programming problems: (i) the \emph{feasibility problem}, which is devoted to estimating all parameters that determine the feasibility of the observed EV-fleet power values in the forward problem \eqref{ev_agg} (i.e., the power bounds), and (ii) the \emph{optimality problem}, which estimates the marginal utility of the EVs' aggregator, i.e., the parameters of problem \eqref{ev_agg} that are related to the optimality of the observed power values. The key idea of the \emph{feasibility problem} is to shape the power bounds $\underline{P}_{t}$ and $\overline{P}_{t}$ so that a certain percentage $H$ of the observed EV-fleet power values are feasible for the forward problem \eqref{ev_agg}. Note that the widths of the aggregate power blocks $\underline{E}_{b,t}$ and $\overline{E}_{b,t}$ can be easily computed from the estimated power bounds by assuming that the energy blocks are all of the same length. Conversely, the \emph{optimality problem} estimates the marginal utilities $m_{b,t}$, driven by the minimization of the duality gap of the forward problem once the power bounds are fixed. Its aim is thus to make the observed EV-fleet power values as optimal as possible for problem \eqref{ev_agg} (recall that we use \eqref{ev_agg} as the forward model). 
It should be noted that the pair $(m_{b,t},\overline{E}_{b,t})$ for all blocks constitutes the bid curve of the aggregator at time period $t$. Likewise, the pair $(m_{b,t},-\underline{E}_{b,t})$ for all blocks constitutes the offer curve of the aggregator at time period $t$. In practice, those curves may be submitted to the market operator, who is the entity responsible for the financial management of electricity markets, e.g., see \cite{omie}.
\subsubsection{Feasibility Problem}
\label{sec:feas_problem}
Given a fixed value of control parameter $H \in [0, 1)$, this problem can be formulated as:
\begin{subequations}
\label{feasibility_problem}
\begin{align}
&\min_{\Xi^{fp}} \sum_{t \in \Omega^{tr}} H \left( \overline{\xi}_t^{-} + \underline{\xi}_t^{-} \right) + \sum_{t \in \Omega^{tr}} \left( 1 - H \right) \left( \overline{\xi}_t^{+} + \underline{\xi}_t^{+} \right) \label{fo_fp}\\
& \text{subject to:} \notag\\
& \overline{P}_t - p^{\prime}_t = \overline{\xi}_t^{+} - \overline{\xi}_t^{-}, \quad \forall t \in \Omega^{tr} \label{const1_fp}\\
& p^{\prime}_t - \underline{P}_t = \underline{\xi}_t^{+} - \underline{\xi}_t^{-}, \quad \forall t \in \Omega^{tr} \label{const2_fp}\\
& \overline{P}_t \geq \underline{P}_t, \quad \forall t \in \Omega^{tr} \label{const3_fp}\\
& \text{Constraints \eqref{pmin_kernel_regression}--\eqref{pmax_kernel_regression}} \label{const4_fp}\\
& \overline{\xi}_t^{+}, \underline{\xi}_t^{+}, \overline{\xi}_t^{-}, \underline{\xi}_t^{-} \geq 0, \quad \forall t \in \Omega^{tr}, \label{const5_fp}
\end{align}
\end{subequations}
\noindent where the set of variables to be optimized is $\Xi^{fp} = \{ \underline{P}_t, \overline{P}_t,$ $ \overline{\xi}_t^{+}, \underline{\xi}_t^{+}, \overline{\xi}_t^{-}, \underline{\xi}_t^{-}, \underline{\mu},$ $\overline{\mu}, \underline{\alpha}_t, \overline{\alpha}_t\}$. Note that problem \eqref{feasibility_problem} is a convex program.
The objective function \eqref{fo_fp} minimizes the sum of feasibility and infeasibility slack variables associated with the power bounds. Constraints \eqref{const1_fp}--\eqref{const2_fp} are the power bound constraints with the feasibility and infeasibility slack variables, where $p^{\prime}_t$ is the observed EV-fleet power value at time period $t$. Constraints \eqref{const3_fp} ensure that the upper bound of the aggregate power is greater than or equal to its respective lower bound. Constraints \eqref{const4_fp} impose kernel regression functions for the power bounds, wherein the coefficients to be estimated are $\underline{\mu}$, $\overline{\mu}$, $\underline{\alpha}_t$, $\overline{\alpha}_t$. Finally, constraints \eqref{const5_fp} declare the variables $\overline{\xi}_t^{+}, \underline{\xi}_t^{+}, \overline{\xi}_t^{-}, \underline{\xi}_t^{-}$ as non-negative. Importantly, the higher the value of $H$, the wider the power bounds delivered by \eqref{feasibility_problem} and, therefore, the more price-responsive the EV fleet is expected to be.
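To see how the weights $H$ and $1-H$ shape the bounds, consider a stripped-down version of \eqref{feasibility_problem} with a single time-invariant upper bound and no kernel terms: minimizing $H\,\overline{\xi}^{-} + (1-H)\,\overline{\xi}^{+}$ over the observations is then exactly pinball (quantile) loss minimization, so the optimal bound is the $H$-quantile of the observed powers. A minimal Python sketch with hypothetical data:

```python
def pinball_loss(bound, obs, H):
    # H penalizes observations above the bound (infeasibility slack xi^-),
    # 1-H penalizes the gap below it (feasibility slack xi^+)
    return sum(H * max(p - bound, 0.0) + (1 - H) * max(bound - p, 0.0) for p in obs)

obs = [10.0, 12.0, 15.0, 20.0, 40.0]  # observed EV-fleet powers (hypothetical)
H = 0.8
# an optimum of this piecewise-linear loss always lies at an observed value
best = min(obs, key=lambda b: pinball_loss(b, obs, H))
print(best)  # 20.0: the 0.8-quantile covers 4 of the 5 observations
```

A higher $H$ pushes the bound upward, widening the feasible band and thus allowing a more price-responsive fit.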
The use of kernels increases the flexibility of the regression function as the size of the training set grows. However, it also increases the risk of over-fitting. To control this risk, a regularization parameter $M \in [0, 1]$ is used to weight the sum of the squared values of the coefficient estimates $\underline{\alpha}_{t}$ and $\overline{\alpha}_{t}$, similarly to what is typically done in kernel-ridge regression \citep{trevor2009elements}. Thus, the objective function \eqref{fo_fp} \textcolor{black}{should be replaced with}:
{\color{black}
\begin{align}
&\min_{\Xi^{fp}} M \sum_{t \in \Omega^{tr}} \left( \underline{\alpha}_{t}^2 + \overline{\alpha}_{t}^2 \right) \notag\\
& \hspace{2cm}+ \left( 1 - M \right) \bigl[\sum_{t \in \Omega^{tr}} H \left( \overline{\xi}_t^{-} + \underline{\xi}_t^{-} \right) + \sum_{t \in \Omega^{tr}} \left( 1 - H \right) \left( \overline{\xi}_t^{+} + \underline{\xi}_t^{+} \right)\bigr].
\end{align}
Both hyper-parameters $M$ and $H$ in the objective function and parameter $\gamma$ of the kernel regression function must be adequately adjusted to modulate the power bounds to the observed EV-fleet power values so that the out-of-sample forecasting error is minimized.}
\subsubsection{Optimality Problem}
\label{sec:opt_problem}
Once the power bounds (i.e., $\widehat{\underline{P}}_t$, $\widehat{\overline{P}}_t$) are estimated from \eqref{feasibility_problem}, we can compute the power block limits $\widehat{\overline{E}}_{b,t}$, $\forall b \in \mathcal{B}^c$, and $\widehat{\underline{E}}_{b,t}$, $\forall b \in \mathcal{B}^d$, based on the assignments described in Table \ref{tab:power_block_limits}. The optimality problem can then be derived by using results from the duality theory of linear programming, and it can be formulated as:
\begin{subequations}
\label{optimality_problem}
\begin{align}
&\min_{\Xi^{op}} \quad \sum_{t \in \Omega^{tr}} \epsilon_t \label{fo_op}\\
& \text{subject to:} \notag\\
& \widehat{\overline{P}}_t \overline{\beta}_t - \widehat{\underline{P}}_t \underline{\beta}_t + \sum_{b \in \mathcal{B}^c} \widehat{\overline{E}}_{b,t} \overline{\phi}^c_{b,t} - \sum_{b \in \mathcal{B}^d} \widehat{\underline{E}}_{b,t} \underline{\phi}^d_{b,t} - \epsilon_t = \notag\\
&\hspace{6.5cm}\sum_{b \in \mathcal{B}} p^{\prime}_{b,t} \left( m_{b,t} - \lambda_t\right), \forall t \in \Omega^{tr} \label{const1_op}\\
& - \underline{\phi}^c_{b,t} + \overline{\phi}^c_{b,t} - \underline{\beta}_t + \overline{\beta}_t = m_{b,t} - \lambda_t, \quad \forall b \in \mathcal{B}^c, t \in \Omega^{tr} \label{const2_op}\\
& - \underline{\phi}^d_{b,t} + \overline{\phi}^d_{b,t} - \underline{\beta}_t + \overline{\beta}_t = m_{b,t} - \lambda_t, \quad \forall b \in \mathcal{B}^d, t \in \Omega^{tr} \label{const3_op}\\
&\text{Constraints \eqref{m_kernel_regression}} \label{const4_op}\\
& \nu_b \geq \nu_{b+1}, \quad \forall b \in \mathcal{B} \setminus \{N_B\} \label{const5_op}\\
& \underline{\beta}_t, \overline{\beta}_t, \underline{\phi}^c_{b,t}, \overline{\phi}^c_{b,t}, \underline{\phi}^d_{b,t}, \overline{\phi}^d_{b,t} \geq 0, \quad \forall t \in \Omega^{tr}, \label{const6_op}
\end{align}
\end{subequations}
\noindent where the set of decision variables is $\Xi^{op} = \{ m_{b,t}, \epsilon_t,\underline{\beta}_t, \overline{\beta}_t, \underline{\phi}^c_{b,t}, \overline{\phi}^c_{b,t}, \underline{\phi}^d_{b,t}, \overline{\phi}^d_{b,t}, $ $ \nu_b, \rho_t \}$. Note that problem \eqref{optimality_problem} is a convex program.
\begin{table}[h!]
\caption{Value of $\widehat{\overline{E}}_{b,t}$, $\forall b \in \mathcal{B}^c$ and $\widehat{\underline{E}}_{b,t}$, $\forall b \in \mathcal{B}^d$}
\label{tab:power_block_limits}
\centering
\begin{tabular}{c@{\hspace{1\tabcolsep}}c@{\hspace{1\tabcolsep}}c@{\hspace{1\tabcolsep}}c@{\hspace{1\tabcolsep}}c}
\cline{3-5}
\\[-17pt]
\multicolumn{2}{c}{} & $\widehat{\overline{P}}_t \geq \widehat{\underline{P}}_t \geq 0$ & $\widehat{\underline{P}}_t \leq \widehat{\overline{P}}_t \leq 0$ & $\widehat{\overline{P}}_t \geq 0 \geq \widehat{\underline{P}}_t$ \\
\hline
\multirow{2}{*}{$\widehat{\overline{E}}_{b,t}$} & $b=1$ & $\widehat{\underline{P}}_t$ & $0$ & $\widehat{\overline{P}}_t/N_B$ \\
& $b \in \mathcal{B}^c \setminus \{1\}$ & $\frac{\left(\widehat{\overline{P}}_t - \widehat{\underline{P}}_t\right)}{N_B - 1}$ & $0$ & $\widehat{\overline{P}}_t/N_B$ \\
\hline
\multirow{2}{*}{$\widehat{\underline{E}}_{b,t}$} & $b=-1$ & $0$ & $\widehat{\overline{P}}_t$ & $\widehat{\underline{P}}_t/N_B$ \\
& $b \in \mathcal{B}^d \setminus \{-1\}$ & $0$ & $\frac{\left(\widehat{\underline{P}}_t - \widehat{\overline{P}}_t\right)}{N_B - 1}$ & $\widehat{\underline{P}}_t/N_B$ \\
\hline
\end{tabular}
\end{table}
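The assignments in Table \ref{tab:power_block_limits} can be written as a small helper. The function below is a sketch (names are illustrative) returning, for one time period, the charging-block upper limits $\widehat{\overline{E}}_{b,t}$ and discharging-block lower limits $\widehat{\underline{E}}_{b,t}$:

```python
def power_block_limits(P_lo, P_hi, NB):
    """Split estimated power bounds into NB blocks per the three sign cases."""
    assert P_hi >= P_lo
    if P_lo >= 0:                 # charging only: first block up to P_lo, rest split the band
        E_bar = [P_lo] + [(P_hi - P_lo) / (NB - 1)] * (NB - 1)
        E_und = [0.0] * NB
    elif P_hi <= 0:               # discharging only (mirror case)
        E_bar = [0.0] * NB
        E_und = [P_hi] + [(P_lo - P_hi) / (NB - 1)] * (NB - 1)
    else:                         # bounds straddle zero: equal blocks on each side
        E_bar = [P_hi / NB] * NB
        E_und = [P_lo / NB] * NB
    return E_bar, E_und
```

In every case the charging blocks sum to $\widehat{\overline{P}}_t$ and the discharging blocks to $\widehat{\underline{P}}_t$, so the block decomposition preserves the estimated bounds.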
The objective function \eqref{fo_op} minimizes the sum of the duality gaps of problem \eqref{ev_agg}.
Constraint \eqref{const1_op} is the relaxed equality constraint associated with the strong duality theorem. Constraints \eqref{const2_op}--\eqref{const3_op} and \eqref{const6_op} are the dual feasibility constraints. Constraints \eqref{const4_op} impose a kernel regression function, with $\nu_b$ and $\rho_t$ as the coefficients to be estimated, in order to relate the marginal utilities and the regressors. Finally, constraints \eqref{const5_op} set the marginal utilities to be monotonically non-increasing, as imposed by rules in electricity markets \citep{omie}.
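Once the marginal utilities are estimated, the forward model responds to a price by filling every charging block whose marginal utility exceeds it, and symmetrically for discharging blocks. The snippet below is a simplified sketch of that price response (hypothetical helper; it ignores the remaining coupling constraints of \eqref{ev_agg}):

```python
def price_response(m_charge, E_bar, m_discharge, E_und, price):
    """Net power: fill charging blocks with utility above the price,
    and discharging blocks (E_und entries are negative) with utility below it."""
    charge = sum(E for m, E in zip(m_charge, E_bar) if m > price)
    discharge = sum(E for m, E in zip(m_discharge, E_und) if m < price)
    return charge + discharge

# hypothetical decreasing bid curve: three charging blocks, no discharging
print(price_response([45.0, 44.0, 42.0], [10.0, 10.0, 10.0], [], [], 43.0))  # 20.0
```

This is why the pairs $(m_{b,t}, \overline{E}_{b,t})$ act as a bid curve: the estimated utilities, not the bounds alone, determine how much of the feasible band is used at a given price.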
\subsubsection{Statistical Computation of Hyper-Parameters}
\label{tuning}
The main goal of this work is to learn the EV-fleet power for each period $t \in \Omega^{test}$ with the forward model \eqref{ev_agg}, which relies on knowledge of a series of parameters, i.e., the power bounds and the marginal utilities. Those parameters are estimated with the models described in Sections \ref{sec:feas_problem} and \ref{sec:opt_problem}, whose outcomes depend on the values of three hyper-parameters: $H$, $M$, and $\gamma$. Their optimal values are computed by using a grid-search technique. For each candidate combination, we solve problems \eqref{feasibility_problem} and \eqref{optimality_problem} on the training set $\Omega^{tr}$, and then solve the forward problem \eqref{ev_agg} over the validation set $\Omega^{v}$ by using the estimated parameters $\Phi = \{ \underline{E}_{b,t}, \overline{E}_{b,t}, m_{b,t}, \underline{P}_t, \overline{P}_t \}$ as well as the electricity price at each time period $t \in \Omega^{v}$. We then select as optimal the hyper-parameter values that lead to the least out-of-sample forecasting error in $\Omega^v$.
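The grid search above can be sketched as follows; `fit_and_validate` is a hypothetical stand-in for solving \eqref{feasibility_problem} and \eqref{optimality_problem} on $\Omega^{tr}$ and scoring the forward model on $\Omega^{v}$:

```python
from itertools import product

def grid_search(H_grid, M_grid, gamma_grid, fit_and_validate):
    """Return the (H, M, gamma) triple with the least validation error."""
    return min(product(H_grid, M_grid, gamma_grid),
               key=lambda hp: fit_and_validate(*hp))

# toy validation-error surface (hypothetical), minimized at (0.9, 0.001, 0.1)
err = lambda H, M, g: (H - 0.9) ** 2 + (M - 0.001) ** 2 + (g - 0.1) ** 2
print(grid_search([0.6, 0.8, 0.9], [0.0005, 0.001], [0.1, 0.01], err))
```

Since each candidate triple is evaluated independently, the loop parallelizes trivially, which is what makes the hour-ahead use case mentioned later plausible.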
\vspace{-0.3cm}
\section{Comparison Methodologies}
\label{sec:benchmark}
We compare the performance of the proposed kernel-based IO approach, hereinafter referred to as \textit{kio}, against (i) the state-of-the-art model to forecast the EV-fleet power, namely kernelized support vector regression (\textit{svr}), (ii) a kernel-ridge regression model (\textit{krr}), (iii) an IO approach with linear kernels (\textit{lio}), and (iv) persistence or naive models. Note that we use a Gaussian kernel in the regression functions of the \textit{feasibility problem} and a linear kernel in the regression function of the \textit{optimality problem}, as this combination exhibited the best trade-off between forecasting performance and simplicity in our numerical experiments.
Regarding \textit{svr} and \textit{krr}, we respectively use the epsilon-\textit{svr} and kernel-ridge regression models implemented in the scikit-learn library \citep{scikit-learn} under the Python programming language. The interested reader is referred to \cite{smola2004tutorial} for a detailed description of \textit{svr}. For the sake of comparison, we also use the Gaussian kernel and tune the corresponding hyper-parameters via grid search. Specifically, we tune the constraint-violation cost $C$ and the kernel parameter $\gamma$ for \textit{svr}; and the penalty parameter $\delta$ and the $\gamma$ parameter for \textit{krr}.
Regarding the naive models, we use three different ones since the EV-fleet power may experience seasonal patterns: \textit{h-naive}, \textit{d-naive}, and \textit{w-naive}, in which the forecast value of the aggregate power at time $t$ is equal to the observed value at time $t-1$, $t-24$, and $t-168$, in that order. Note that the forecast error of the naive models provides insight into the difficulty of prediction.
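The three naive benchmarks reduce to a one-line persistence rule; a minimal sketch:

```python
def naive_forecast(series, lag):
    """Persistence forecast: lag=1 (h-naive), 24 (d-naive), 168 (w-naive)."""
    return [series[t - lag] for t in range(lag, len(series))]

print(naive_forecast([5, 7, 9, 11], 1))  # [5, 7, 9]
```

Each naive model simply replays the series shifted by its seasonal lag, so its error directly reflects how much hourly, daily, or weekly structure the EV-fleet power exhibits.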
The performance of the methods is compared with two metrics: the mean absolute error (MAE) and the root mean square error (RMSE) on the test set.
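Both metrics are standard; for completeness, over the test set they read:

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean square error: penalizes large errors more heavily than MAE."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true))

print(mae([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]), rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))
```

Reporting both is informative here: a model that occasionally misses a charging spike badly will show a larger gap between its RMSE and its MAE.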
\section{Case Study}
\label{sec:case}
We first describe the data used for the case study in Section \ref{sec:ev_fleet_data}. Subsequently, we comprehensively analyze the results from the proposed approach for three cases of charging behavior without enabling the V2G capabilities in Section \ref{sec:results_g2v}. Finally, Section \ref{sec:results_v2g} presents the results for two cases when the electric vehicles are integrated with V2G services.
\subsection{EV-fleet Data}
\label{sec:ev_fleet_data}
For learning purposes, we are only interested in the time series of electricity prices, the aggregate power of an EV fleet, and the total number of vehicles available to charge or discharge. However, to our knowledge, there is no real-life data publicly available about an EV aggregator. Thus, we resort to the formulation of an optimization problem to simulate the behavior of such an EV fleet. The interested reader is referred to \ref{sec:simulator} for a detailed description of this simulator.
We assume a residential aggregator with 100 EVs. For the sake of simplicity, the technical parameters associated with each EV are identical: the maximum charging rate is 7.4 kW, the round-trip efficiency is 0.95, the minimum and maximum energy levels are 10 and 51 kWh, in that order, and the energy rating per kilometer is 0.137 kWh/km \citep{Technical_ZOE}. Due to the lack of real-life data on the driving patterns (availability profiles and energy required for transportation) of EVs, we resort to the National Household Travel Survey \citep{NHTS}. From this database, we can extract the availability status by using the departure/arrival time periods of each daily trip. Specifically, we assume that the EV is available until it begins its first daily trip and after it returns from its last daily trip on each day of the year. Otherwise, the EV is unavailable and thus assumed to be in motion. The energy required for transportation $\chi_{v,t}$ can be computed as the product of the travelled distance and the energy rating per kilometer (i.e., 0.137 kWh/km).
The electricity prices are obtained from the ENTSO-e Transparency Platform \citep{ENTSOE} for the period from January 9$^{th}$ to February 19$^{th}$ in Spain. We also assume a load shedding cost $C^P = 1000$ $\textup{\euro}$/kWh. We run daily simulations with 15-min time steps to build a synthetic database for a pool of EVs.
The simulations have been performed on a Linux-based server with one CPU clocking at 2.6 GHz and 2 GB of RAM, using CPLEX 12.6.3 \citep{Cplex} under Pyomo 5.2 \citep{Pyomo}. The optimality gap is set to 0\%. \textcolor{black}{The input data files for reproducing the results have been shared with the scientific community at \url{https://github.com/groupoasys/Aggregated-EV-data}}.
\subsection{Forecast Results without Enabling V2G Capabilities}
\label{sec:results_g2v}
We assume that EVs do not enable their V2G capabilities (i.e., $B^d_v = 0$ in the model \eqref{sim_of}--\eqref{sim_eq8} in \ref{sec:simulator}) and we compare the results for three cases: (i) a case in which the EVs satisfy their energy needs by using a \textit{naive} charging; (ii) a case in which the charging is highly synchronized, which occurs when $C^S$ is set to $0$ in \eqref{sim_of}--\eqref{sim_eq8}; and (iii) a case in which the charging synchronization is avoided, which we attain by setting $C^S = 520$ \euro/MWh$^2$. Those cases are respectively denoted as \textit{naive-ch}, \textit{sync}, and \textit{non-sync}. Note that, in the former case, i.e., \textit{naive-ch}, each EV is charged to its required maximum energy as soon as it is available, thus neglecting the dependence of the charging power on the price; whereas the latter cases, \textit{sync} and \textit{non-sync}, are driven by the cost minimization of the EVs' aggregator, wherein the electricity prices are accounted for. As an example, Fig. \ref{fig:power_sync} shows the EV-fleet charging power of a certain day for the three cases along with the electricity prices. As can be seen, the choice of $C^S \neq 0$ is a simple albeit convenient way to avoid the undesirable charging synchronization by smoothing the aggregate power. In addition, we can observe that the charging pattern of the \textit{naive-ch} case is independent of the prices.
\begin{figure}[h] \centering
\begin{tikzpicture}[scale=0.5]
\begin{axis}[
width=1.2\textwidth,
height=8cm,
xmin = 0.5,
xmax = 24.5,
ymin = 0,
ymax = 350,
ylabel = Charging power (kW),
xlabel = Time period (h),
xtick= {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24},
ytick={0, 70, 140, 210, 280, 350},
major grid style={line width=0,draw=white!50},
label style={font=\Large},
tick label style={font=\large}]
\addplot[dashed, line width=1pt, mark=triangle, mark size=3, mark options={solid}, draw=black] table [x=t, y=g2v_a0, col sep=comma] {power_day1_tr_g2v.csv};\label{plot_one}
\addplot[line width=1pt, mark=o, mark size=3, mark options={solid}, draw=black] table [x=t, y=g2v_anot0, col sep=comma] {power_day1_tr_g2v.csv};\label{plot_two}
\addplot[dotted, line width=1pt, mark=square, mark size=3, mark options={solid}, draw=black] table [x=t, y=g2v_naive, col sep=comma] {power_day1_tr_g2v.csv};\label{plot_three}
\end{axis}
\begin{axis}[
ylabel near ticks,
yticklabel pos=right,
axis x line=none,
width=1.2\textwidth,
height=8cm,
xmin = 0.5,
xmax = 24.5,
ymin = 0,
ymax = 100,
legend style={at={(0.5,-0.2)},anchor=north, legend columns=5, draw=none,font=\Large},
ylabel = Price (\euro/MWh),
xtick= {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24},
ytick={0, 20, 40, 60, 80, 100},
label style={font=\Large},
tick label style={font=\large}]
\addlegendimage{/pgfplots/refstyle=plot_three}\addlegendentry{Case \textit{naive-ch}}
\addlegendimage{/pgfplots/refstyle=plot_one}\addlegendentry{Case \textit{sync}}
\addlegendimage{/pgfplots/refstyle=plot_two}\addlegendentry{Case \textit{non-sync}}
\addplot[dash pattern=on 1pt off 3pt on 3pt off 3pt, line width=1pt, mark=x, mark size=3, mark options={solid}, draw=black] table [x=t, y=price, col sep=comma] {power_day1_tr_g2v.csv};\label{plot_four} \addlegendentry{Price}
\end{axis}
\end{tikzpicture} \\
\vspace{-0.3cm}
\caption{Charging power for cases \textit{naive-ch}, \textit{sync}, and \textit{non-sync} on the left y-axis and the corresponding electricity prices on the right y-axis.} \label{fig:power_sync}
\end{figure}
\begin{figure}[h]
\centerline{\includegraphics[scale=0.52]{figure4.pdf}}
\vspace{-12pt}
\caption{ Power versus price for cases (a) \textit{naive-ch}, (b) \textit{sync}, and (c) \textit{non-sync}.}
\label{fig:power_price_g2v}
\end{figure}
The sizes of the training, validation, and test sets are 672 h, 168 h, and 168 h, in that order. Fig. \ref{fig:power_price_g2v} represents the hourly electricity price versus the corresponding charging power for all periods of $\Omega^{tr}$ for the cases mentioned above. As can be seen, the aggregate power of the \textit{non-sync} case depends linearly on the price, unlike the \textit{naive-ch} and \textit{sync} cases. For the \textit{naive-ch} case, we consider 17 regressors, namely the charging power and the total number of EVs available for the six periods previous to time $t$, i.e., $p_{t-l}$ and $\sum_{v} \varsigma_{v,t-l}$, $\forall l=1,\ldots,6$, and 5 binary-valued categorical variables to indicate the hour of the day. For the cases \textit{sync} and \textit{non-sync}, we consider 12 regressors, namely the electricity price and the charging power for the six periods previous to time $t$, i.e., $\lambda_{t-l}$ and $p_{t-l}$, $\forall l=1,\ldots,6$. We also assume six energy blocks in total. Finally, hyper-parameter $H$ ranges in the interval $[0.5, 1.0)$ with 0.01 steps, $M$ ranges in the interval $[0.0001, 0.0024]$ with 0.0001 steps, and $\gamma \in \{0.1, 0.01\}$. For the case \textit{sync}, the proposed approach \textit{kio} takes on average 12.6 s, 2.6 s, and 31.3 s to run each \textit{feasibility} problem, each \textit{optimality} problem, and all the forward problems for $\Omega^{v}$, in that order. The computing times are of the same order of magnitude for the other cases. It should be noted that those computing times would even be suitable for hour-ahead forecasting if the grid-search technique were parallelized.
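Building the lagged regressor vectors for the price-driven cases can be sketched as follows (hypothetical helper; the binary hour-of-day encoding of the \textit{naive-ch} case is omitted):

```python
def lagged_regressors(prices, powers, n_lags=6):
    """z_t = (lambda_{t-1..t-6}, p_{t-1..t-6}) for each t with a full history."""
    return [[prices[t - l] for l in range(1, n_lags + 1)]
            + [powers[t - l] for l in range(1, n_lags + 1)]
            for t in range(n_lags, len(prices))]

# toy series: 10 hourly prices and powers (illustrative values)
Z = lagged_regressors(list(range(10)), list(range(100, 110)))
print(len(Z), len(Z[0]))  # 4 12
```

The first `n_lags` periods are dropped because they lack a full lag history, which is why the effective training window is slightly shorter than the raw 672 h.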
The optimal hyper-parameters for all models and cases are given in Table \ref{tab:hyper_parameters_g2v}. The information given in this table is quite valuable and we can make two main remarks. First, cases \textit{sync} and \textit{non-sync} are price-driven, and thus their optimal values $H^*$ are very high ($0.82$ and $0.94$, respectively) compared to the optimal value ($H^* = 0.64$) for the case \textit{naive-ch}, which is insensitive to prices. In other words, the power bounds for the former cases are wider than for the latter one. Therefore, the \textit{optimality problem}, which is used to estimate the marginal utility, plays a major role in learning the aggregate response of the EV fleet for the price-driven cases. This is expected, as the marginal utilities encode the impact of the current electricity price on the aggregate power of the EV fleet. Second, it should be noted that the values of $H^*$ for the models \textit{kio} and \textit{lio} are quite similar, except for the case \textit{naive-ch}, for which \textit{lio} is unable to identify the insensitivity of the aggregate power to the price.
\begin{table}[h!]
\caption{Optimal Values of the Hyper-Parameters}
\label{tab:hyper_parameters_g2v}
\centering
\begin{tabular}{ccccc}
\hline
\multirow{1}{*}{Case} & \textit{kio} & \textit{krr} & \textit{svr} & \textit{lio} \\
\hline
\multirow{3}{*}{\textit{naive-ch}} & $H^*=0.64$ & $\delta^*=0.01$ & $C^*=100$ & $H^*=0.91$ \\
& $M^*=0.0002$ & $\gamma^*=0.1$ & $\gamma=0.01$ & \\
& $\gamma^*=0.1$ & & & \\
\hline
\multirow{3}{*}{\textit{sync}} & $H^*=0.82$ & $\delta^*=0.1$ & $C^*=10$ & $H^*=0.89$ \\
& $M^*=0.0001$ & $\gamma^*=0.1$ & $\gamma^*=0.1$ & \\
& $\gamma^*=0.1$ & & & \\
\hline
\multirow{3}{*}{\textit{non-sync}} & $H^*=0.94$ & $\delta^*=0.1$ & $C^*=1$ & $H^*=0.94$ \\
& $M^*=0.002$ & $\gamma^*=0.1$ & $\gamma=0.1$ & \\
& $\gamma^*=0.01$ & & & \\
\hline
\end{tabular}
\end{table}
\begin{table}[h!]
\caption{Error Metrics -- Cases without V2G Services (\rm{kW})}
\label{tab:error_metrics_g2v}
\centering
\begin{tabular}{ccccccc}
\hline
\multirow{2}{*}{Model} & \multicolumn{2}{c}{\textit{naive-ch}} & \multicolumn{2}{c}{\textit{sync}} & \multicolumn{2}{c}{\textit{non-sync}} \\
\cline{2-7}
& RMSE & MAE& RMSE & MAE & RMSE & MAE \\
\hline
\textit{kio} & 8.6 & 3.7 & 35.2 & 13.3 & 5.5 & 3.8 \\
\textit{krr} & 9.0 & 3.5 & 35.5 & 15.7 & 7.4 & 5.2 \\
\textit{svr} & 10.4 & 5.7& 41.7 & 14.7 & 7.6 & 5.0 \\
\textit{lio} & 16.8 & 6.4 & 59.3 & 23.0 & 5.9 & 3.9 \\
\textit{h-naive} & 90.3 & 29.3 & 72.7 & 25.3 & 11.3 & 7.1 \\
\textit{d-naive} & 13.2 & 4.8 & 64.8 & 22.3 & 17.3 & 13.3 \\
\textit{w-naive} & 10.8 & 4.6 & 49.1 & 15.7 & 13.0 & 9.1 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[h]
\centerline{\includegraphics[scale=0.6]{figure5.pdf}}
\vspace{-10pt}
\caption{Estimated power bounds as well as forecast and observed power for case \textit{naive-ch}.}
\label{fig:case_g2v_naive}
\end{figure}
The error metrics on the test set for all models are compared in Table \ref{tab:error_metrics_g2v} for the three cases. In the \textit{naive-ch} case, the least RMSE is obtained with the proposed model \textit{kio}, with an error reduction of 4.4\% and 17.3\% compared to \textit{krr} and \textit{svr}, respectively. In the \textit{sync} case, the proposed model \textit{kio} achieves a 28.3\% reduction in RMSE and a 15.3\% reduction in MAE compared to \textit{w-naive}, which provides the best performance among the naive models. As expected, we can also observe that \textit{kio} outperforms \textit{lio}, reducing RMSE and MAE by 40.6\% and 42.2\%, respectively, since \textit{kio} is able to capture the nonlinear relations between the EV-fleet power and the electricity price shown in Fig. \ref{fig:power_sync}. Finally, the performance of \textit{kio} is comparable to that of other machine-learning techniques such as \textit{krr} or \textit{svr}. In the \textit{non-sync} case, the aggregator behaves as a price-responsive EV fleet with a linear dependence, and thus both the \textit{kio} and \textit{lio} models achieve the least errors in $\Omega^{test}$ compared to the other benchmarks. Note also that, in this case, \textit{h-naive} attains the least error among the naive models. However, the RMSE of \textit{kio} is 51.3\%, 25.7\%, and 27.6\% lower than that attained with the models \textit{h-naive}, \textit{krr}, and \textit{svr}, in that order. Overall, the \textit{kio} model stands out for its versatility, since it makes good predictions regardless of the relationship between the EV-fleet power and the price.
\begin{figure}[h]
\centerline{\includegraphics[scale=0.6]{figure6.pdf}}
\vspace{-10pt}
\caption{Results for case \textit{sync}: (a) Estimated marginal utility price per block (in grey) and electricity price (in black) and (b) estimated power bounds as well as forecast and observed power. Note that the inset plot represents the bid price function and the corresponding electricity price of hour 5.}
\label{fig:case_g2v_a0}
\end{figure}
Apart from the improvement in terms of RMSE and MAE of \textit{kio} over the rest of the models when learning the EV-fleet power, the proposed approach is able to provide a bid curve, as imposed by rules in electricity markets \citep{omie}. Figures \ref{fig:case_g2v_naive}--\ref{fig:case_g2v_anot0} show the results for cases \textit{naive-ch}, \textit{sync}, and \textit{non-sync}, respectively. In Figs. \ref{fig:case_g2v_a0}.(a) and \ref{fig:case_g2v_anot0}.(a), we show the estimated marginal utilities of the six blocks for each hour of the first day of $\Omega^{test}$ for the cases \textit{sync} and \textit{non-sync}. In those figures, we also show the decreasing bid curves at hour 5 in the inset plots, which are also presented in Tables \ref{tab:bid_curve_sync}--\ref{tab:bid_curve_nonsync}. Correspondingly, Figs. \ref{fig:case_g2v_naive}, \ref{fig:case_g2v_a0}.(b), and \ref{fig:case_g2v_anot0}.(b) depict the estimated bounds as well as the forecast and observed EV-fleet power for such a day.
In the \textit{naive-ch} case, \textit{kio} provides coincident power bounds, as illustrated in Fig. \ref{fig:case_g2v_naive}, which means that the \textit{optimality problem} (i.e., the marginal-utility estimation problem, which captures the price effect) is not needed, and thus the aggregate charging power can be directly explained by estimating the bounds. In Figs. \ref{fig:case_g2v_a0}.(a) and \ref{fig:case_g2v_anot0}.(a), we can observe that the \textit{kio} model identifies whether the EV-fleet power is price-responsive or not by assigning different values to the marginal utility of each block. On the one hand, in Fig. \ref{fig:case_g2v_a0}.(a), the blockwise marginal utilities are almost identical at any time period, thus suggesting an almost all-or-nothing price response of the EV fleet for the \textit{sync} case. In this case, the power bounds are basically shaping the EV-fleet charging forecast. On the other hand, for the \textit{non-sync} case, the bounds are generally wider than those obtained for the \textit{sync} case (see Fig. \ref{fig:case_g2v_anot0}.(b)). The marginal utility is thus shaping the aggregate power forecast, since the \textit{kio} model gives rise to a wider range of marginal utility values at any time period, as can be observed in Fig. \ref{fig:case_g2v_anot0}.(a). In short, unlike other forecasting tools, we gain interpretability with the proposed IO approach \textit{kio} due to two aspects: (i) the width of the bounds, which sheds light on the price-responsiveness of the EV fleet; and (ii) the derivation of a bid curve when there exists a dependence of the EV-fleet power on the price, as can be seen in the inset plots of Figs. \ref{fig:case_g2v_a0}.(a)--\ref{fig:case_g2v_anot0}.(a) and Tables \ref{tab:bid_curve_sync}--\ref{tab:bid_curve_nonsync}.
\begin{figure}[h]
\centerline{\includegraphics[scale=0.6]{figure7.pdf}}
\vspace{-10pt}
\caption{Results for case \textit{non-sync}: (a) Estimated marginal utility price per block (in grey) and electricity price (in black) and (b) estimated power bounds as well as forecast and observed power. Note that the inset plot represents the bid price function and the corresponding electricity price of hour 5.}
\label{fig:case_g2v_anot0}
\end{figure}
\begin{table}[h]
\caption{Bid Curve at Hour 5 -- Case \textit{sync}}
\label{tab:bid_curve_sync}
\centering
\begin{tabular}{ccccccc}
\hline
Block & 1 & 2 & 3 & 4 & 5 & 6\\
\hline
Marginal utility (\euro/MWh) &42.7 &42.4&42.4&42.4&42.4&42.4\\
\hline
Power block (kW) &38.7 &31.1&31.1&31.1&31.1&31.1\\
\hline
\end{tabular}
\end{table}
\begin{table}[h]
\caption{Bid Curve at Hour 5 -- Case \textit{non-sync}}
\label{tab:bid_curve_nonsync}
\centering
\begin{tabular}{ccccccc}
\hline
Block & 1 & 2 & 3 & 4 & 5 & 6\\
\hline
Marginal utility (\euro/MWh) &45.5 &45.4 &44.7 &43.3 &41.6 &40.5\\
\hline
Power block (kW) &26.0 &8.1 &8.1 &8.1 &8.1 &8.1\\
\hline
\end{tabular}
\end{table}
\subsection{Forecast Results with V2G Services}
\label{sec:results_v2g}
We now assume that EVs may enable their V2G capabilities (i.e. $B_v^d \neq 0$ in the model \eqref{sim_of}--\eqref{sim_eq8}) and we compare the results for two cases: (i) a highly synchronized power case with $C^S = 0$; and (ii) a case in which power synchronization is avoided with $C^S = 52$ \euro/MWh$^2$. These cases are denoted \textit{sync} and \textit{non-sync}, respectively. The problem setup is identical to that explained in Section \ref{sec:results_g2v}. Table \ref{tab:error_metrics_v2g} provides the error metrics on $\Omega^{test}$ for all models. As can be seen, \textit{kio} clearly outperforms the \textit{lio} and naive models in both cases. Nevertheless, the performance of \textit{lio} in terms of error is closer to that of the proposed approach in the \textit{non-sync} case because the EV-fleet power is more price-responsive. Also, the performance of \textit{kio} is similar to that of the machine-learning techniques \textit{krr} and \textit{svr} in the \textit{sync} case, while the RMSE (MAE) decreases by 4.8\% and 5.9\% (11.4\% and 6.7\%) compared with \textit{krr} and \textit{svr}, respectively, in the \textit{non-sync} case.
\begin{table}[h!]
\caption{Error Metrics -- Cases with V2G Services (\rm{kW})}
\label{tab:error_metrics_v2g}
\centering
\begin{tabular}{ccccc}
\hline
\multirow{2}{*}{Model} & \multicolumn{2}{c}{\textit{sync}} & \multicolumn{2}{c}{\textit{non-sync}} \\
\cline{2-5}
& RMSE & MAE& RMSE & MAE \\
\hline
\textit{kio} & 148.6 & 94.3 & 33.5 & 20.9\\
\textit{krr} & 146.9 & 108.4 & 35.2 & 23.6\\
\textit{svr} & 147.1 & 92.4 & 35.6 & 22.4\\
\textit{lio} & 172.1 & 120.0 & 36.2 & 23.7\\
\textit{h-naive} & 235.4 & 142.2 & 49.5 & 30.0\\
\textit{d-naive} & 261.8 & 162.5 & 71.1 & 50.2\\
\textit{w-naive} & 199.5 & 112.3 & 60.4 & 37.7\\
\hline
\end{tabular}
\end{table}
\section{Conclusions}
\label{sec:conclusion}
This paper proposes a data-driven two-step estimation procedure relying on two main concepts: inverse optimization and kernel regression. This novel approach makes it possible to capture the nonlinear relationship between an aggregate price-response and the associated explanatory variables, while deriving a bid/offer curve, as required by electricity market rules. We apply this framework to learn the aggregate price-response of an EV fleet. The proposed approach attains a better performance (around 20\%--40\% error reduction) than naive or linear models. Moreover, it achieves a similar or better performance (depending on the case) than state-of-the-art machine-learning techniques such as support vector regression or kernel-ridge regression. Overall, the proposed approach is versatile, since its performance is good regardless of the price-power relation. In addition, it increases the interpretability of the prediction model compared with existing approaches in the literature, since a bid/offer curve can be readily derived.
\section{Acknowledgements}
This project has received funding in part by the Spanish Ministry of Economy, Industry, and Competitiveness through project ENE2017-83775-P; in part by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 755705); and in part by Fundaci\'on Iberdrola Espa\~na 2018. The authors thankfully acknowledge the computer resources, technical expertise and assistance provided by the SCBI (Supercomputing and Bioinformatics) center of the University of Malaga.
\vspace{-0.5cm}
Sportpaleis plans non-stop 24-hour livestream concert
Taking place on 12 March, the Antwerp arena's stunt will mark exactly one year since concert halls closed due to the outbreak of Covid-19
By IQ on 26 Feb 2021
Antwerp Sportpaleis, Belgium
Antwerp Sportpaleis is organising a 24-hour non-stop livestream concert to mark exactly one year since concert halls closed due to the outbreak of Covid-19.
More than 100 Belgian artists, across all genres, will perform original and cover songs in the empty 18,400-seat arena to show that they are 'ready to storm stages again'.
The '24 Hours Live' event, co-produced by Les Flamands, Sportpaleis Group and Live Nation, will kick off at 6 pm on 12 March and will be streamed in its entirety via hln.be.
Miguel Wiels is part of talent and production agency Les Flamands and one of the artists who will perform on the night: "After a year, the jitters can no longer be contained. Everyone in the industry wants to make music, so that's what we're going to do with my band.
"We have a setlist of more than 400 songs available"
"It's heartwarming how many artists have voluntarily agreed to play with us. We have a setlist of more than 400 songs available. It's going to be a long marathon, and we probably still won't have had enough of it after 24 hours. On the contrary: it is a foretaste of when we will be able to stand in front of a live audience again. That moment is getting closer, and we have every confidence in it. This stunt is a good dress rehearsal for that."
Prime minister Jan Jambon says: "We have had the most disastrous year in the history of our culture and events sector. I am very happy to contribute to 24 Hours Live. Because that's what we have to do: let the music go on, no matter how difficult the circumstances. I hope that we will soon be able to resume our normal life."
Sportpaleis recently raised €50,000 for Belgium's live music industry through its Lights for Live fundraising initiative.
# Integrating over a 3D ExampleData shape?

Source: https://community.wolfram.com/groups/-/m/t/2383644

Posted 11 days ago | 162 Views | 2 Replies | 0 Total Likes

Hi, I am a new student with Mathematica. I would like to ask something: there is an exercise we were doing in class that I did not understand well. Can I get help?

    NIntegrate[ExampleData[{"Geometry3D", "Beethoven"}] 1, {x, y, z}]
    Integrate[Sin[x], x]

How do I get the answer using this?

2 Replies

Posted 11 days ago

Maybe something like this?

    NIntegrate[1, Element[{x, y, z}, DiscretizeGraphics@ExampleData[{"Geometry3D", "Beethoven"}]]]

Or this, which gives the same result but disagrees with what is shown as the correct answer: 872.56

    NIntegrate[1, {x, y, z} ∈ ExampleData[{"Geometry3D", "Beethoven"}, "Region"]]
In the early 1900s, however, the US government began experimenting with the domestic production of certain imported drugs, including cannabis (Stockberger, 1919). The first legal outdoor cannabis growing season is, for most Canadians, officially in full swing. Seed packs and single seeds are sold, which is an option that many other seed banks do not offer. Government has also intermittently tested the THC content of cannabis samples over the past two decades. However, with short-flowering cannabis seeds, full bloom can be achieved in just eight weeks. Plants which are rooted directly into the ground can be grown using automated feeding systems such as dripper systems etc. To enable them to breathe you must plant in an airy substrate. The Columbian Exchange: Biological and Cultural Consequences of 1492. In any case, your only recourse is to start over with a new batch of seeds. With marijuana legalized, Global News looks at how commercially produced pot is grown.
The strength of this strain, some users also note darting effects and intense paranoia. Known to have a very high concentration of THC, especially after they have just been plucked from the plant. With a variety of outdoor growers in the famous "Emerald Triangle" of pot-growing counties in Northern California. Hash will be almost an oil, or keef can be dissolved in alcohol, then the alcohol is allowed to evaporate. Relatively open area, allowing plenty of air circulation throughout the day. Pollen like speed queen seeds other male plant varieties which germinate female buds to create seeds. Argues that this means that the hemp grown under 2014 pilot programs is legally produced, can be legally possessed, and therefore can be legally transported across state lines under the new Farm Bill. And, this way you will be left over with seeds for next year. Equipped the plant to thrive in the cold, harsh environment of the Himalayan Mountains on which it was originally found.
Where is the research at in terms of supporting the use of medical cannabis. But effects occurred at much lower plasma concentrations than they did after the other two methods of administration. Indica or sativa depends on its levels of cannabinoids and terpenes, according to Leafly. Green Crack does great both indoors and outdoors(grows huge). Golden Goat will definitely make you feel ready to join the craziest parties. There are forums online where growers trade and sell their Cannabis seeds. This ticket giveaway is compliments of Future Arts Foundation. Phase in which plants grow and prepare for the flowering period.
Sativa is taller with a longer growing season comfortable in warmer climates. Has shown to help reduce the spread of breast, spliff seeds dutch blue automatic spliff seeds dutch blue automatic colon and lung cancer. The newest addition to the growing list of states that have legalized recreational cannabis. Growing seems to be a rapidly expanding pastime amongst cannabis growers of all backgrounds in a wide range of countries.
# Introduction to Tornado
Michael Dory, Allison Parrish, and Brendan Berg
# Introduction to Tornado
By Michael Dory, Allison Parrish, and Brendan Berg
Copyright © 2012 Michael Dory, Allison Parrish, and Brendan Berg. All rights reserved.
Printed in the United States of America.
Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.
O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (<http://oreilly.com/safari>). For more information, contact our corporate/institutional sales department: 800-998-9938 or _corporate@oreilly.com_.
* Editors: Andy Oram and Mike Hendrickson
* Production Editor: Melanie Yarbrough
* Interior Designer: David Futato
* Cover Designer: Karen Montgomery
* Illustrator: Robert Romano
* March 2012: First Edition
# Revision History for the First Edition
* 2012-03-16: First release
* 2018-06-08: Second release
See <http://oreilly.com/catalog/errata.csp?isbn=9781449309077> for release details.
The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. _Introduction to Tornado_ , the cover image, and related trade dress are trademarks of O'Reilly Media, Inc.
While the publisher and the authors have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the authors disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.
978-1-449-30907-7
[LSI]
# Preface
# Conventions Used in This Book
The following typographical conventions are used in this book:
Italic
Indicates new terms, URLs, email addresses, filenames, and file extensions.
`Constant width`
Used for program listings, as well as within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords.
**`Constant width bold`**
Shows commands or other text that should be typed literally by the user.
_`Constant width italic`_
Shows text that should be replaced with user-supplied values or by values determined by context.
###### Tip
This icon signifies a tip, suggestion, or general note.
###### Caution
This icon indicates a warning or caution.
# Using Code Examples
This book is here to help you get your job done. In general, you may use the code in this book in your programs and documentation. You do not need to contact us for permission unless you're reproducing a significant portion of the code. For example, writing a program that uses several chunks of code from this book does not require permission. Selling or distributing a CD-ROM of examples from O'Reilly books does require permission. Answering a question by citing this book and quoting example code does not require permission. Incorporating a significant amount of example code from this book into your product's documentation does require permission.
We appreciate, but do not require, attribution. An attribution usually includes the title, author, publisher, and ISBN. For example: " _Introduction to Tornado_ by Michael Dory, Allison Parrish, and Brendan Berg (O'Reilly). Copyright 2012 Michael Dory, Allison Parrish, and Brendan Berg, ISBN 978-1-4493-0907-7."
If you feel your use of code examples falls outside fair use or the permission given above, feel free to contact us at _permissions@oreilly.com_.
# Safari® Books Online
###### Note
Safari Books Online is an on-demand digital library that lets you easily search over 7,500 technology and creative reference books and videos to find the answers you need quickly.
With a subscription, you can read any page and watch any video from our library online. Read books on your cell phone and mobile devices. Access new titles before they are available for print, and get exclusive access to manuscripts in development and post feedback for the authors. Copy and paste code samples, organize your favorites, download chapters, bookmark key sections, create notes, print out pages, and benefit from tons of other time-saving features.
O'Reilly Media has uploaded this book to the Safari Books Online service. To have full digital access to this book and others on similar topics from O'Reilly and other publishers, sign up for free at _http://my.safaribooksonline.com_.
# How to Contact Us
Please address comments and questions concerning this book to the publisher:
* O'Reilly Media, Inc.
* 1005 Gravenstein Highway North
* Sebastopol, CA 95472
* 800-998-9938 (in the United States or Canada)
* 707-829-0515 (international or local)
* 707-829-0104 (fax)
We have a web page for this book, where we list errata, examples, and any additional information. You can access this page at:
* _http://shop.oreilly.com/product/0636920021292.do_
To comment or ask technical questions about this book, send email to:
* _bookquestions@oreilly.com_
For more information about our books, courses, conferences, and news, see our website at _http://www.oreilly.com_.
Find us on Facebook: _http://facebook.com/oreilly_
Follow us on Twitter: _http://twitter.com/oreillymedia_
Watch us on YouTube: _http://www.youtube.com/oreillymedia_
# Acknowledgements
We'd like to thank our editor Andy Oram, for all his guidance and insight as we wrote and edited this book, and the O'Reilly community at large for being so helpful and supportive as we went. What started as a short submission to OSCon ultimately led to a host of great things, not least of which is the opportunity to write this book, and we're thrilled to have had the chance to do it.
We'd like to give tremendous thanks to Sumana Harihareswara, who convinced us to start talking about Tornado in the first place, and to Socialbomb and Wurk Happy for giving us the support and opportunity to tinker, explore, and experiment, and eventually prescribe, advocate, and rely on this great software.
Further, we could not have made this book half of what it is without the amazing reviewers who shared their thoughts and opinions with us. The feedback from Jeff Gray, James Linder, Randy Jimenez, and Jonathan Bourland all helped mold our final product.
Witnessing the community that develops around open source projects is particularly inspiring. Seeing Tornado take root so quickly is a testament to Bret Taylor and Dave Recordon's foresight and skill. We would like to thank them, and all the developers whose contributions to Tornado have given us something worth writing about.
Finally, this book could not have been created without the atmosphere, WiFi, and caffeine supply of the coffeehouses of Brooklyn, Manhattan, and Jersey City, to whom we are forever indebted.
Mike would like to express his eternal gratitude to his family and friends for their constant support and encouragement, especially to Jean and John Dory, who understood that a love of blinky lights and black coffee might turn into something useful after all. A big thanks is due to the NYU ITP alumni, faculty, and staff that serve as a constant feed of guidance, support, and ever-evolving inspiration. And most importantly, to his wife Rita, whose encouragement, advice, and understanding made this and everything else possible.
Allison is indebted to her students at NYU's Interactive Telecommunications Program, for whom much of the material in early chapters of the book was originally prepared. Their enthusiasm for the material proved that a book like this one would have an audience, and their helpful feedback made the book better.
Brendan would have had neither the interest, the inclination, nor the aptitude to embark on this project without the 128K Mac that lived in the office on the third floor. The ember that leaped from that little beige box was tended along the way by his parents, Bruce and Catie, and by innumerable mentors and teachers along the way. Thanks especially to Tom Roney and Bob McGrail, who inspired a deep understanding of computation, software, and systems.
# Chapter 1. Introduction
Over the last half decade, the tools available to web developers have grown by leaps and bounds. As technologists continue to push the limits of what web applications can do for users everywhere, we've had to upgrade our toolkit and create frameworks that let us build better applications. We would like to be able to use new toolkits that make it easier for us to write clean and maintainable code that scales efficiently when deployed to users all across the globe.
This brings us to Tornado, a fantastic choice for writing powerful web applications that are simple to create, extend, and deploy. The three of us had all fallen in love with Tornado for its speed, simplicity, and scalability, and after trying it out on a few personal projects, we've put it to work in our day jobs. We've seen it increase developer speed (and happiness!) on projects large and small, and at the same time have been impressed time and again by its robustness and lightweight footprint.
This book is meant to be an overview of the Tornado web server, and will walk readers through the basics of the framework, some sample applications, and best practices for use in the real world. We'll use examples to detail how Tornado works, what you can do with it, and what you'd be best avoiding as you build your first applications with it.
In this book, we'll be assuming that you have at least a rough understanding of Python, a sense of how web services work, and a basic familiarity with databases. For more on any of those, there are some great books to consult (including _Learning Python_ , _Restful Web Services_ , and _MongoDB: The Definitive Guide_ ).
And so you can follow along, the code for the examples in this book is available on GitHub. If you have any thoughts on these samples or anything else, we'd love to hear from you there.
So, without further ado, let's dive in!
# What Is Tornado?
Tornado is a powerful, scalable web server written in Python. It's robust enough to handle serious web traffic, yet is lightweight to set up and write for, and can be used for a variety of applications and utilities.
The Tornado we now know is based on a web server framework that was first developed by Bret Taylor and others for FriendFeed, and later open sourced by Facebook when they acquired FriendFeed. Unlike traditional web servers that maxed out at around 10,000 simultaneous connections, Tornado was written with performance in mind, aiming to solve the C10K problem, so by design it's an extremely high-performance framework. It's also packed with tools for dealing with security and user authentication, social networks, and asynchronous interaction with external services like databases and web APIs.
##### A Bit More About the C10K Problem
Thread-based servers like Apache maintain a pool of OS threads for incoming connections. Apache assigns each HTTP connection to one of those threads, spawning a new thread if all existing threads are busy and more memory is available. Although it varies from system to system, most Linux distributions have an 8 MB default thread stack size. Apache's architecture scales unpredictably under load, and maintaining a large pool of open connections that are each waiting for data can easily consume all the free memory available to a server.
Most social web applications display real-time updates for new messages, status changes, and user notifications, which require the client to keep an open connection waiting for any server responses. These HTTP keep-alive or Comet requests can quickly saturate Apache's maximum thread pool. Once the thread pool is depleted of available workers, the server is unable to respond to new requests.
Asynchronous servers are relatively new to the scene, but they are designed to alleviate the limitations of thread-based web servers. Servers such as Node.js, lighttpd, and Tornado use cooperative multitasking to scale gracefully as load increases. That is to say, an asynchronous server will explicitly yield control to pending requests if the current request is waiting for data from another source (a database query or HTTP request, for example). A common pattern that asynchronous servers use to resume a paused operation is to invoke callbacks when the appropriate data is ready. We discuss the callback pattern and a number of applications for Tornado's asynchronous features in Chapter 5.
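The callback pattern mentioned above can be pictured with a few lines of plain Python. This is only a sketch with no event loop involved, and the function names here are hypothetical:

```python
# Instead of blocking until a result is ready, the caller hands over a
# function ("callback") to be invoked with the data once it arrives.
# Here the "slow" operation is simulated synchronously for clarity.
def fetch_rows(query, callback):
    rows = ["row for %s" % query]  # stand-in for slow database I/O
    callback(rows)                 # resume the paused operation

received = []
fetch_rows("SELECT 1", received.extend)
print(received)  # -> ['row for SELECT 1']
```

In a real asynchronous server the callback fires later, after the I/O completes, which is what lets a single thread service many connections at once.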
Since its release on September 10, 2009, Tornado has garnered a lot of community support, and has been adopted to fit a variety of purposes. In addition to FriendFeed and Facebook, a host of companies have turned to Tornado in production, including Quora, Turntable.fm, Bit.ly, Hipmunk, and MyYearbook, to name a few.
In short, if you're looking for a replacement for your giant CMS or monolithic development framework, Tornado is probably not the way to go. Tornado doesn't require that you have giant models set up a particular way, or handle forms in a certain fashion, or anything like that. What it _does_ do is let you write super fast web applications quickly and easily. If you want to create a scalable social application, real-time analytics engine, or RESTful API—all with the power and simplicity of Python—then Tornado (and this book) is for you!
## Getting Started with Tornado
Installing Tornado on most *nix systems is easy—you can either get it from PyPI (and install via `easy_install` or `pip`), or download the source from GitHub and build it like this:
$ **curl -L -O http://github.com/downloads/facebook/tornado/tornado-2.1.1.tar.gz**
$ **tar xvzf tornado-2.1.1.tar.gz**
$ **cd tornado-2.1.1**
$ **python setup.py build**
$ **sudo python setup.py install**
Tornado is not officially supported on Windows, but it can be installed via ActivePython's PyPM package manager like so:
C:\> **pypm install tornado**
Once Tornado is installed on your machine, you're good to go! A bunch of demos are included with the package, which include examples for building a blog, integrating with Facebook, running a chat server, and more. We'll be walking through some sample applications step by step later in this book, but be sure to have a look at these later for reference as well.
###### Caution
We're assuming for these examples that you are using a Unix-based system and have Python 2.6 or 2.7 installed. If so, you won't need anything aside from the Python standard library. You can run Tornado under Python 2.5 provided you have installed `pycURL`, `simpleJSON`, and the Python development headers, and on Python 3.2 with the `distribute` package. However, you should note that Python 3+ support is new as of Tornado 2.0, and the Tornado team has advised developers to continue to keep an eye out for bugs on that front.
## Community and Support
For questions, examples, and general how-to's, the official Tornado documentation is a great place to start. There's a variety of examples and breakdowns of features at tornadoweb.org, and more specific details and changes can be seen at Facebook's Tornado repository on GitHub. For more specific concerns, the Tornado Web Server Google Group is active and full of folks who use Tornado on a daily basis.
# Simple Web Services
Now that we've covered what Tornado is, let's look at what it can do. To start, we'll go over the basics of writing a simple web service with Tornado.
## Hello Tornado
Tornado is a framework for writing responses to HTTP requests. Your job as a programmer is to write "handlers" that respond to HTTP requests that match particular criteria. Here's a basic example of a fully functional Tornado application:
##### Example 1-1. The basics: hello.py
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
from tornado.options import define, options
define("port", default=8000, help="run on the given port", type=int)
class IndexHandler(tornado.web.RequestHandler):
    def get(self):
        greeting = self.get_argument('greeting', 'Hello')
        self.write(greeting + ', friendly user!')

if __name__ == "__main__":
    tornado.options.parse_command_line()
    app = tornado.web.Application(handlers=[(r"/", IndexHandler)])
    http_server = tornado.httpserver.HTTPServer(app)
    http_server.listen(options.port)
    tornado.ioloop.IOLoop.instance().start()
Most of the work in making a Tornado application is to define classes that extend the Tornado `RequestHandler` class. In this case, we've made a simple application that listens for requests on a given port, and responds to requests to the root resource (`"/"`).
Try running the program yourself on the command line to test it out:
$ **python hello.py --port=8000**
Now you can go to `http://localhost:8000/` in a web browser, or open up a separate terminal window to test out the application with curl:
$ **curl http://localhost:8000/**
Hello, friendly user!
$ **curl http://localhost:8000/?greeting=Salutations**
Salutations, friendly user!
Let's break this example down into smaller chunks and analyze them one by one:
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
At the top of the program, we import various Tornado libraries. There are other helpful libraries included with Tornado, but you'll need to import at least these four to get this example running:
from tornado.options import define, options
define("port", default=8000, help="run on the given port", type=int)
Tornado includes a helpful library (`tornado.options`) for reading options from the command line. We make use of that library here to let us specify which port our application will listen on for HTTP requests. Here's how it works: any option in a `define` statement will become available as an attribute of the global `options` object, if an option with the same name is given on the command line. If the user runs the program with the `--help` parameter, the program will print out all of the options you've defined, along with the text you specified with the `help` parameter in the call to `define`. If the user fails to provide a value for an option we specified, the `default` value for that option will be used instead. Tornado uses the `type` parameter to do basic type checking on the parameter, throwing an error if a value of an inappropriate type is given. Our line, therefore, allows the user to use an integer `port` argument, which we can access in the body of the program as `options.port`. If the user doesn't specify a value, it defaults to `8000`.
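For readers who know the standard library, the behavior described above is loosely analogous to the following `argparse` sketch. This is not Tornado's implementation, just an illustration of what a typed, defaulted command-line option looks like:

```python
import argparse

# A rough standard-library analogue of tornado.options: a typed option
# with a default and help text, exposed as an attribute after parsing.
parser = argparse.ArgumentParser()
parser.add_argument("--port", type=int, default=8000,
                    help="run on the given port")

options = parser.parse_args(["--port", "9000"])  # simulated command line
print(options.port)  # -> 9000 (an int, thanks to type=int)
```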
class IndexHandler(tornado.web.RequestHandler):
    def get(self):
        greeting = self.get_argument('greeting', 'Hello')
        self.write(greeting + ', friendly user!')
This is a Tornado request handler class. When handling a request, Tornado instantiates this class and calls the method corresponding to the HTTP method of the request. In this example, we've defined only a `get` method, meaning that this handler will respond only to HTTP `GET` requests. We'll look at handlers that implement more than one HTTP method later.
greeting = self.get_argument('greeting', 'Hello')
Tornado's `RequestHandler` class has a number of useful built-in methods, including `get_argument`, which we use here to get an argument `greeting` from the query string. (If no such argument is present in the query string, Tornado will use the second argument provided to `get_argument`, if any, as a default.)
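The default-value behavior can be sketched roughly as follows. This is a hypothetical simplification; the real method additionally decodes argument values and responds with an HTTP 400 error when a required argument is missing:

```python
# Simplified model of get_argument: query-string arguments map a name to
# a list of values; the last value wins, and a default (if supplied)
# covers the missing-argument case.
def get_argument(arguments, name, default=None):
    values = arguments.get(name, [])
    if not values:
        if default is None:
            raise KeyError(name)  # the real method raises an HTTP error
        return default
    return values[-1]

print(get_argument({"greeting": ["Salutations"]}, "greeting", "Hello"))
# -> Salutations
print(get_argument({}, "greeting", "Hello"))  # -> Hello
```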
self.write(greeting + ', friendly user!')
Another method of the `RequestHandler` class is `write`, which takes a string as a parameter and writes that string into the HTTP response. Here, we take the string supplied in the request's `greeting` parameter, interpolate it into a greeting, and write it back in the response.
if __name__ == "__main__":
tornado.options.parse_command_line()
app = tornado.web.Application(handlers=[(r"/", IndexHandler)])
These are the lines that actually make the Tornado application run. First, we use Tornado's `options` library to parse the command line. Then we create an instance of Tornado's `Application` class. The most important argument to pass to the `__init__` method of the `Application` class is `handlers`. This tells Tornado which classes to use to handle which requests. More on this in a moment.
http_server = tornado.httpserver.HTTPServer(app)
http_server.listen(options.port)
tornado.ioloop.IOLoop.instance().start()
From here on out, this code is boilerplate: once it has been created, we can pass the `Application` object to Tornado's `HTTPServer` object, which then listens on the port we specified on the command line (retrieved through the `options` object). Finally, we grab the singleton instance of Tornado's `IOLoop` and start it, at which point the program is ready to accept HTTP requests.
### The handlers Parameter
Let's take a look at one line from the _hello.py_ example again:
app = tornado.web.Application(handlers=[(r"/", IndexHandler)])
The `handlers` parameter here is important, and worth looking at in further detail. It should be a list of tuples, with each tuple containing a regular expression to match as its first member and a `RequestHandler` class as its second member. In _hello.py_ , we specified only one regular expression/`RequestHandler` pair, but you can put as many of these pairs into the list as needed.
### Specifying paths with regular expressions
Tornado uses the regular expression in the tuples to match the _path_ of the HTTP request. (The path is the portion of the URL that follows the hostname, excluding the query string and fragment.) Tornado treats these regular expressions as though they contain beginning-of-line and end-of-line anchors (i.e., the string `"/"` is assumed to mean `"^/$"`).
When a regular expression has a capture group in it (i.e., a portion of the regular expression is enclosed in parentheses), the matching contents of that group will be passed to the `RequestHandler` object as parameters to the method corresponding to the HTTP request. We'll see how this works in the next example.
## String Service
Example 1-2 is a more sophisticated example program that illustrates what we've gone over so far and introduces a few more basic Tornado concepts.
##### Example 1-2. Handling input: string_service.py
import textwrap
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
from tornado.options import define, options
define("port", default=8000, help="run on the given port", type=int)
class ReverseHandler(tornado.web.RequestHandler):
def get(self, input):
self.write(input[::-1])
class WrapHandler(tornado.web.RequestHandler):
def post(self):
text = self.get_argument('text')
width = int(self.get_argument('width', 40))
self.write(textwrap.fill(text, width))
if __name__ == "__main__":
tornado.options.parse_command_line()
app = tornado.web.Application(
handlers=[
(r"/reverse/(\w+)", ReverseHandler),
(r"/wrap", WrapHandler)
]
)
http_server = tornado.httpserver.HTTPServer(app)
http_server.listen(options.port)
tornado.ioloop.IOLoop.instance().start()
As with the first example, you can run this program on the command line by typing the following:
$ **python string_service.py --port=8000**
The program is a basic framework for an all-purpose web service for string manipulation. Right now, you can do two things with it. First, `GET` requests to `/reverse/string` return the string specified in the URL path in reverse:
$ **curl http://localhost:8000/reverse/stressed**
desserts
$ **curl http://localhost:8000/reverse/slipup**
pupils
Second, `POST` requests to the `/wrap` resource will take text specified in an argument `text` and return that text, wrapped to the width specified in an argument named `width`. The following request specifies a string but no width, so the output is wrapped to the default width specified in the program's `get_argument` call, 40 characters:
$ **curl http://localhost:8000/wrap »
-d text=Lorem+ipsum+dolor+sit+amet,+consectetuer+adipiscing+elit.**
Lorem ipsum dolor sit amet, consectetuer
adipiscing elit.
###### Note
The cURL command just shown was broken onto two lines for formatting reasons, but should be typed as a single line. As a convention, we will use the right angle quote character (») to indicate a line continuation.
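The wrapping behavior comes straight from Python's standard library, so you can preview what the handler will return without running the server at all. Here is an illustrative sketch using `textwrap.fill` directly with the handler's default width of 40 characters:

```python
import textwrap

text = "Lorem ipsum dolor sit amet, consectetuer adipiscing elit."
wrapped = textwrap.fill(text, 40)  # wrap to 40 columns, the handler's default

print(wrapped)
# Lorem ipsum dolor sit amet, consectetuer
# adipiscing elit.
```

Every line of the result fits within the requested width, which is exactly what the `/wrap` resource returns in the cURL session above.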
The string service example shares most of its code with the example presented in the previous section. Let's zero in on some parts of the code that are new. First, let's look at the value passed in the `handlers` parameter to the `Application` constructor:
app = tornado.web.Application(handlers=[
(r"/reverse/(\w+)", ReverseHandler),
(r"/wrap", WrapHandler)
])
In the previous code, the `Application` class is instantiated with two `RequestHandler` classes in the `handlers` parameter. The first tuple directs Tornado to send requests whose path matches the following regular expression to `ReverseHandler`:
/reverse/(\w+)
This regular expression tells Tornado to match any path beginning with the string `/reverse/` followed by one or more alphanumeric characters. The parentheses tell Tornado to save the string that matched inside the parentheses, and pass that string to the `RequestHandler`'s request method as a parameter. Check out the definition of `ReverseHandler` to see how it works:
class ReverseHandler(tornado.web.RequestHandler):
def get(self, input):
self.write(input[::-1])
You can see here that the `get` method takes an additional parameter `input`. This parameter will contain whatever string was matched inside the first set of parentheses in the regular expression that matched the handler. (If there are additional sets of parentheses in the regular expression, the matched strings will be passed in as additional parameters, in the same order as they occurred in the regular expression.)
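You can see the same mechanics outside of Tornado with Python's `re` module: conceptually, the framework matches the request path against each pattern and passes the captured groups along as positional arguments. The following is a rough, hypothetical sketch of that dispatch step, not Tornado's actual implementation:

```python
import re

def dispatch(path, pattern, method):
    """Toy version of Tornado's routing: match the path against the
    pattern and pass each captured group to the handler method, in order."""
    match = re.match(pattern + "$", path)  # Tornado anchors its patterns
    if match is None:
        return None  # in a real server, this would fall through to a 404
    return method(*match.groups())

# A bare function standing in for ReverseHandler.get
reverse = lambda s: s[::-1]

print(dispatch("/reverse/stressed", r"/reverse/(\w+)", reverse))  # desserts
```

A path that doesn't match the pattern (say, `/wrap`) simply returns `None` here, mirroring how Tornado would move on to the next handler or return a 404.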
Now, let's take a look at the definition of `WrapHandler`:
class WrapHandler(tornado.web.RequestHandler):
def post(self):
text = self.get_argument('text')
width = int(self.get_argument('width', 40))
self.write(textwrap.fill(text, width))
The `WrapHandler` class handles requests that match the path `/wrap`. This handler defines a `post` method, meaning that it accepts requests with an HTTP method of `POST`.
We've previously used the `RequestHandler` object's `get_argument` method to grab parameters off of a request's query string. It turns out we can use the same method to get parameters passed into a `POST` request. (Tornado understands `POST` requests with URL-encoded or multipart bodies.) Once we've grabbed the text and width arguments from the `POST` body, we use Python's built-in `textwrap` library to wrap the text to the specified width, and write the resulting string to the HTTP response.
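The body of a URL-encoded `POST` like the cURL example earlier is ordinary form data, and you can inspect the format that `get_argument` is parsing behind the scenes with the standard library. This sketch uses `urllib.parse` from Python 3 rather than Tornado itself:

```python
from urllib.parse import parse_qs

# The kind of body cURL sends with -d: keys and values, with + for spaces
body = "text=Lorem+ipsum+dolor&width=30"
args = parse_qs(body)

print(args["text"][0])   # Lorem ipsum dolor
print(args["width"][0])  # 30
```

Note that each value arrives as a string (and `parse_qs` returns lists, since a key may repeat), so numeric arguments like `width` need converting before use.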
## More About RequestHandlers
So far, we've explored the bare basics of `RequestHandler` objects: how to get information from an incoming HTTP request (using `get_argument` and the parameters passed to `get` and `post`) and how to write an HTTP response (using the `write` method). There's a lot more to learn, which we'll get to in subsequent chapters. In the meantime, here are a few things to keep in mind about `RequestHandler` and how Tornado uses it.
### HTTP methods
In the examples discussed so far, each `RequestHandler` class has defined behavior for only one HTTP method. However, it's possible—and useful—to define multiple methods in the same handler. This is a good way to keep conceptually related functionality bundled into the same class. For example, you might write one handler for both a `GET` and a `POST` to an object in a database with a particular ID. Here's an imaginary example, in which the `GET` method for a widget ID returns information about that widget, and the `POST` method makes changes to the widget with that ID in the database:
# matched with (r"/widget/(\d+)", WidgetHandler)
class WidgetHandler(tornado.web.RequestHandler):
def get(self, widget_id):
widget = retrieve_from_db(widget_id)
self.write(widget.serialize())
def post(self, widget_id):
widget = retrieve_from_db(widget_id)
widget['foo'] = self.get_argument('foo')
save_to_db(widget)
We've used only `GET` and `POST` in our examples so far, but Tornado supports any valid HTTP method (`GET`, `POST`, `PUT`, `DELETE`, `HEAD`, `OPTIONS`). You can define behavior for any of these methods simply by defining a method in your `RequestHandler` class with a matching name. The following is another imaginary example, in which a `HEAD` request for a particular frob ID gives information only concerning whether or not the frob exists, while the `GET` method returns the full object:
# matched with (r"/frob/(\d+)", FrobHandler)
class FrobHandler(tornado.web.RequestHandler):
def head(self, frob_id):
frob = retrieve_from_db(frob_id)
if frob is not None:
self.set_status(200)
else:
self.set_status(404)
def get(self, frob_id):
frob = retrieve_from_db(frob_id)
self.write(frob.serialize())
### HTTP status codes
As shown in the previous example, you can explicitly set the HTTP status code of your response by calling the `set_status()` method of the `RequestHandler`. It's important to note, however, that Tornado will set the HTTP status code of your response automatically under some circumstances. Here's a rundown of the most common cases:
404 Not Found
Tornado will automatically return a 404 (Not Found) response code if the path of the HTTP request doesn't match any pattern associated with a `RequestHandler` class.
400 Bad Request
If you call `get_argument` without a default, and no argument with the given name is found, Tornado will automatically return a 400 (Bad Request) response code.
405 Method Not Allowed
If an incoming request uses an HTTP method that the matching `RequestHandler` doesn't define (e.g., the request is `POST` but the handler class only defines a `get` method), Tornado will return a 405 (Method Not Allowed) response code.
500 Internal Server Error
Tornado will return 500 (Internal Server Error) when it encounters any errors that aren't severe enough to cause the program to exit. Any uncaught exceptions in your code will also cause Tornado to return a 500 response code.
200 OK
If the response was successful and no other status code was set, Tornado will return a 200 (OK) response code by default.
When one of the errors above occurs, Tornado will by default send a brief snippet of HTML to the client with the status code and information about the error. If you'd like to replace the default error responses with your own, you can override the `write_error` method in your `RequestHandler` class. For example, Example 1-3 shows our initial _hello.py_ example, but with custom error messages.
##### Example 1-3. Custom error responses: hello-errors.py
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
from tornado.options import define, options
define("port", default=8000, help="run on the given port", type=int)
class IndexHandler(tornado.web.RequestHandler):
def get(self):
greeting = self.get_argument('greeting', 'Hello')
self.write(greeting + ', friendly user!')
def write_error(self, status_code, **kwargs):
self.write("Gosh darnit, user! You caused a %d error." % status_code)
if __name__ == "__main__":
tornado.options.parse_command_line()
app = tornado.web.Application(handlers=[(r"/", IndexHandler)])
http_server = tornado.httpserver.HTTPServer(app)
http_server.listen(options.port)
tornado.ioloop.IOLoop.instance().start()
The following response is what happens when we attempt to `POST` to this handler. Normally, we would get Tornado's default error response, but because we've overridden `write_error`, we get something else:
$ **curl -d foo=bar http://localhost:8000/**
Gosh darnit, user! You caused a 405 error.
## Next Steps
By now you've got the basics under your belt, and we hope you're hungry for more. In the upcoming chapters, we'll show features and techniques that will help you use Tornado to build full-blown web services and web applications. First up: Tornado's template system.
# Chapter 2. Forms and Templates
In Chapter 1, we looked at the basics of setting up a web application with Tornado. We covered handlers, HTTP methods, and the overall structure of the Tornado framework. In this chapter, we're going to take a look at some of the more powerful features that you're likely to use when building web applications.
As with most web frameworks, one of the primary goals of Tornado is to help you write your applications faster, reusing as much of your code as cleanly as possible. While Tornado is flexible enough to allow you to use nearly any template language supported by Python, it contains a lightweight, fast, and flexible templating language within the `tornado.template` module.
# Simple Example: Poem Maker Pro
Let's get started with a simple example called _Poem Maker Pro_. Poem Maker Pro is a web application that presents an HTML form for the user to fill out, and then processes the results of that form. See Example 2-1 for the Python code.
##### Example 2-1. Simple forms and templates: poemmaker.py
import os.path
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
from tornado.options import define, options
define("port", default=8000, help="run on the given port", type=int)
class IndexHandler(tornado.web.RequestHandler):
def get(self):
self.render('index.html')
class PoemPageHandler(tornado.web.RequestHandler):
def post(self):
noun1 = self.get_argument('noun1')
noun2 = self.get_argument('noun2')
verb = self.get_argument('verb')
noun3 = self.get_argument('noun3')
self.render('poem.html', roads=noun1, wood=noun2, made=verb,
difference=noun3)
if __name__ == '__main__':
tornado.options.parse_command_line()
app = tornado.web.Application(
handlers=[(r'/', IndexHandler), (r'/poem', PoemPageHandler)],
template_path=os.path.join(os.path.dirname(__file__), "templates")
)
http_server = tornado.httpserver.HTTPServer(app)
http_server.listen(options.port)
tornado.ioloop.IOLoop.instance().start()
In addition to _poemmaker.py_ , you'll need the two files shown in Examples 2-2 and 2-3 in a subdirectory called _templates_.
##### Example 2-2. Poem Maker form: index.html
<!DOCTYPE html>
<html>
<head><title>Poem Maker Pro</title></head>
<body>
<h1>Enter terms below.</h1>
<form method="post" action="/poem">
<p>Plural noun<br><input type="text" name="noun1"></p>
<p>Singular noun<br><input type="text" name="noun2"></p>
<p>Verb (past tense)<br><input type="text" name="verb"></p>
<p>Noun<br><input type="text" name="noun3"></p>
<input type="submit">
</form>
</body>
</html>
##### Example 2-3. Poem Maker template: poem.html
<!DOCTYPE html>
<html>
<head><title>Poem Maker Pro</title></head>
<body>
<h1>Your poem</h1>
<p>Two {{roads}} diverged in a {{wood}}, and I—<br>
I took the one less travelled by,<br>
And that has {{made}} all the {{difference}}.</p>
</body>
</html>
Run this program on the command line like so:
$ **python poemmaker.py --port=8000**
Now, point your web browser to `http://localhost:8000`. When the web browser requests the root resource (`/`), the Tornado program will render _index.html_ , displaying the simple HTML form in Figure 2-1.
###### Figure 2-1. Poem Maker Pro: Input form
This form contains a number of text fields (named `noun1`, `noun2`, etc.) whose contents will be sent to `/poem` in a `POST` request when the user clicks the "Submit" button. Now fill in the fields and click Submit.
###### Figure 2-2. Poem Maker Pro: Output
In response to that `POST` request, the Tornado application rendered _poem.html_ , interpolating the values that you typed into the form. The result is a slightly modified version of a stanza of Robert Frost's "The Road Not Taken." Figure 2-2 shows what it looks like.
## Rendering Templates
Structurally, `poemmaker.py` is similar to the examples in Chapter 1. We define a few `RequestHandlers` and hand them off to a `tornado.web.Application` object. So what's different? First of all, we're passing the `template_path` parameter to the `__init__` method of the `Application` object:
template_path=os.path.join(os.path.dirname(__file__), "templates")
The `template_path` parameter tells Tornado where to look for _template files_. We'll be going into the exact nature and syntax of template files in this chapter and Chapter 3, but the basic gist is this: templates are HTML files that allow you to embed snippets of Python code. The previous code tells Python to look for template files in a directory named _templates_ , located in the same directory as your Tornado application file.
Once we've told Tornado where to find templates, we can use the `render` method of the `RequestHandler` class to tell Tornado to read in a template file, interpolate any template code found within, and then send the results to the browser. In `IndexHandler`, for example, we find the following:
self.render('index.html')
This code will cause Tornado to find a file called _index.html_ in the _templates_ directory, read its contents, and send it to the browser.
## Interpolation
It turns out that _index.html_ is hardly a "template" at all, seeing that it consists entirely of prebaked HTML markup. This is a fine use for templates, but more often we'll want the HTML output to incorporate values passed into the template from our program. The _poem.html_ template, as rendered by `PoemPageHandler`, is a good example of this. Let's take a look at how it works.
In _poem.html_ , you can see several strings enclosed in double curly brackets (`{{` and `}}`) in the template, like so:
<p>Two {{roads}} diverged in a {{wood}}, and I—<br>
I took the one less travelled by,<br>
And that has {{made}} all the {{difference}}.</p>
The words enclosed in double curly brackets are placeholders, which we want to replace with real values when the template is rendered. We can specify what values will be interpolated in the HTML in their place by passing keyword arguments to the `render` function, with the keywords corresponding to names of the placeholders. Here's the relevant part of the code from `PoemPageHandler`:
noun1 = self.get_argument('noun1')
noun2 = self.get_argument('noun2')
verb = self.get_argument('verb')
noun3 = self.get_argument('noun3')
self.render('poem.html', roads=noun1, wood=noun2, made=verb, difference=noun3)
Here, we're telling the template to use the variable `noun1` (itself taken from the `get_argument` method) as the value for `roads` in the template, `noun2` as the value for `wood` in the template, and so forth. Assuming that the user typed **`pineapples`** , **`grandfather clock`** , **`irradiated`** , and **`supernovae`** into the form (in that order), the resulting HTML would look like this:
<p>Two pineapples diverged in a grandfather clock, and I—<br>
I took the one less travelled by,<br>
And that has irradiated all the supernovae.</p>
# Template Syntax
Now that we've seen a simple example of templates in action, let's go into a bit more detail about how they work. Templates in Tornado are simply text files marked up with Python expressions and control sequences. The syntax of Tornado templates is fairly straightforward and simple. Users familiar with Django, Liquid, or similar frameworks will find a lot of similarities, and should find it easy to pick up.
In "Simple Example: Poem Maker Pro", we showed how to use the `render` method in a web application to send HTML to the browser. You can try out the templating system outside of a Tornado application by importing the template module in the Python interpreter, and printing the output directly.
>>> **from tornado.template import Template**
>>> **content = Template("<html><body><h1>{{ header }}</h1></body></html>")**
>>> **print content.generate(header="Welcome!")**
<html><body><h1>Welcome!</h1></body></html>
## Interpolating Expressions
In Example 2-1, we demonstrated the use of double curly braces to interpolate the value of Python variables into a template. It turns out that you can put any Python expression inside double curly braces. Tornado will insert a string containing whatever that expression evaluated to into the output. Here are a few examples of what's possible:
>>> **from tornado.template import Template**
>>> **print Template("{{ 1+1 }}").generate()**
2
>>> **print Template("{{ 'scrambled eggs'[-4:] }}").generate()**
eggs
>>> **print Template("{{ ', '.join([str(x*x) for x in range(10)]) }}").generate()**
0, 1, 4, 9, 16, 25, 36, 49, 64, 81
## Control Flow Statements
You can also include Python conditionals and loops in your Tornado templates. Control statements are surrounded by `{%` and `%}`, and are used in cases like:
{% if page is None %}
or
{% if len(entries) == 3 %}
Control statements for the most part work just like the corresponding Python statements, with support for `if`, `for`, `while`, and `try`. In each of these cases, `{%` starts a code block and `%}` ends it.
So this template:
<html>
<head>
<title>{{ title }}</title>
</head>
<body>
<h1>{{ header }}</h1>
<ul>
{% for book in books %}
<li>{{ book }}</li>
{% end %}
</ul>
</body>
</html>
When called by a handler that looks like this:
class BookHandler(tornado.web.RequestHandler):
def get(self):
self.render(
"book.html",
title="Home Page",
header="Books that are great",
books=[
"Learning Python",
"Programming Collective Intelligence",
"Restful Web Services"
]
)
Would render the following output:
<html>
<head>
<title>Home Page</title>
</head>
<body>
<h1>Books that are great</h1>
<ul>
<li>Learning Python</li>
<li>Programming Collective Intelligence</li>
<li>Restful Web Services</li>
</ul>
</body>
</html>
One of the best things about Tornado's template language is that, unlike many other Python templating systems, there are no restrictions on what expressions can be used within `if` and `for` blocks. Therefore, you can execute full Python code within your templates.
You can also use `{% set foo = 'bar' %}` to set variables in the middle of control blocks. There's plenty more you can do just within control blocks, but in most cases, you'll be better served by making use of UI modules to do more complex breakdowns for you. We'll take a look at this more in a little bit.
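For instance, a hypothetical fragment like the following uses `set` inside a loop to precompute a value before interpolating it (a sketch only; the `books` variable is assumed to be passed in by the handler):

```
<ul>
  {% for book in books %}
    {% set title = book.upper() %}
    <li>{{ title }}</li>
  {% end %}
</ul>
```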
## Using Functions Inside Templates
Tornado offers several handy functions by default in all templates. These include:
`escape(s)`
Replaces `&`, `<`, and `>` in string _s_ with their corresponding HTML entities.
`url_escape(s)`
Uses `urllib.quote_plus` to replace characters in string _s_ with URL-encoded equivalents.
`json_encode(val)`
Encodes _val_ as JSON. (Under the hood, this is just a call to the `dumps` function in the `json` library. See the relevant documentation for information about what parameters this function accepts and what it returns.)
`squeeze(s)`
Filters string _s_ , replacing sequences of more than one whitespace character with a single space.
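These helpers are thin wrappers over the standard library, so you can get a feel for their behavior without a template in hand. The following sketch uses stdlib equivalents (shown in Python 3) rather than the template functions themselves; the exact escaping rules of Tornado's own helpers may differ slightly:

```python
import json
import re
from urllib.parse import quote_plus
from xml.sax.saxutils import escape  # replaces &, <, and > like escape()

print(escape("a < b & c"))         # a &lt; b &amp; c
print(quote_plus("a b&c"))         # a+b%26c
print(json.dumps({"port": 8000}))  # {"port": 8000}
# A squeeze() equivalent: collapse runs of whitespace to a single space
print(re.sub(r"\s+", " ", "too   many\n spaces").strip())  # too many spaces
```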
###### Warning
In Tornado 1.x, templates are not automatically escaped. In Tornado 2.0, template autoescaping is enabled by default (and can be turned off by passing `autoescape=None` to the Application constructor). Beware of backwards compatibility when migrating from one to the other.
Using a function you've written inside of a template is easy: just pass the name of the function as a template parameter, like any other variable.
>>> **from tornado.template import Template**
>>> **def disemvowel(s):**
... **return ''.join([x for x in s if x not in 'aeiou'])**
...
>>> **disemvowel("george")**
'grg'
>>> **print Template("my name is {{d('mortimer')}}").generate(d=disemvowel)**
my name is mrtmr
# Complete Example: The Alpha Munger
In Example 2-4, we'll put together everything we talked about in this chapter. The application described is called _The Alpha Munger_. The user inputs two texts: a "source" text and a "replacement" text. The application then returns a copy of the "replacement" text in which each word has been replaced by a word from the source text beginning with the same letter. Figure 2-3 shows the form filled out and Figure 2-4 shows the resulting text.
###### Figure 2-3. Alpha Munger: Input form
This application consists of four files: _main.py_ (the Tornado program), _style.css_ (a CSS stylesheet file), _index.html_ , and _munged.html_ (Tornado templates). Let's look at the code:
##### Example 2-4. Complete forms and templates: main.py
import os.path
import random
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
from tornado.options import define, options
define("port", default=8000, help="run on the given port", type=int)
class IndexHandler(tornado.web.RequestHandler):
def get(self):
self.render('index.html')
class MungedPageHandler(tornado.web.RequestHandler):
def map_by_first_letter(self, text):
mapped = dict()
for line in text.split('\r\n'):
for word in [x for x in line.split(' ') if len(x) > 0]:
if word[0] not in mapped: mapped[word[0]] = []
mapped[word[0]].append(word)
return mapped
def post(self):
source_text = self.get_argument('source')
text_to_change = self.get_argument('change')
source_map = self.map_by_first_letter(source_text)
change_lines = text_to_change.split('\r\n')
self.render('munged.html', source_map=source_map, change_lines=change_lines,
choice=random.choice)
if __name__ == '__main__':
tornado.options.parse_command_line()
app = tornado.web.Application(
handlers=[(r'/', IndexHandler), (r'/poem', MungedPageHandler)],
template_path=os.path.join(os.path.dirname(__file__), "templates"),
static_path=os.path.join(os.path.dirname(__file__), "static"),
debug=True
)
http_server = tornado.httpserver.HTTPServer(app)
http_server.listen(options.port)
tornado.ioloop.IOLoop.instance().start()
Note the `static_path` parameter to the `Application` constructor. We'll explain this in more detail below, but for now, all you need to know is that the `static_path` parameter specifies a directory where your application keeps its static resources (like images, CSS files, JavaScript files, etc.). You'll also need to have the _index.html_ and _munged.html_ files (listed in Examples 2-5 and 2-6) in a directory called _templates_.
###### Figure 2-4. Alpha Munger: Output
##### Example 2-5. Alpha Munger form: index.html
<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" href="{{ static_url("style.css") }}">
<title>The Alpha Munger</title>
</head>
<body>
<h1>The Alpha Munger</h1>
<p>Enter two texts below. The replacement text will have its words
replaced by words beginning with the same letter in the source text.</p>
<form method="post" action="/poem">
<p>Source text<br>
<textarea rows=4 cols=55 name="source"></textarea></p>
<p>Text for replacement<br>
<textarea rows=4 cols=55 name="change"></textarea></p>
<input type="submit">
</form>
</body>
</html>
##### Example 2-6. Alpha Munger template: munged.html
<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" href="{{ static_url("style.css") }}">
<title>The Alpha Munger</title>
</head>
<body>
<h1>Your text</h1>
<p>
{% for line in change_lines %}
{% for word in line.split(' ') %}
{% if len(word) > 0 and word[0] in source_map %}
<span class="replaced"
title="{{word}}">{{ choice(source_map[word[0]]) }}</span>
{% else %}
<span class="unchanged" title="unchanged">{{word}}</span>
{% end %}
{% end %}
<br>
{% end %}
</p>
</body>
</html>
Finally, make a file named _style.css_ with the contents of Example 2-7, and put it in a subdirectory named _static_. (We'll discuss the reasons for using the _static_ subdirectory a little bit later.)
##### Example 2-7. Alpha Munger stylesheet: style.css
body {
font-family: Helvetica,Arial,sans-serif;
width: 600px;
margin: 0 auto;
}
.replaced:hover { color: #00f; }
## How It Works
This Tornado application defines two request handler classes: `IndexHandler` and `MungedPageHandler`. The `IndexHandler` class simply renders the template in _index.html_ , which contains a form allowing the user to `POST` a source text (in a field called `source`) and a text to change (in a field called `change`) to `/poem`.
The `MungedPageHandler` is set up to handle these `POST`s to `/poem`. When a request arrives, it performs some basic processing on the incoming data, then renders a template to the browser. The `map_by_first_letter` method splits the incoming text (from the `source` field) into words, then creates a dictionary in which individual letters of the alphabet are associated with words beginning with that letter in the text (which we put into a variable called `source_map`). This dictionary is then passed to the template _munged.html_ , along with the text that the user specified for replacement (in the `change` field of the form). Additionally, we pass in the Python standard library's `random.choice` function, which takes a list and returns a random element from that list.
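You can exercise this mapping logic on its own to see the dictionary it builds. Here is a standalone copy of the `map_by_first_letter` logic (minus the class) run against a tiny input:

```python
def map_by_first_letter(text):
    # Group every word in the text by its first letter
    mapped = dict()
    for line in text.split('\r\n'):
        for word in [x for x in line.split(' ') if len(x) > 0]:
            if word[0] not in mapped:
                mapped[word[0]] = []
            mapped[word[0]].append(word)
    return mapped

source = "two roads diverged\r\nin a yellow wood"
print(map_by_first_letter(source))
# {'t': ['two'], 'r': ['roads'], 'd': ['diverged'], 'i': ['in'],
#  'a': ['a'], 'y': ['yellow'], 'w': ['wood']}
```

With a larger source text, each letter would map to a list of candidates, which is what makes `random.choice` useful in the template.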
In _munged.html_ , we iterate over each line in the replacement text, then iterate over each word in the line. If the current word begins with a letter found as a key in `source_map`, we use `random.choice` to pick a random word that begins with that letter and display it. If it doesn't, we display the original word from the source text. Each word is contained in a `span` tag, with a `class` attribute that specifies whether the word is a replacement (`class="replaced"`) or from the original (`class="unchanged"`). (We also put the original word in the `span` tag's `title` attribute, so that the user can mouse over the word to see what word was replaced. You can see this in action in Figure 2-5.)
###### Figure 2-5. Alpha Munger with tooltip showing the replaced word
###### Tip
In these examples, you'll notice the use of `debug=True`. This invokes a handy testing mode, calling the `tornado.autoreload` module, where Tornado will attempt to restart the server each time the main Python file is modified, and refresh templates as they change. It's great for quick changes and live updating, but don't leave it on in production, because it prevents Tornado from caching templates!
## Serving Static Files
When writing web applications, you'll often want to serve "static content" like stylesheets, JavaScript files, and images without writing individual handlers for every file. Tornado provides several helpful shortcuts to make serving static content easy.
### Setting the static_path
You can tell Tornado to serve static files from a particular location on the filesystem by passing a `static_path` parameter to the constructor of the `Application` class. The relevant snippet from the Alpha Munger source code follows:
app = tornado.web.Application(
handlers=[(r'/', IndexHandler), (r'/poem', MungedPageHandler)],
template_path=os.path.join(os.path.dirname(__file__), "templates"),
static_path=os.path.join(os.path.dirname(__file__), "static"),
debug=True
)
Here, we set the `static_path` parameter to a subdirectory named _static_ , found in the directory of the current application. Now the application will respond to requests to a path like `/static/filename.ext` by reading _filename.ext_ from the _static_ directory and returning it in the body of the response.
### Generating static URLs with static_url
The Tornado template module provides a function called `static_url` to generate URLs to files found in the _static_ directory. Let's look at the call to `static_url` from _index.html_ as an example in the following code:
<link rel="stylesheet" href="{{ static_url("style.css") }}">
This call to `static_url` evaluates to a URL, and the rendered output would look something like this:
<link rel="stylesheet" href="/static/style.css?v=ab12">
So why use `static_url` instead of just hardcoding the path in your templates? There are a number of reasons. One is that the `static_url` function creates a hash based on the content of the file and appends it to the end of the URL (the `v` parameter in the query string). The hash ensures that browsers will always load the latest version of a file instead of relying on a previously cached version. This is helpful both during development and when deploying your application for production use, since your users won't have to clear their browser's cache in order to see changes to your static content.
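The version tag is derived from the file's contents, roughly along these lines. This is a simplified, hypothetical sketch; the exact hash algorithm and digest length are internal details of Tornado and may differ between versions:

```python
import hashlib

def versioned_url(path, content):
    # Hash the file's bytes and append a short digest as a cache-buster,
    # so the URL changes whenever the file's contents change
    digest = hashlib.md5(content).hexdigest()[:5]
    return "/static/%s?v=%s" % (path, digest)

url = versioned_url("style.css", b"body { margin: 0 auto; }")
print(url)  # something like /static/style.css?v=<5-hex-digest>
```

Because the digest depends only on the file's bytes, an unchanged file keeps the same URL (and stays cached), while any edit produces a fresh URL that browsers must re-fetch.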
Another benefit is that you could potentially change the structure of your application's URLs without changing the code in your templates. For example, you could configure Tornado to serve static content in response to requests to a path like `/s/filename.ext` instead of the default `/static` path. If you've been using `static_url` instead of hardcoding the paths, your code won't need to change. Let's say you wanted to move your static content from the _static/_ directory we've been using to a new _s/_ directory. You could simply change the static path from `static` to `s` and every reference wrapped in `static_url` will be updated. If you had hardcoded the static portion of the path in each filename you reference in your source, you'd have to manually change every template.
## Next Steps with Templates
By now, you should have a handle on the basic features of Tornado's templating system. For many simple web applications, like the Alpha Munger, the basic features may be all you need. But we're not done with templates yet. Tornado still has a few template tricks up its sleeve in the form of blocks and modules, two features that make it easier to write and maintain sophisticated web applications. We'll look at these features in Chapter 3.
# Chapter 3. Extending Templates
In Chapter 2, we saw how the Tornado template system could be used to easily pass information from handlers to web pages, letting you keep your web markup clean while easily interpolating dynamic data. However, most sites will want to make use of repurposable content like headers, footers, and layout grids. In this chapter, we'll take a look at how you can accomplish this by extending Tornado templates, or using UI modules.
# Blocks and Substitutions
When you've taken the time to set up and lay out templates for your web application, it only seems logical that you'd want to reuse your frontend code as much as your backend Python, right? Fortunately, Tornado lets you do just that. Tornado supports template inheritance through `extends` and `block` statements, which give you the control and flexibility to make fluid templates that can be repurposed as you see fit.
To extend an existing template, you just need to put an `{% extends "filename.html" %}` at the top of the new template file. For example, to extend a parent template ( _main.html_ here) into a new template, you'd just use:
{% extends "main.html" %}
This will let the new file inherit all the markup of _main.html_ , and then overwrite content where desired. With this system, you can create master templates, switch in other subpages for special needs, and have both default and dynamic text and markup ready to go.
## Basics of Blocks
Extending a template makes it easy to repurpose content you've previously written, but that doesn't offer you all that much unless you can then adapt and change those previous templates. This is where `block` statements come in.
A block statement encapsulates some element of a template that you might want to change when you extend it. For example, in order to make use of a dynamic header block that can be overwritten on a page-by-page basis, you could put this into the parent template _main.html_ :
<header>
{% block header %}{% end %}
</header>
Then, to overwrite that `{% block header %}{% end %}` section from the child template _index.html_ , you can just reference the block of that name and put in whatever content you might like:
{% extends "main.html" %}
{% block header %}
<h1>Hello world!</h1>
{% end %}
Any file inheriting the template can include its own `{% block header %}` and `{% end %}` tags to plug in something different as well.
To call this child template from a web application, you'd simply render it from your Python script the way you would any other template we've shown so far, like so:
class MainHandler(tornado.web.RequestHandler):
def get(self):
self.render("index.html")
So here, the `header` block from _main.html_ would be filled out with the message "Hello world!" from _index.html_ on load (see Figure 3-1).
Already, we can see how this would be useful for dealing with overall page structure and would save time for multipage sites. Better yet, you can make use of multiple blocks for each page, so dynamic elements like headers and footers can be included in the same flow.
###### Figure 3-1. Hello World!
As an example, if we add multiple blocks to our parent template, _main.html_ :
<html>
<body>
<header>
{% block header %}{% end %}
</header>
<content>
{% block body %}{% end %}
</content>
<footer>
{% block footer %}{% end %}
</footer>
</body>
</html>
We can reference those blocks from our child template, _index.html_ , when we extend the parent, _main.html_.
{% extends "main.html" %}
{% block header %}
<h1>{{ header_text }}</h1>
{% end %}
{% block body %}
<p>Hello from the child template!</p>
{% end %}
{% block footer %}
<p>{{ footer_text }}</p>
{% end %}
Our Python script to load this looks much the same as before, except in this case we're passing in some string variables for use inside the template (shown in Figure 3-2):
class MainHandler(tornado.web.RequestHandler):
def get(self):
self.render(
"index.html",
header_text = "Header goes here",
footer_text = "Footer goes here"
)
###### Figure 3-2. Block Basics
You can also leave default text and markup inside of block statements in parent templates, which will be rendered as-is if the extending template does not specify its own version of the block. This way, you can replace things only as needed on a page-by-page basis, which is especially useful for including or replacing scripts, CSS files, and markup blocks.
###### Warning
As the template documentation notes, "error-reporting is currently...uh, interesting." A syntax mistake or failure to close `{% block %}` statements can result in `500: Internal Server Error` (or a full Python stack trace, if you are running in `debug` mode) being printed directly out to the browser (see Figure 3-3).
In short, you'll do yourself a favor by making your templates as robust as possible, and catching errors before the templates are rendered.
###### Figure 3-3. Block Error
## Templates in Practice: Burt's Books
So you think this sounds like fun, but you can't picture how one might use it in a standard web application? Well let's take a look at an example here, where our friend Burt runs a bookstore called Burt's Books.
Burt sells a lot of books through his store, and his website needs to show a variety of different content like new arrivals, store information, and more. Burt wants to have a consistent look and feel for the website, but also be able to update pages and sections easily.
To do this, Burt's Books has a Tornado-based website that uses a main template with all the styling, layout, and header/footer details, and then uses lightweight child templates to handle pages. With this system in place, Burt can put together pages for new releases, employee recommendations, upcoming events, and more, all sharing common base attributes.
The Burt's Books website uses one primary base template called _main.html_ that contains the general structure for the site, and looks like this:
<html>
<head>
<title>{{ page_title }}</title>
<link rel="stylesheet" href="{{ static_url("css/style.css") }}" />
</head>
<body>
<div id="container">
<header>
{% block header %}<h1>Burt's Books</h1>{% end %}
</header>
<div id="main">
<div id="content">
{% block body %}{% end %}
</div>
</div>
<footer>
{% block footer %}
<p>
For more information about our selection, hours or events, please email us at
<a href="mailto:contact@burtsbooks.com">contact@burtsbooks.com</a>.
</p>
{% end %}
</footer>
</div>
<script src="{{ static_url("js/script.js") }}"></script>
</body>
</html>
This page defines the structure, applies a CSS stylesheet, and loads the primary JavaScript file. Other templates can extend this, and replace the header, body, and footer blocks as necessary.
The site's index page ( _index.html_ ) greets friendly web visitors and provides information about the store. By extending _main.html_ , this file needs to contain only information that should replace the default text in the header and body blocks:
{% extends "main.html" %}
{% block header %}
<h1>{{ header_text }}</h1>
{% end %}
{% block body %}
<div id="hello">
<p>Welcome to Burt's Books!</p>
<p>...</p>
</div>
{% end %}
This also makes use of the Tornado template default behavior for the footer block, and leaves that contact information inherited from the parent template.
To serve the site and pass information to the index template, this is the Python script ( _main.py_ ) that Burt's Books could run:
import tornado.web
import tornado.httpserver
import tornado.ioloop
import tornado.options
import os.path
from tornado.options import define, options
define("port", default=8000, help="run on the given port", type=int)
class Application(tornado.web.Application):
def __init__(self):
handlers = [
(r"/", MainHandler),
]
settings = dict(
template_path=os.path.join(os.path.dirname(__file__), "templates"),
static_path=os.path.join(os.path.dirname(__file__), "static"),
debug=True,
)
tornado.web.Application.__init__(self, handlers, **settings)
class MainHandler(tornado.web.RequestHandler):
def get(self):
self.render(
"index.html",
page_title = "Burt's Books | Home",
header_text = "Welcome to Burt's Books!",
)
if __name__ == "__main__":
tornado.options.parse_command_line()
http_server = tornado.httpserver.HTTPServer(Application())
http_server.listen(options.port)
tornado.ioloop.IOLoop.instance().start()
###### Note
The structure of this example differs a bit from what we've seen before, but it's nothing to be frightened of. Instead of creating an instance of `tornado.web.Application` by invoking its constructor with a list of handlers and other keyword arguments, we are defining our own Application subclass, which we're calling, simply, `Application`. In the `__init__` method we define, we create the list of handlers and a dictionary of settings and pass those values into the call to initialize the superclass like so:
tornado.web.Application.__init__(self, handlers, **settings)
So with this system in place, Burt's Books can make easy changes to the index page while keeping the base template intact for use with other pages. Additionally, they can make use of Tornado's real power, serving dynamic content from the Python script and/or a database. We'll look more at that in a bit.
## Autoescaping
By default, Tornado will automatically _escape_ content in templates, turning tags into their associated HTML entities. This helps protect against malicious script attacks within database-backed websites. For example, say you have a comment section of your site where users could add any text they liked as part of the discussion. While some HTML tags do not pose significant threats beyond markup and style conflicts (like an unclosed `<h1>` in a comment), unescaped `<script>` tags can allow attackers to load external JavaScript files, opening up the door for cross-site scripting, or XSS, vulnerabilities.
Let's consider a user feedback page on the Burt's Books site. Melvin, who is feeling particularly malicious today, could use the comment form to submit the following text:
Totally hacked your site lulz
<script>alert('RUNNING EVIL H4CKS AND SPL01TS NOW...')</script>
###### Figure 3-4. Web exploit problem
When we construct the page for an unsuspecting user without escaping user content, the script tag is interpreted as an HTML element and executed by the browser, so Alice sees the alert window shown in Figure 3-4. Thankfully, Tornado will automatically escape the expressions rendered between double curly braces. Escaping the text Melvin entered earlier will inactivate HTML tags and render the following string:
Totally hacked your site lulz
&lt;script&gt;alert(&#39;RUNNING EVIL H4CKS AND SPL01TS NOW...&#39;)&lt;/script&gt;
Now when Alice visits the site, no malicious scripts are executed, and she sees the page as shown in Figure 3-5.
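The effect of escaping can be demonstrated without Tornado at all; the standard library's `html.escape` behaves much like Tornado's default escaper (`xhtml_escape`), replacing angle brackets and quotes with HTML entities:

```python
from html import escape

# Melvin's malicious comment, exactly as submitted
comment = "<script>alert('RUNNING EVIL H4CKS AND SPL01TS NOW...')</script>"

# After escaping, the angle brackets are entities, so the browser
# renders the text literally instead of executing it as a script.
safe = escape(comment)
```

The escaped string contains `&lt;script&gt;` rather than `<script>`, which is why the browser displays it as harmless text.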
###### Figure 3-5. Web Exploit Problem—Fixed!
###### Warning
In Tornado 1.x, templates are not automatically escaped, so the protection we've discussed requires explicitly calling `escape()` on unsanitized user input.
So here, we can see how autoescaping can protect your visitors from malicious script attacks. However, it can also catch you off guard when serving HTML dynamically via templates and modules.
For example, if Burt wanted to set the contact email link in his footer using a template variable, he would not get the link HTML he expected. Consider the template excerpt below:
{% set mailLink = "<a href=\"mailto:contact@burtsbooks.com\">Contact Us</a>" %}
{{ mailLink }}
It would be rendered in the page source like this:
&lt;a href=&quot;mailto:contact@burtsbooks.com&quot;&gt;Contact Us&lt;/a&gt;
This is the autoescaping at work, and for obvious reasons, this won't help people get in touch with Burt.
In order to handle this situation, you can disable autoescaping, either by passing `autoescape=None` to the Application constructor, or by changing the autoescape behavior on a page-by-page basis, like so:
{% autoescape None %}
{{ mailLink }}
These `autoescape` blocks do not require end tags, and can either be set to `xhtml_escape` to enable autoescaping (which is the default behavior), or `None` to turn it off.
Ideally, however, you'd want to keep autoescaping active so it will continue to protect you. Therefore, on a tag-by-tag basis you can use the `{% raw %}` directive to output unescaped content instead:
{% raw mailLink %}
This is all especially important to keep in mind when making use of functions like Tornado's `linkify()` and `xsrf_form_html()` functions, which are affected by autoescaping settings. So if you wanted to use `linkify()` to include a link in the previous example's footer (where autoescaping is enabled), you could do so via a `{% raw %}` block:
{% block footer %}
<p>
For more information about our selection, hours or events, please email us
at <a href="mailto:contact@burtsbooks.com">contact@burtsbooks.com</a>.
</p>
<p class="small">
Follow us on Facebook at
{% raw linkify("https://fb.me/burtsbooks", extra_params='ref=website') %}.
</p>
{% end %}
This way, you can make use of the great shorthand of `linkify()`, but still utilize the benefit of autoescaping elsewhere.
# UI Modules
As we've seen, the templating system is lightweight but powerful. In practice, we'd like to follow the software engineering adage, _Don't Repeat Yourself_. In order to eliminate redundant code, we can make sections of our templates modular. For example, pages that display lists of items can define a single module that renders the markup for each item. Alternatively, groups of pages that share a common navigation structure could render content from a shared module. Tornado's UI Modules are especially helpful in these situations.
UI Modules are reusable components that encapsulate markup, style, and behavior for inclusion in a template. The page elements they define are typically reused across many templates or are included repeatedly in the same template. Modules themselves are simply Python classes that inherit from Tornado's `UIModule` class and define a `render` method. When a template references a module with the `{% module Foo(...) %}` tag, Tornado's template engine calls the module's `render` method, which returns a string that replaces the module tag in the template. UI modules may also embed their own JavaScript and CSS in the rendered page, or specify additional JavaScript or CSS files to be included. You may define optional `embedded_javascript`, `embedded_css`, `javascript_files` and `css_files` methods to that end.
## Basic Module Usage
In order to reference a module in your templates, you must declare it in the application's settings. The `ui_modules` parameter expects a dictionary that maps module names to the classes that render them. Consider Example 3-1.
##### Example 3-1. Module basics: hello_module.py
import tornado.web
import tornado.httpserver
import tornado.ioloop
import tornado.options
import os.path
from tornado.options import define, options
define("port", default=8000, help="run on the given port", type=int)
class HelloHandler(tornado.web.RequestHandler):
def get(self):
self.render('hello.html')
class HelloModule(tornado.web.UIModule):
def render(self):
return '<h1>Hello, world!</h1>'
if __name__ == '__main__':
tornado.options.parse_command_line()
app = tornado.web.Application(
handlers=[(r'/', HelloHandler)],
template_path=os.path.join(os.path.dirname(__file__), 'templates'),
ui_modules={'Hello': HelloModule}
)
server = tornado.httpserver.HTTPServer(app)
server.listen(options.port)
tornado.ioloop.IOLoop.instance().start()
This example has only one item in the `ui_modules` dictionary, which associates the reference to the module named `Hello` with the `HelloModule` class we've defined.
Now, when the `HelloHandler` is invoked and _hello.html_ is rendered, we can use the `{% module Hello() %}` template tag to include the string returned by the `render` method in the `HelloModule` class.
<html>
<head><title>UI Module Example</title></head>
<body>
{% module Hello() %}
</body>
</html>
This _hello.html_ template will interpolate the string returned by invoking the `HelloModule` in place of the module tag itself. The example in the next section shows how to extend UI modules to render their own templates and to include scripts and stylesheets.
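The substitution itself is simple to picture. The following toy sketch (emphatically not Tornado's implementation) shows the mechanism: a dictionary maps module names to classes, and each `{% module Name() %}` tag is replaced by whatever that class's `render` method returns:

```python
import re

class HelloModule:
    def render(self):
        return "<h1>Hello, world!</h1>"

# Maps the name used in templates to the class that renders it,
# just like the ui_modules application setting.
ui_modules = {"Hello": HelloModule}

def render_template(source, modules):
    """Replace each {% module Name() %} tag with the module's output."""
    def replace(match):
        return modules[match.group(1)]().render()
    return re.sub(r"{%\s*module\s+(\w+)\(\)\s*%}", replace, source)

page = render_template("<body>{% module Hello() %}</body>", ui_modules)
```

Tornado's real template engine compiles templates and supports module arguments, but the substitution idea is the same.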
## Modules in Depth
Very often, it's helpful for a module to refer to a template file instead of building the rendered string directly in the module class. The markup for these templates looks just like what we've seen for templates as a whole.
One common application of UI modules is to iterate over the results of a database or API query, rendering the same markup with data from each individual item. For example, if Burt wanted to create a Recommended Reading section of the Burt's Books site, he'd create a template called _recommended.html_ , with the template markup shown in the following code. As we've seen before, we will invoke the module with the `{% module Book(book) %}` tag.
{% extends "main.html" %}
{% block body %}
<h2>Recommended Reading</h2>
{% for book in books %}
{% module Book(book) %}
{% end %}
{% end %}
Burt would also create a template for the Book module itself, called _book.html_ and place it in the _templates/modules_ directory. A simple book template might look like this:
<div class="book">
<h3 class="book_title">{{ book["title"] }}</h3>
<img src="{{ book["image"] }}" class="book_image"/>
</div>
Now, when we define the `BookModule` class, we will call the `render_string` method it inherits from `UIModule`. This method explicitly renders the template and its keyword arguments as a string, which we return to the caller.
class BookModule(tornado.web.UIModule):
def render(self, book):
return self.render_string('modules/book.html', book=book)
In the full example, we will use the following template to format all the attributes of each recommended book, in place of the preceding _book.html_ template.
<div class="book">
<h3 class="book_title">{{ book["title"] }}</h3>
{% if book["subtitle"] != "" %}
<h4 class="book_subtitle">{{ book["subtitle"] }}</h4>
{% end %}
<img src="{{ book["image"] }}" class="book_image"/>
<div class="book_details">
<div class="book_date_released">Released: {{ book["date_released"]}}</div>
<div class="book_date_added">Added: {{ »
locale.format_date(book["date_added"], relative=False) }}</div>
<h5>Description:</h5>
<div class="book_body">{% raw book["description"] %}</div>
</div>
</div>
With this arrangement, the module will be called for each item of the `books` parameter passed to the _recommended.html_ template. Each time the `Book` module is invoked with a new `book` parameter, the module (and its _book.html_ template) can reference the `book` parameter's dictionary items and format the data appropriately (as shown in Figure 3-6).
###### Figure 3-6. The Book Module with style data
Now we can define a `RecommendedHandler` that renders a template just as you would normally. That template can reference the `Book` module when it renders the list of recommended books.
class RecommendedHandler(tornado.web.RequestHandler):
def get(self):
self.render(
"recommended.html",
page_title="Burt's Books | Recommended Reading",
header_text="Recommended Reading",
books=[
{
"title":"Programming Collective Intelligence",
"subtitle": "Building Smart Web 2.0 Applications",
"image":"/static/images/collective_intelligence.gif",
"author": "Toby Segaran",
"date_added":1310248056,
"date_released": "August 2007",
"isbn":"978-0-596-52932-1",
"description":"<p>This fascinating book demonstrates how you »
can build web applications to mine the enormous amount of data created by people »
on the Internet. With the sophisticated algorithms in this book, you can write »
smart programs to access interesting datasets from other web sites, collect data »
from users of your own applications, and analyze and understand the data once »
you've found it.</p>"
},
...
]
)
To use additional modules, simply add mappings to the `ui_modules` parameter. Because templates can refer to any module defined in the `ui_modules` mapping, it's easy to break out specific functionality into its own module.
###### Note
In this example, you may have noticed the use of `locale.format_date()`. This invokes the date-handling methods provided by the `tornado.locale` module, which in itself has a collection of internationalization methods. The `format_date()` option, by default, formats GMT Unix timestamps as _`XX time ago`_ , and can be used like this:
{{ locale.format_date(book["date"]) }}
`relative=False` will cause it to return an absolute time instead (in hours and minutes), whereas `full_format=True` will make it display a full date with month, day, year, and time (for instance, `July 9, 2011 at 9:47 pm`), which can be paired with `shorter=True` to hide the time, and display only month, day, and year.
This module can be a huge help when dealing with times and dates, and additionally offers support for handling localization of strings.
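A rough, English-only sketch of this kind of relative-date formatting looks like the following. It is a simplification in the spirit of `tornado.locale.format_date`, not the real implementation (which also handles localization, "yesterday", and finer-grained phrasing):

```python
import datetime

def format_date(timestamp, relative=True, now=None):
    """Format a Unix timestamp as 'X time ago', or absolutely
    when relative=False. Simplified illustration only."""
    now = now or datetime.datetime.utcnow()
    then = datetime.datetime.utcfromtimestamp(timestamp)
    if not relative:
        return then.strftime("%B %d, %Y at %I:%M %p")
    delta = now - then
    if delta.days >= 1:
        return "%d days ago" % delta.days
    hours = delta.seconds // 3600
    if hours:
        return "%d hours ago" % hours
    return "%d minutes ago" % (delta.seconds // 60)
```

A timestamp two hours old would render as `2 hours ago`, while `relative=False` yields an absolute date and time.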
## Embedding JavaScript and CSS
To provide even more flexibility with these modules, Tornado allows you to embed separate CSS and JavaScript via the `embedded_css()` and `embedded_javascript()` methods. For example, if you wanted to add a line of text to the DOM when this module was called, you could embed JavaScript from the module to do this for you:
class BookModule(tornado.web.UIModule):
def render(self, book):
return self.render_string(
"modules/book.html",
book=book,
)
def embedded_javascript(self):
return "document.write(\"hi!\")"
When that module is called, it will wrap that `document.write("hi!")` in a `<script>` tag and insert it right before the closing `<body>` tag:
<script type="text/javascript">
//<![CDATA[
document.write("hi!")
//]]>
</script>
Clearly, just writing to the document body isn't the most helpful thing in the world, but being able to include JavaScript with each module gives you enormous flexibility when creating these modules.
Similarly, you can put in additional CSS rules that are loaded only when these modules are called:
def embedded_css(self):
return ".book {background-color:#F5F5F5}"
In this case, the `.book {background-color:#F5F5F5}` CSS rule would be wrapped in a `<style>` tag and inserted into the page directly before the closing `<head>` tag:
<style type="text/css">
.book {background-color:#F5F5F5}
</style>
For even more flexibility, you can simply use `html_body()` to insert full HTML markup right before the closing `</body>` tag as well:
def html_body(self):
return "<script>document.write(\"Hello!\")</script>"
Clearly, while it's helpful to be able to add in-line scripts and style, it would be better for more serious inclusions (and cleaner code!) to include stylesheet and script files. This works in much the same way, so you can use `javascript_files()` and `css_files()` to include full files, both hosted locally and externally.
For example, you could include a separate local CSS file this way:
def css_files(self):
return "/static/css/newreleases.css"
Or you could fetch an external JavaScript file:
def javascript_files(self):
return "https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.14/jquery-ui.min.js"
This is particularly useful when a module requires additional libraries that aren't necessary elsewhere in the application. For example, if you have a module that makes use of the jQuery UI library (which is not used elsewhere in the application), you can load the _jquery-ui.min.js_ file just for this sample module, and spare the load time for other pages where it's not needed.
###### Note
Because the module's JavaScript-embedding and HTML-embedding methods target the end of the `<body>` tag, the content rendered by `html_body()`, `javascript_files()`, and `embedded_javascript()` will be inserted at the bottom of the page, and as such will appear in reverse order from the order in which you specify them.
If you have a module that looks like this, then:
class SampleModule(tornado.web.UIModule):
def render(self, sample):
return self.render_string(
"modules/sample.html",
sample=sample
)
def html_body(self):
return "<div class=\"addition\"><p>html_body()</p></div>"
def embedded_javascript(self):
return "document.write(\"<p>embedded_javascript()</p>\")"
def embedded_css(self):
return ".addition {color: #A1CAF1}"
def css_files(self):
return "/static/css/sample.css"
def javascript_files(self):
return "/static/js/sample.js"
The `html_body()` is written out first, appearing as the last element before the `</body>` tag. The `embedded_javascript()` is rendered next, and `javascript_files()` last. You can see how this works in Figure 3-7.
Be careful that nothing you're including here from one method requires anything inserted by another (such as JavaScript functions relying on other files), as they might be rendered in an order different from what you'd expected.
###### Figure 3-7. Module styles and scripts loaded
In short, modules allow you to be very flexible about the way your templates render formulaic data, and also let you specify a variety of additional style and function rules that are included only when the modules are called. By using modules for specific functions, you can break out your code into reusable chunks, and keep your site fast and free of unnecessary cruft.
# Summing Up
As we've seen, Tornado makes it easy to extend templates so that your web code can be easily reused throughout your application. With the addition of modules, you can make more fine-grained decisions on what files, styles, and script actions to include. However, while our examples have relied on how easy it is to work with Python's native data types, it wouldn't make much sense to hardcode big data structures into your applications in practice. Next, we'll see how we can tie in persistent storage to deal with storing, serving, and editing dynamic content.
# Chapter 4. Databases
In this chapter, we present a few examples of Tornado web applications that make use of a database. We'll begin with a simple RESTful API example, then move on to creating a fully functional version of the Burt's Books website introduced in "Templates in Practice: Burt's Books".
The examples in this chapter use MongoDB as the database, and pymongo as the Python driver to connect to MongoDB. There are, of course, many database systems that make sense for use in a web application: Redis, CouchDB, and MySQL are a few well-known options, and Tornado itself ships with a library for wrapping MySQL requests. We choose to use MongoDB due to its simplicity and convenience: it's easy to install and integrates well with Python code. Its schemaless nature makes it unnecessary to predefine your data structures, which is great for prototyping.
We're assuming in this chapter that you have a MongoDB installation running on the machine where you're running the example code. If you don't want to install MongoDB on your machine, or if there isn't a MongoDB binary for your operating system, there are a number of hosted MongoDB services you can use instead; we recommend MongoHQ. In either case, it's easy to adapt the code to use with MongoDB running on a remote server (including MongoHQ).
We're also assuming you have some experience with databases, though not necessarily any experience with MongoDB in particular. Of course, we're only able to scratch the surface of what's possible with MongoDB here; be sure to consult the MongoDB documentation ( _http://www.mongodb.org/display/DOCS/Home_) for more information. Let's begin!
# Basic MongoDB Operations with PyMongo
Before we can write a web application that uses MongoDB, we need to learn how to use MongoDB from Python. In this section, you'll learn how to connect to MongoDB with PyMongo, then how to use PyMongo to create, retrieve, and update documents in a MongoDB collection.
PyMongo is a simple Python library that wraps the MongoDB client API. You can download it here: _http://api.mongodb.org/python/current/_. Once you have it installed, open an interactive Python session and follow along.
## Establishing a Connection
First of all, you need to import the PyMongo library and create a connection to a MongoDB database.
>>> **import pymongo**
>>> **conn = pymongo.Connection("localhost", 27017)**
The preceding example shows how to connect to a MongoDB server running on your local machine, on the default MongoDB port (27017). If you're using a remote MongoDB server, replace **`localhost`** and **`27017`** as appropriate. You can also connect to MongoDB using a MongoDB URI, like so:
>>> **conn = pymongo.Connection(**
... **"mongodb://user:password@staff.mongohq.com:10066/your_mongohq_db")**
The preceding code would connect to a database called `your_mongohq_db` hosted on MongoHQ, using `user` as the username and `password` as the password. Read more about MongoDB URIs here: _http://www.mongodb.org/display/DOCS/Connections_.
A MongoDB server can have any number of databases, and the `Connection` object lets you access any of the databases on the server you've connected to. You can get an object representing a particular database either with an attribute of the object, or by using the object like a dictionary. If the database doesn't already exist, it will be automatically created.
>>> **db = conn.example** # or: db = conn['example']
A database can have any number of _collections_. A collection is just a place to put related documents. Most of the operations that we perform with MongoDB (finding documents, saving documents, deleting documents) will be performed on a collection object. You can get a list of collections in a database by calling the `collection_names` method on the database object:
>>> **db.collection_names()**
[]
Of course, we haven't added any collections to our database yet, so this list is empty. MongoDB will automatically create a collection when we insert our first document. You can get an object representing a collection by accessing an attribute with the name of the collection on the database object, then insert a document by calling the object's `insert` method, specifying a Python dictionary. For example, in the following code, we insert a document into a collection called `widgets`. Because it didn't already exist, it is created automatically when the document is added:
>>> **widgets = db.widgets** # or: widgets = db['widgets'] (see below)
>>> **widgets.insert({"foo": "bar"})**
ObjectId('4eada0b5136fc4aa41000000')
>>> **db.collection_names()**
[u'widgets', u'system.indexes']
(The `system.indexes` collection is for MongoDB's internal use. For the purposes of this chapter, you can ignore it.)
As an earlier example showed, you can access a collection both as an attribute of a database object, and by accessing the database object as though it was a dictionary and using the collection name as a key. For example, if `db` is a pymongo database object, both `db.widgets` and `db['widgets']` evaluate to the same collection.
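The pattern behind this dual access style is easy to replicate in plain Python, which may make it less mysterious. Here is a toy class (ours, not pymongo's) that supports both attribute and dictionary access, creating collections lazily on first use the way MongoDB does:

```python
class Database:
    """Toy illustration of pymongo's db.widgets / db['widgets'] pattern."""

    def __init__(self):
        self._collections = {}

    def __getitem__(self, name):
        # Create the collection lazily on first access, as MongoDB does.
        return self._collections.setdefault(name, [])

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails, so attribute
        # access simply falls through to item access.
        return self[name]

db = Database()
```

With this in place, `db.widgets` and `db["widgets"]` return the very same object, which is exactly the equivalence described above.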
## Dealing with Documents
MongoDB collections store data as _documents_ , a term that indicates the relatively free structure of data. MongoDB is a "schemaless" database: documents in the same collection usually have the same structure, but no structure is enforced by MongoDB. Internally, MongoDB stores documents in a binary JSON-like format called _BSON_. Pymongo allows us to write and retrieve our documents as Python dictionaries.
To create a new document in a collection, call the collection object's `insert` method, with a dictionary as a parameter:
>>> **widgets.insert({"name": "flibnip", "description": "grade-A industrial flibnip",**
» **"quantity": 3})**
ObjectId('4eada3a4136fc4aa41000001')
Now that the document is in the database, we can retrieve it using the collection object's `find_one` method. You can tell `find_one` to find a particular document by passing it a dictionary that has a document field name as a key, and the expression you want to match in that field as the value. For example, to return the document whose `name` field is equal to `flibnip` (i.e., the document just created), call the `find_one` method like so:
>>> **widgets.find_one({"name": "flibnip"})**
{u'description': u'grade-A industrial flibnip',
u'_id': ObjectId('4eada3a4136fc4aa41000001'),
u'name': u'flibnip', u'quantity': 3}
Note the `_id` field. MongoDB automatically adds this field to any document you create. Its value is an `ObjectID`, a special kind of BSON object that is guaranteed to be unique to the document in question. This `ObjectID` value, you might have noticed, is also what the `insert` method returns when successfully creating a new document. (You can override the automatic creation of an `ObjectID` by putting an `_id` key in the document when you create it.)
The value returned from `find_one` is a simple Python dictionary. You can access individual items from it, iterate over its key/value pairs, and modify values in it just as you would any other Python dictionary:
>>> **doc = db.widgets.find_one({"name": "flibnip"})**
>>> **type(doc)**
<type 'dict'>
>>> **print doc['name']**
flibnip
>>> **doc['quantity'] = 4**
However, changes to the dictionary aren't automatically saved back to the database. If you want to save changes to the dictionary, call the collection's `save` method, passing in the modified dictionary as a parameter:
>>> **doc['quantity'] = 4**
>>> **db.widgets.save(doc)**
>>> **db.widgets.find_one({"name": "flibnip"})**
{u'_id': ObjectId('4eb12f37136fc4b59d000000'),
u'description': u'grade-A industrial flibnip',
u'quantity': 4, u'name': u'flibnip'}
Let's add a few more documents to our collection:
>>> **widgets.insert({"name": "smorkeg", "description": "for external use only",**
» **"quantity": 4})**
ObjectId('4eadaa5c136fc4aa41000002')
>>> **widgets.insert({"name": "clobbasker", "description":**
» **"properties available on request", "quantity": 2})**
ObjectId('4eadad79136fc4aa41000003')
We can get a list of all documents in a collection by calling the collection's `find` method, then iterating over the results:
>>> **for doc in widgets.find():**
... **print doc**
...
{u'_id': ObjectId('4eada0b5136fc4aa41000000'), u'foo': u'bar'}
{u'description': u'grade-A industrial flibnip',
u'_id': ObjectId('4eada3a4136fc4aa41000001'),
u'name': u'flibnip', u'quantity': 4}
{u'description': u'for external use only',
u'_id': ObjectId('4eadaa5c136fc4aa41000002'),
u'name': u'smorkeg', u'quantity': 4}
{u'description': u'properties available on request',
u'_id': ObjectId('4eadad79136fc4aa41000003'),
u'name': u'clobbasker',
u'quantity': 2}
If we want only a subset of documents, we can pass a dictionary parameter to the `find` method, just as we did with the `find_one` method. For example, to find only those documents whose `quantity` key is equal to 4:
>>> **for doc in widgets.find({"quantity": 4}):**
... **print doc**
...
{u'description': u'grade-A industrial flibnip',
u'_id': ObjectId('4eada3a4136fc4aa41000001'),
u'name': u'flibnip', u'quantity': 4}
{u'description': u'for external use only',
u'_id': ObjectId('4eadaa5c136fc4aa41000002'),
u'name': u'smorkeg',
u'quantity': 4}
Finally, we can delete a document from a collection using the collection's `remove` method. The `remove` method takes a dictionary argument just like `find` and `find_one`, specifying which documents to delete. For example, to remove all documents whose `name` key is equal to `flibnip`, enter:
>>> **widgets.remove({"name": "flibnip"})**
Listing all documents in the collection confirms that the document in question has been removed:
>>> **for doc in widgets.find():**
... **print doc**
...
{u'_id': ObjectId('4eada0b5136fc4aa41000000'),
u'foo': u'bar'}
{u'description': u'for external use only',
u'_id': ObjectId('4eadaa5c136fc4aa41000002'),
u'name': u'smorkeg', u'quantity': 4}
{u'description': u'properties available on request',
u'_id': ObjectId('4eadad79136fc4aa41000003'),
u'name': u'clobbasker',
u'quantity': 2}
## MongoDB Documents and JSON
When working with web applications, you'll often want to take a Python dictionary and serialize it as a JSON object (as, for example, a response to an AJAX request). Since a document retrieved from MongoDB with PyMongo is simply a dictionary, you might assume that you could convert it to JSON simply by passing it to the `json` module's `dumps` function. There's a snag, though:
>>> **doc = db.widgets.find_one({"name": "flibnip"})**
>>> **import json**
>>> **json.dumps(doc)**
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
[stack trace omitted]
TypeError: ObjectId('4eb12f37136fc4b59d000000') is not JSON serializable
The problem here is that Python's `json` module doesn't know how to convert MongoDB's special `ObjectID` type to JSON. There are several methods of dealing with this. The simplest method (and the method we'll be adopting in this chapter) is to simply delete the `_id` key from the dictionary before we serialize it:
>>> **del doc["_id"]**
>>> **json.dumps(doc)**
'{"description": "grade-A industrial flibnip", "quantity": 4, "name": "flibnip"}'
A more sophisticated solution would be to use the `json_util` library included with PyMongo, which will also help you serialize other MongoDB-specific data types to JSON. Read more about the library here: _http://api.mongodb.org/python/current/api/bson/json_util.html_.
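A third option is to give `json.dumps` a fallback encoder that stringifies anything the `json` module doesn't recognize. The sketch below runs without a MongoDB server: `FakeObjectId` is a hypothetical stand-in for BSON's `ObjectId` (whose `str()` form is its 24-character hex value), not part of PyMongo.

```python
import json

class MongoDocEncoder(json.JSONEncoder):
    # Fallback for types json doesn't natively handle:
    # convert them to their string representation.
    def default(self, obj):
        return str(obj)

class FakeObjectId(object):
    """Stand-in for bson.ObjectId, so this sketch runs without PyMongo."""
    def __str__(self):
        return "4eb12f37136fc4b59d000000"

doc = {"_id": FakeObjectId(), "name": "flibnip", "quantity": 4}
print(json.dumps(doc, cls=MongoDocEncoder))
```

Unlike deleting the `_id` key, this keeps the document's identifier in the JSON output, at the cost of turning it into a plain string.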
# A Simple Persistent Web Service
Now we know enough to write a web service that can access data in a MongoDB database. First, we're going to write a web service that just reads data from MongoDB. Then, we'll write one that reads and writes data.
## A Read-Only Dictionary
The application we're going to build is a simple web-based dictionary. You should be able to make requests for a particular word, and get back the definition for that word. Here's what a typical interaction might look like:
$ **curl http://localhost:8000/oarlock**
{definition: "A device attached to a rowboat to hold the oars in place",
"word": "oarlock"}
This web service will be drawing its data from a MongoDB database. Specifically, we'll be looking up documents by their `word` attributes. Before we actually look at the source code for the web application itself, let's add some words to the database in the interactive interpreter.
>>> **import pymongo**
>>> **conn = pymongo.Connection("localhost", 27017)**
>>> **db = conn.example**
>>> **db.words.insert({"word": "oarlock", "definition":**
» **"A device attached to a rowboat to hold the oars in place"})**
ObjectId('4eb1d1f8136fc4be90000000')
>>> **db.words.insert({"word": "seminomadic", "definition": "Only partially nomadic"})**
ObjectId('4eb1d356136fc4be90000001')
>>> **db.words.insert({"word": "perturb", "definition": "Bother, unsettle, modify"})**
ObjectId('4eb1d39d136fc4be90000002')
See Example 4-1 for the source code for our dictionary web service, which will look up the words we just added and then respond with the definition.
##### Example 4-1. A dictionary web service: definitions_readonly.py
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
import pymongo
from tornado.options import define, options
define("port", default=8000, help="run on the given port", type=int)
class Application(tornado.web.Application):
def __init__(self):
handlers = [(r"/(\w+)", WordHandler)]
conn = pymongo.Connection("localhost", 27017)
self.db = conn["example"]
tornado.web.Application.__init__(self, handlers, debug=True)
class WordHandler(tornado.web.RequestHandler):
def get(self, word):
coll = self.application.db.words
word_doc = coll.find_one({"word": word})
if word_doc:
del word_doc["_id"]
self.write(word_doc)
else:
self.set_status(404)
self.write({"error": "word not found"})
if __name__ == "__main__":
tornado.options.parse_command_line()
http_server = tornado.httpserver.HTTPServer(Application())
http_server.listen(options.port)
tornado.ioloop.IOLoop.instance().start()
Run this program on the command line like so:
$ **python definitions_readonly.py**
Now use curl or your web browser to make a request to the application.
$ **curl http://localhost:8000/perturb**
{"definition": "Bother, unsettle, modify", "word": "perturb"}
If we request a word that we haven't added to the database, we get a 404 response, along with an error message:
$ **curl http://localhost:8000/snorkle**
{"error": "word not found"}
So how does this program work? Let's discuss a few key lines from the code. To begin, we include `import pymongo` at the top of our program. We then instantiate a pymongo `Connection` object in the `__init__` method of our Tornado `Application` object. We create a `db` attribute on our `Application` object, which refers to the `example` database in MongoDB. Here's the relevant code:
conn = pymongo.Connection("localhost", 27017)
self.db = conn["example"]
Once we've added the `db` attribute to our `Application` object, we can access it as `self.application.db` in any `RequestHandler` object. This is, in fact, exactly what we do in the `get` method of `WordHandler` in order to retrieve a pymongo collection object for the `words` collection. The following is the code for the `get` method:
def get(self, word):
coll = self.application.db.words
word_doc = coll.find_one({"word": word})
if word_doc:
del word_doc["_id"]
self.write(word_doc)
else:
self.set_status(404)
self.write({"error": "word not found"})
After we've assigned the collection object to the variable `coll`, we call the `find_one` method with the word that the user specified in the path of the HTTP request. If we found a word, we delete the `_id` key from the dictionary (so that Python's `json` library can serialize it), then pass it to the RequestHandler's `write` method. The `write` method will automatically serialize the dictionary as JSON.
If the `find_one` method doesn't find a matching object, it returns `None`. In this case, we set the response's status to 404 and write a small bit of JSON to inform the user that the word they specified wasn't found in the database.
## Writing the Dictionary
Looking words up in the dictionary is lots of fun, but it's a hassle to have to add words beforehand in the interactive interpreter. The next step in our example is to make it possible to create and modify words by making HTTP requests to the web service.
Here's how it will work: issuing a `POST` request for a particular word will modify the existing definition with the definition given in the body of the request. If the word doesn't already exist, it will be created. For example, to create a new word:
$ **curl -d definition=a+leg+shirt http://localhost:8000/pants**
{"definition": "a leg shirt", "word": "pants"}
Having created the word, we can request it with a `GET` request:
$ **curl http://localhost:8000/pants**
{"definition": "a leg shirt", "word": "pants"}
We can modify an existing word by issuing a `POST` request with a definition field to a word (the same arguments we use when creating a new word):
$ **curl -d definition=a+boat+wizard http://localhost:8000/oarlock**
{"definition": "a boat wizard", "word": "oarlock"}
See Example 4-2 for the source code for the read/write version of our dictionary web service.
##### Example 4-2. A read/write dictionary service: definitions_readwrite.py
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
import pymongo
from tornado.options import define, options
define("port", default=8000, help="run on the given port", type=int)
class Application(tornado.web.Application):
def __init__(self):
handlers = [(r"/(\w+)", WordHandler)]
conn = pymongo.Connection("localhost", 27017)
self.db = conn["definitions"]
tornado.web.Application.__init__(self, handlers, debug=True)
class WordHandler(tornado.web.RequestHandler):
def get(self, word):
coll = self.application.db.words
word_doc = coll.find_one({"word": word})
if word_doc:
del word_doc["_id"]
self.write(word_doc)
else:
self.set_status(404)
def post(self, word):
definition = self.get_argument("definition")
coll = self.application.db.words
word_doc = coll.find_one({"word": word})
if word_doc:
word_doc['definition'] = definition
coll.save(word_doc)
else:
word_doc = {'word': word, 'definition': definition}
coll.insert(word_doc)
del word_doc["_id"]
self.write(word_doc)
if __name__ == "__main__":
tornado.options.parse_command_line()
http_server = tornado.httpserver.HTTPServer(Application())
http_server.listen(options.port)
tornado.ioloop.IOLoop.instance().start()
The source code is exactly the same as the read-only service, except for the addition of the `post` method in `WordHandler`. Let's look at that method in more detail:
def post(self, word):
definition = self.get_argument("definition")
coll = self.application.db.words
word_doc = coll.find_one({"word": word})
if word_doc:
word_doc['definition'] = definition
coll.save(word_doc)
else:
word_doc = {'word': word, 'definition': definition}
coll.insert(word_doc)
del word_doc["_id"]
self.write(word_doc)
The first thing we do is use the `get_argument` method to fetch the `definition` passed in to our request from the `POST`. Then, just as in the `get` method, we attempt to load the document with the given word from the database using the `find_one` method. If such a document was found, we set its `definition` entry to the value we got from the `POST` arguments, then call the collection object's `save` method to write the changes to the database. If no document was found, we create a new one and use the `insert` method to save it to the database. In either case, after the database operation has taken place, we write the document out in the response (taking care to delete the `_id` attribute first).
# Burt's Books
In Chapter 3, we presented Burt's Books as an example of how to build a sophisticated web application with Tornado's template tools. In this section, we'll show you a version of the Burt's Books example that uses MongoDB as a data store. (You'll want to review the Burt's Books example from Chapter 3 before you continue.)
## Reading Books (From the Database)
Let's start with something simple: a version of Burt's Books that reads its list of books from the database. The first thing we'll need to do is create a database and a collection on our MongoDB server and populate it with book documents, like so:
>>> **import pymongo**
>>> **conn = pymongo.Connection()**
>>> **db = conn["bookstore"]**
>>> **db.books.insert({**
... **"title":"Programming Collective Intelligence",**
... **"subtitle": "Building Smart Web 2.0 Applications",**
... **"image":"/static/images/collective_intelligence.gif",**
... **"author": "Toby Segaran",**
... **"date_added":1310248056,**
... **"date_released": "August 2007",**
... **"isbn":"978-0-596-52932-1",**
... **"description":"<p>[...]</p>"**
... **})**
ObjectId('4eb6f1a6136fc42171000000')
>>> **db.books.insert({**
... **"title":"RESTful Web Services",**
... **"subtitle": "Web services for the real world",**
... **"image":"/static/images/restful_web_services.gif",**
... **"author": "Leonard Richardson, Sam Ruby",**
... **"date_added":1311148056,**
... **"date_released": "May 2007",**
... **"isbn":"978-0-596-52926-0",**
... **"description":"<p>[...]</p>"**
... **})**
ObjectId('4eb6f1cb136fc42171000001')
(We've omitted the descriptions of these books to save space.) Once we have these documents in the database, we're ready to roll. Example 4-3 shows the source code for the modified version of the Burt's Books web application, called _burts_books_db.py_.
##### Example 4-3. Reading from the database: burts_books_db.py
import os.path
import tornado.auth
import tornado.escape
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
from tornado.options import define, options
import pymongo
define("port", default=8000, help="run on the given port", type=int)
class Application(tornado.web.Application):
def __init__(self):
handlers = [
(r"/", MainHandler),
(r"/recommended/", RecommendedHandler),
]
settings = dict(
template_path=os.path.join(os.path.dirname(__file__), "templates"),
static_path=os.path.join(os.path.dirname(__file__), "static"),
ui_modules={"Book": BookModule},
debug=True,
)
conn = pymongo.Connection("localhost", 27017)
self.db = conn["bookstore"]
tornado.web.Application.__init__(self, handlers, **settings)
class MainHandler(tornado.web.RequestHandler):
def get(self):
self.render(
"index.html",
page_title = "Burt's Books | Home",
header_text = "Welcome to Burt's Books!",
)
class RecommendedHandler(tornado.web.RequestHandler):
def get(self):
coll = self.application.db.books
books = coll.find()
self.render(
"recommended.html",
page_title = "Burt's Books | Recommended Reading",
header_text = "Recommended Reading",
books = books
)
class BookModule(tornado.web.UIModule):
def render(self, book):
return self.render_string(
"modules/book.html",
book=book,
)
def css_files(self):
return "/static/css/recommended.css"
def javascript_files(self):
return "/static/js/recommended.js"
if __name__ == "__main__":
tornado.options.parse_command_line()
http_server = tornado.httpserver.HTTPServer(Application())
http_server.listen(options.port)
tornado.ioloop.IOLoop.instance().start()
As you can see, this program is almost exactly identical to the original Burt's Books web application presented in Chapter 3. There are two differences. First, we've added a `db` attribute to our `Application` object, connected to a MongoDB server:
conn = pymongo.Connection("localhost", 27017)
self.db = conn["bookstore"]
Second, we use the collection's `find` method to get a list of book documents from the database, and pass that list in when rendering _recommended.html_ in the `get` method of `RecommendedHandler`. Here's the relevant code:
def get(self):
coll = self.application.db.books
books = coll.find()
self.render(
"recommended.html",
page_title = "Burt's Books | Recommended Reading",
header_text = "Recommended Reading",
books = books
)
Previously, the list of books had been hardcoded into the `get` method. However, because the documents we added to MongoDB have the same fields as the original hardcoded dictionaries, the template code we wrote works without any modification.
Run the application like so:
$ **python burts_books_db.py**
And then point your web browser to `http://localhost:8000/recommended/`. At this point, it should look almost exactly like the hardcoded version of Burt's Books (see Figure 3-6).
## Editing and Adding Books
The next step is to make an interface for editing books that are already in the database, and to add new books to the database. In order to do this, we need to make a form for the user to fill out with book information, a handler to serve that form, and a handler to process the results of that form and put them in the database.
The source code for this version of Burt's Books is nearly identical to the code previously presented, with a few additions that we'll discuss below. You can follow along with the full source code that came with the book; the relevant program is _burts_books_rwdb.py_.
### Rendering the edit form
Here's the source code for `BookEditHandler`, which performs two jobs:
1. A `GET` request to the handler renders an HTML form (in the template _book_edit.html_ ), potentially with data for an existing book.
2. A `POST` request to the handler takes data from the form and either updates an existing book record in the database, or adds a new one, depending on the data supplied.
Here's the source code for the handler:
class BookEditHandler(tornado.web.RequestHandler):
def get(self, isbn=None):
book = dict()
if isbn:
coll = self.application.db.books
book = coll.find_one({"isbn": isbn})
self.render("book_edit.html",
page_title="Burt's Books",
header_text="Edit book",
book=book)
def post(self, isbn=None):
import time
book_fields = ['isbn', 'title', 'subtitle', 'image', 'author',
'date_released', 'description']
coll = self.application.db.books
book = dict()
if isbn:
book = coll.find_one({"isbn": isbn})
for key in book_fields:
book[key] = self.get_argument(key, None)
if isbn:
coll.save(book)
else:
book['date_added'] = int(time.time())
coll.insert(book)
self.redirect("/recommended/")
We'll talk about the details in a second, but first let's discuss how we've set up our `Application` class to route requests to this handler. Here's the relevant section from the `Application`'s `__init__` method:
handlers = [
(r"/", MainHandler),
(r"/recommended/", RecommendedHandler),
(r"/edit/([0-9Xx\-]+)", BookEditHandler),
(r"/add", BookEditHandler)
]
As you can see, `BookEditHandler` handles requests for _two different_ path patterns. One of these, `/add`, serves up the edit form with no existing information, so you can add a new book to the database; the other, `/edit/([0-9Xx\-]+)`, renders the form with information for a pre-existing book, according to the book's ISBN.
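The capture-group behavior these route patterns rely on can be checked directly with Python's `re` module — a standalone sketch, independent of Tornado:

```python
import re

edit_pattern = r"/edit/([0-9Xx\-]+)"

# A matching path yields one captured group, which Tornado
# passes to the handler's get() method as the `isbn` argument.
match = re.match(edit_pattern, "/edit/0-123-456")
print(match.group(1))  # 0-123-456

# The /add pattern has no groups, so get() is
# called with no extra argument (isbn stays None).
print(re.match(r"/add", "/add").groups())  # ()
```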
### Retrieving book information from the database
Let's look at the `get` method in `BookEditHandler` to see how it works:
def get(self, isbn=None):
book = dict()
if isbn:
coll = self.application.db.books
book = coll.find_one({"isbn": isbn})
self.render("book_edit.html",
page_title="Burt's Books",
header_text="Edit book",
book=book)
If the method is invoked as a result of a request to `/add`, Tornado will call the `get` method without a second argument (as there's no corresponding group in the regular expression for the path). In this case, `isbn` takes its default value of `None`, and an empty `book` dictionary is passed to the _book_edit.html_ template.
If the method was called as a result of a request to, for example, `/edit/0-123-456`, the `isbn` parameter is set to the value `0-123-456`. In this case, we get the `books` collection from our `Application` instance and use it to look up the book with the corresponding ISBN. Then we pass the resulting `book` dictionary into the template.
Here's the template ( _book_edit.html_ ):
{% extends "main.html" %}
{% autoescape None %}
{% block body %}
<form method="POST">
ISBN <input type="text" name="isbn"
value="{{ book.get('isbn', '') }}"><br>
Title <input type="text" name="title"
value="{{ book.get('title', '') }}"><br>
Subtitle <input type="text" name="subtitle"
value="{{ book.get('subtitle', '') }}"><br>
Image <input type="text" name="image"
value="{{ book.get('image', '') }}"><br>
Author <input type="text" name="author"
value="{{ book.get('author', '') }}"><br>
Date released <input type="text" name="date_released"
value="{{ book.get('date_released', '') }}"><br>
Description<br>
<textarea name="description" rows="5"
cols="40">{% raw book.get('description', '')%}</textarea><br>
<input type="submit" value="Save">
</form>
{% end %}
This is a fairly conventional HTML form. We're using the `book` dictionary passed in from the request handler to prepopulate the form with data from the existing book, if any; we use the Python dictionary object's `get` method to supply a default value for a key if the key isn't present in the dictionary. Note that the `name` attributes of the `input` tags are set to the corresponding key of the `book` dictionary; this will make it easy to associate the data from the form with the data we want to put into the database.
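The `get`-with-default behavior is easy to verify on its own:

```python
# dict.get returns the stored value when the key exists, and the
# supplied default when it doesn't -- so for a brand-new (empty)
# book dictionary, every form field is prepopulated with ''.
book = {"title": "RESTful Web Services"}
print(book.get("title", ""))     # RESTful Web Services
print(book.get("subtitle", ""))  # missing key -> the default empty string
```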
Also note that, because the `form` tag lacks an `action` attribute, the form's `POST` will be directed to the current URL, which is precisely what we want (e.g., if the page was loaded as `/edit/0-123-456`, the `POST` request will go to `/edit/0-123-456`; if the page was loaded as `/add`, the `POST` will go to `/add`). Figure 4-1 shows what the page looks like when rendered.
###### Figure 4-1. Burt's Books: Form for adding a new book
### Saving to the database
Let's take a look at the `post` method of `BookEditHandler`. This method handles requests that come from the book edit form. Here's the source code:
def post(self, isbn=None):
import time
book_fields = ['isbn', 'title', 'subtitle', 'image', 'author',
'date_released', 'description']
coll = self.application.db.books
book = dict()
if isbn:
book = coll.find_one({"isbn": isbn})
for key in book_fields:
book[key] = self.get_argument(key, None)
if isbn:
coll.save(book)
else:
book['date_added'] = int(time.time())
coll.insert(book)
self.redirect("/recommended/")
Like the `get` method, the `post` method does double duty: it handles requests to edit existing documents and requests to add a new document. If there's an `isbn` argument (i.e., the path of the request was something like `/edit/0-123-456`), we assume that we're editing the document with the given ISBN. If such an argument is not present, we assume that we're adding a new document.
We begin with an empty dictionary variable called `book`. If we're editing an existing book, we load the document corresponding to the incoming ISBN from the database using the `books` collection's `find_one` method. In either case, the `book_fields` list specifies what fields should be present in a book document. We iterate over this list, grabbing the corresponding values from the `POST` request using the `get_argument` method of the `RequestHandler` object.
At this point, we're ready to update the database. If we have an ISBN, we call the collection's `save` method to update the book document in the database. If not, we call the collection's `insert` method, taking care to first add a value for the `date_added` key. (We didn't include this in our list of fields to fetch from the incoming request, as it doesn't make sense to be able to edit the `date_added` value after the book has been added to the database.) When we're done, we use the `redirect` method of the `RequestHandler` class to send the user back to the Recommendations page. Any changes that we made should be visible there immediately. Figure 4-2 shows what the updated Recommendations page might look like.
###### Figure 4-2. Burt's Books: Recommended list with newly added book
You'll also notice that we've added an "Edit" link to each book entry, which links to the Edit form for each book in the list. Here's the source code for the modified Book module:
<div class="book" style="overflow: auto">
<h3 class="book_title">{{ book["title"] }}</h3>
{% if book["subtitle"] != "" %}
<h4 class="book_subtitle">{{ book["subtitle"] }}</h4>
{% end %}
<img src="{{ book["image"] }}" class="book_image"/>
<div class="book_details">
<div class="book_date_released">Released: {{ book["date_released"]}}</div>
<div class="book_date_added">Added: {{
locale.format_date(book["date_added"], relative=False) }}</div>
<h5>Description:</h5>
<div class="book_body">{% raw book["description"] %}</div>
<p><a href="/edit/{{ book['isbn'] }}">Edit</a></p>
</div>
</div>
The important line is this one:
<p><a href="/edit/{{ book['isbn'] }}">Edit</a></p>
The link to the Edit page is made by appending the value of the book's `isbn` key to the string `/edit/`. This link will lead to the Edit form for the book in question. You can see the results in Figure 4-3.
###### Figure 4-3. Burt's Books: Recommended list with edit links
# MongoDB: Next Steps
We've covered only the bare essentials of MongoDB here—just enough to implement the example web applications in this chapter. If you're interested in learning more about PyMongo and MongoDB in general, the PyMongo tutorial ( _http://api.mongodb.org/python/2.0.1/tutorial.html_) and the MongoDB tutorial ( _http://www.mongodb.org/display/DOCS/Tutorial_) are good places to start.
If you're interested in making MongoDB applications with Tornado that perform well at scale, you'll want to familiarize yourself with asyncmongo ( _https://github.com/bitly/asyncmongo_), a PyMongo-like library for performing MongoDB requests asynchronously. We'll discuss what asynchronous requests are, and why they're good for scalable web applications, in Chapter 5.
# Chapter 5. Asynchronous Web Services
Thus far, we've taken a look at many of the features that make Tornado such a powerful framework for web applications. Its simplicity, ease of use, and handy helpers are enough reason to make it a great choice for many web projects. However, one of the most talked about features of Tornado is its ability to fetch and serve content asynchronously, and with good reason: it makes it easy to handle nonblocking requests, ultimately resulting in more efficient processes and greater scaling possibilities. In this chapter, we'll take a look at the basics of asynchronous requests in Tornado, as well as some long polling techniques that will allow you to write simpler web applications that can serve more requests with fewer resources.
# Asynchronous Web Requests
Most web applications (including the examples we've looked at thus far) are blocking in nature, meaning that while a request is being handled, the process hangs until the request is completed. In most cases, web requests handled by Tornado should complete fast enough that this is not a concern. However, for operations that can take some time to complete (like large database requests or calls to external APIs) it means that the application is effectively locked until the process is finished, and for obvious reasons that is a problem at scale.
However, Tornado gives us better ways to handle this sort of situation. Instead of leaving a process hanging while it waits for a request to finish, the application can start a request and give it a callback for when that completes, leaving the I/O loop open to serve other clients while it waits for the first process to complete.
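The shape of that callback-style flow can be sketched in miniature with plain Python. This is only an illustration of the pattern, not Tornado's actual API (the `fetch_async` function below is made up for the sketch):

```python
# Callback pattern in miniature: instead of blocking for a return
# value, the caller registers a function to be invoked when the work
# completes, leaving the process free to serve other clients meanwhile.
def fetch_async(url, callback):
    # A real client would hand the request off to the I/O loop here and
    # return immediately; we fake an instant response to stay runnable.
    callback("response body for " + url)

received = []
fetch_async("http://search.twitter.com/search.json", received.append)
print(received[0])
```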
To illustrate Tornado's asynchronous features, we're going to build a simple web application that makes HTTP requests to the Twitter Search API. The web application takes a parameter `q` on the query string and determines how often a tweet with that search term is posted on Twitter ("tweets per second"). The methodology for determining this number is very rough, but it's good enough for example purposes. Figure 5-1 shows what the application looks like.
###### Figure 5-1. Asynchronous HTTP example: tweet rate
We're going to show three versions of this application: first, the version that uses a synchronous HTTP request, then a version that uses Tornado's asynchronous HTTP client with a callback. Finally, we'll show how to use Tornado 2.1's new `gen` module to make asynchronous HTTP requests cleaner and easier to implement. You don't need to be an expert on the Twitter Search API to understand these examples, but a passing familiarity won't hurt. You can read the developer documentation for the search API here: _https://dev.twitter.com/docs/api/1/get/search_.
## Starting Synchronous
Example 5-1 contains the source code for the synchronous version of our tweet rate calculator. Note that we import Tornado's `httpclient` module up at the top: we're going to use the `HTTPClient` class from that module to perform our HTTP requests. Later on, we'll use the `AsyncHTTPClient` class, which is available in the same module.
##### Example 5-1. Synchronous HTTP requests: tweet_rate.py
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
import tornado.httpclient

import urllib
import json
import datetime
import time

from tornado.options import define, options
define("port", default=8000, help="run on the given port", type=int)

class IndexHandler(tornado.web.RequestHandler):
    def get(self):
        query = self.get_argument('q')
        client = tornado.httpclient.HTTPClient()
        response = client.fetch("http://search.twitter.com/search.json?" + \
                urllib.urlencode({"q": query, "result_type": "recent", "rpp": 100}))
        body = json.loads(response.body)
        result_count = len(body['results'])
        now = datetime.datetime.utcnow()
        raw_oldest_tweet_at = body['results'][-1]['created_at']
        oldest_tweet_at = datetime.datetime.strptime(raw_oldest_tweet_at,
                "%a, %d %b %Y %H:%M:%S +0000")
        seconds_diff = time.mktime(now.timetuple()) - \
                time.mktime(oldest_tweet_at.timetuple())
        tweets_per_second = float(result_count) / seconds_diff
        self.write("""
<div style="text-align: center">
    <div style="font-size: 72px">%s</div>
    <div style="font-size: 144px">%.02f</div>
    <div style="font-size: 24px">tweets per second</div>
</div>""" % (query, tweets_per_second))

if __name__ == "__main__":
    tornado.options.parse_command_line()
    app = tornado.web.Application(handlers=[(r"/", IndexHandler)])
    http_server = tornado.httpserver.HTTPServer(app)
    http_server.listen(options.port)
    tornado.ioloop.IOLoop.instance().start()
The structure of this program should be familiar to you by now: we have a `RequestHandler` class, `IndexHandler`, that handles requests going to the root path of the application. Inside the `get` method of `IndexHandler`, we grab the `q` parameter from the query string (using `get_argument`) and then use it to perform a request to the Twitter Search API. Here's the most relevant bit of code:
client = tornado.httpclient.HTTPClient()
response = client.fetch("http://search.twitter.com/search.json?" + \
        urllib.urlencode({"q": query, "result_type": "recent", "rpp": 100}))
body = json.loads(response.body)
Here we instantiate Tornado's `HTTPClient` class, then call `fetch` on the resulting object. The synchronous version of the `fetch` method takes as a parameter the URL to be fetched. We construct a URL to grab relevant search results from the Twitter Search API (the `rpp` parameter specifies that we want 100 tweets in the first page of search results, while the `result_type` parameter specifies that we want only the most recent tweets that match our search). The `fetch` method returns an `HTTPResponse` object, whose `body` attribute contains whatever data was fetched from the remote URL. Twitter returns results in JSON format, so we use Python's `json` module to create a Python data structure from the results.
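The query-string construction is ordinary `urllib` usage; here it is isolated as a standalone sketch. (The endpoint is Twitter's since-retired v1 search API, so treat the URL itself as illustrative.)

```python
# Import that works under Python 3 as well as the book's Python 2.
try:
    from urllib.parse import urlencode   # Python 3
except ImportError:
    from urllib import urlencode         # Python 2

query = "pants"
# urlencode turns the dict into a properly escaped query string.
url = ("http://search.twitter.com/search.json?" +
       urlencode({"q": query, "result_type": "recent", "rpp": 100}))
```

`urlencode` takes care of percent-escaping, so search terms with spaces or punctuation are safe to pass straight through.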
###### Note
The `HTTPResponse` object that the `fetch` method returns allows you to access all parts of the HTTP response, not just the body. Read more about it in the official documentation.
The rest of the code in the handler is concerned with calculating our tweets per second figure. We use the difference in time between the oldest tweet in the search results and the current timestamp to determine how many seconds the search covers, then use that number to divide the number of tweets retrieved in the search to arrive at our final figure. Finally, we write some rudimentary HTML with the figure to the browser.
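That arithmetic can be factored into a small standalone function. The `tweets_per_second` helper below is hypothetical (it's not part of the example application) but mirrors the handler's math exactly:

```python
import datetime
import time

def tweets_per_second(created_at_stamps, now=None):
    """Rough tweets-per-second figure, mirroring the handler's math.

    `created_at_stamps` are Twitter-style timestamps such as
    "Sun, 01 Jan 2012 00:00:00 +0000", newest first, so the last
    entry is the oldest tweet and anchors the time window.
    """
    now = now or datetime.datetime.utcnow()
    oldest = datetime.datetime.strptime(created_at_stamps[-1],
            "%a, %d %b %Y %H:%M:%S +0000")
    # Seconds covered by the search window.
    seconds = time.mktime(now.timetuple()) - time.mktime(oldest.timetuple())
    return float(len(created_at_stamps)) / seconds
```

For example, two tweets whose oldest timestamp is 100 seconds in the past works out to a rate of 0.02 tweets per second.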
## The Trouble with Blocking
So far, we've written a simple Tornado application that makes a request to the Twitter API and then returns the results to the browser. And while the application itself should be fairly quick to respond, there will always be a lag between when the request to Twitter is made and when the search data returns. In a synchronous (and for now, we'll assume single-threaded) application, this means that only one request can be served at a time. So, if your application involves a two-second API request, you're going to be serving (at most!) one request every other second. That's not what you might call a highly scalable application, even spread over multiple threads and/or multiple servers.
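The back-of-the-envelope math is worth making explicit. This tiny helper (an illustration, not part of the example application) computes the throughput ceiling for a blocking server:

```python
def max_requests_per_second(request_latency_s, workers=1):
    """Upper bound on requests/second for a blocking server: each
    worker (process or thread) can complete at most one request per
    `request_latency_s` seconds, no matter how fast the server code
    itself is."""
    return workers / float(request_latency_s)
```

With a two-second API round trip and a single process, the ceiling is 0.5 requests per second; even four workers only raise it to 2.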
To take a concrete look at this, let's benchmark the example we've written. You can verify the performance of this application with any benchmarking tool, though here we'll use the excellent Siege utility for our tests as follows:
$ **siege http://localhost:8000/?q=pants -c10 -t10s**
In this case, Siege will make roughly 10 concurrent requests for 10 seconds to our application, the output of which is shown in Figure 5-2.
###### Figure 5-2. Synchronous tweet-rate fetch
The problem, as we can quickly see here, is that while each request returns somewhat quickly on its own, the API roundtrip has enough lag in it that it forces the process to hang until the request completes and the data is handled. This is not a concern for just one or two requests, but spread across 100 (or even 10) users, it means slowdowns across the board.
Here, 10 simulated users over a time period of fewer than 10 seconds brought the average response time to 1.99 seconds, with a grand total of 29 hits served. And keep in mind, this example is serving just a dead-simple web page. If you were to add in calls to other web services or databases, the result would be far worse. If this type of code were used on a site that got even a moderate amount of traffic, requests would get increasingly slower, and eventually begin to time out or fail.
## Basic Asynchronous Calls
Fortunately, Tornado includes a class called `AsyncHTTPClient`, which performs HTTP requests asynchronously. It works a lot like the synchronous client illustrated in Example 5-1, with a few important differences that we'll discuss. See Example 5-2 for the source code.
##### Example 5-2. Asynchronous HTTP requests: tweet_rate_async.py
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
import tornado.httpclient

import urllib
import json
import datetime
import time

from tornado.options import define, options
define("port", default=8000, help="run on the given port", type=int)

class IndexHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        query = self.get_argument('q')
        client = tornado.httpclient.AsyncHTTPClient()
        client.fetch("http://search.twitter.com/search.json?" + \
                urllib.urlencode({"q": query, "result_type": "recent", "rpp": 100}),
                callback=self.on_response)

    def on_response(self, response):
        body = json.loads(response.body)
        result_count = len(body['results'])
        now = datetime.datetime.utcnow()
        raw_oldest_tweet_at = body['results'][-1]['created_at']
        oldest_tweet_at = datetime.datetime.strptime(raw_oldest_tweet_at,
                "%a, %d %b %Y %H:%M:%S +0000")
        seconds_diff = time.mktime(now.timetuple()) - \
                time.mktime(oldest_tweet_at.timetuple())
        tweets_per_second = float(result_count) / seconds_diff
        self.write("""
<div style="text-align: center">
    <div style="font-size: 72px">%s</div>
    <div style="font-size: 144px">%.02f</div>
    <div style="font-size: 24px">tweets per second</div>
</div>""" % (self.get_argument('q'), tweets_per_second))
        self.finish()

if __name__ == "__main__":
    tornado.options.parse_command_line()
    app = tornado.web.Application(handlers=[(r"/", IndexHandler)])
    http_server = tornado.httpserver.HTTPServer(app)
    http_server.listen(options.port)
    tornado.ioloop.IOLoop.instance().start()
The `fetch` method of `AsyncHTTPClient` does not return with the results of the call. Instead, it accepts a `callback` parameter; the method or function you specify will be invoked when the HTTP request completes, with the `HTTPResponse` object as its parameter.
client = tornado.httpclient.AsyncHTTPClient()
client.fetch("http://search.twitter.com/search.json?" + \
        urllib.urlencode({"q": query, "result_type": "recent", "rpp": 100}),
        callback=self.on_response)
In this example, we specified the method `on_response` as the callback. All of the logic that we used to transform the Twitter Search API request into a web page with the desired output was then moved into the `on_response` function. Also note the use of the `@tornado.web.asynchronous` decorator (before the definition of the `get` method) and the call to `self.finish()` at the end of the callback method. We'll discuss those in more detail shortly.
This version of the application has the same outward behavior as the synchronous version, but it performs much better. How much better? Well, let's look at the benchmark readout.
As you can see in Figure 5-3, we've gone from 3.20 transactions per second in our synchronous example to 12.59, serving a total of 118 hits for the same period of time. That's a pretty solid improvement! As you could imagine, spread over more users and a longer period of time, this would serve many more connections, and would not be as likely to suffer the slowdown issues that the synchronous example showed.
###### Figure 5-3. Asynchronous tweet-rate fetch
## The asynchronous Decorator and the finish Method
Tornado's default behavior is to close the connection to the client when the function handling the request returns. In normal circumstances, this is exactly what you want. But when we're performing an asynchronous request that requires a callback, we need the connection to stay open until the callback has been executed. You can tell Tornado to leave the connection open by using the `@tornado.web.asynchronous` decorator on the method whose behavior you want to change, as we did with the `get` method of the `IndexHandler` in the asynchronous version of the Tweet Rate example. The following is the relevant snippet of code:
class IndexHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        query = self.get_argument('q')

        [... other request handler code here...]
Note that when you use the `@tornado.web.asynchronous` decorator, Tornado will never close the connection on its own. You must explicitly tell Tornado to close the request by calling the `finish` method of your `RequestHandler` object. (Otherwise, the request will appear to hang, and the browser may or may not display the data we've already sent to the client.) In the preceding asynchronous example, we called the `finish` method right after our call to `write` in the `on_response` function:
[... other callback code ...]
self.write("""
<div style="text-align: center">
    <div style="font-size: 72px">%s</div>
    <div style="font-size: 144px">%.02f</div>
    <div style="font-size: 24px">tweets per second</div>
</div>""" % (self.get_argument('q'), tweets_per_second))
self.finish()
## Asynchronous Generators
Now, the asynchronous version of our Tweet Rate program works great and performs well. Unfortunately, it's a little bit messy: we've had to split our code for handling the request across two different methods. This can get especially hard to code and maintain when we have two or more asynchronous requests to perform, each dependent on the previous call: soon you can find yourself calling callbacks from within callbacks within callbacks. What follows is a contrived (but not impossible) illustration:
def get(self):
    client = AsyncHTTPClient()
    client.fetch("http://example.com", callback=on_response)

def on_response(self, response):
    client = AsyncHTTPClient()
    client.fetch("http://another.example.com/", callback=on_response2)

def on_response2(self, response):
    client = AsyncHTTPClient()
    client.fetch("http://still.another.example.com/", callback=on_response3)

def on_response3(self, response):
    [etc., etc.]
Fortunately, Tornado 2.1 introduced the `tornado.gen` module, which provides a cleaner pattern for performing asynchronous requests. Example 5-3 contains the source code for a version of the Tweet Rate application that uses `tornado.gen`. Take a look, and then we'll discuss how it works.
##### Example 5-3. Asynchronous requests with the generator pattern: tweet_rate_gen.py
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
import tornado.httpclient
import tornado.gen

import urllib
import json
import datetime
import time

from tornado.options import define, options
define("port", default=8000, help="run on the given port", type=int)

class IndexHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    @tornado.gen.engine
    def get(self):
        query = self.get_argument('q')
        client = tornado.httpclient.AsyncHTTPClient()
        response = yield tornado.gen.Task(client.fetch,
                "http://search.twitter.com/search.json?" + \
                urllib.urlencode({"q": query, "result_type": "recent", "rpp": 100}))
        body = json.loads(response.body)
        result_count = len(body['results'])
        now = datetime.datetime.utcnow()
        raw_oldest_tweet_at = body['results'][-1]['created_at']
        oldest_tweet_at = datetime.datetime.strptime(raw_oldest_tweet_at,
                "%a, %d %b %Y %H:%M:%S +0000")
        seconds_diff = time.mktime(now.timetuple()) - \
                time.mktime(oldest_tweet_at.timetuple())
        tweets_per_second = float(result_count) / seconds_diff
        self.write("""
<div style="text-align: center">
    <div style="font-size: 72px">%s</div>
    <div style="font-size: 144px">%.02f</div>
    <div style="font-size: 24px">tweets per second</div>
</div>""" % (query, tweets_per_second))
        self.finish()

if __name__ == "__main__":
    tornado.options.parse_command_line()
    app = tornado.web.Application(handlers=[(r"/", IndexHandler)])
    http_server = tornado.httpserver.HTTPServer(app)
    http_server.listen(options.port)
    tornado.ioloop.IOLoop.instance().start()
As you can see, this code is largely identical to the previous two versions. The main difference is in how we call the `fetch` method of the `AsyncHTTPClient` object. Here's the relevant part of the code:
client = tornado.httpclient.AsyncHTTPClient()
response = yield tornado.gen.Task(client.fetch,
        "http://search.twitter.com/search.json?" + \
        urllib.urlencode({"q": query, "result_type": "recent", "rpp": 100}))
body = json.loads(response.body)
We use Python's `yield` keyword and an instance of the `tornado.gen.Task` object, passing in the function we want to call and the parameters to pass to that function. Here, the use of `yield` returns control of the program to Tornado, allowing it to perform other tasks while the HTTP request is in progress. When the HTTP request is finished, the `RequestHandler` method resumes where it left off. The beauty of this construction is that it returns the HTTP response right in the request handler, not in a callback. As a consequence, the code is easier to understand: all of the logic related to the request is located in the same place. The HTTP request is still performed asynchronously, however, and so we get the same performance gains from using `tornado.gen` as we do from using an asynchronous request with a callback, as we can see in Figure 5-4.
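To see why this works, it helps to strip the pattern down to plain Python generators. The toy `run_engine` below is a hypothetical stand-in for `tornado.gen.engine`, with none of its IOLoop integration: it drives a generator and sends each yielded task's result back in at the `yield` expression, just as Tornado resumes the handler when the real asynchronous operation completes.

```python
def run_engine(gen_func):
    """Drive a generator, feeding each yielded "task" back in as its
    result. The real tornado.gen.engine resumes the generator from the
    IOLoop when the async work finishes; here we just call the task
    synchronously to show the control flow."""
    gen = gen_func()
    try:
        task = next(gen)             # run the handler up to its first yield
        while True:
            result = task()          # pretend the async work completed
            task = gen.send(result)  # resume the handler with the result
    except StopIteration:
        pass                         # the handler ran to completion

results = []

def fake_fetch():
    return "response body"

def handler():
    # Control returns to the "engine" at this yield, and the engine
    # sends the result back in, just like `yield gen.Task(...)`.
    response = yield fake_fetch
    results.append(response)

run_engine(handler)
```

After `run_engine(handler)` returns, `results` holds the string produced by `fake_fetch`, demonstrating that the value of the `yield` expression is the "response" supplied by the engine.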
Note the use of the `@tornado.gen.engine` decorator just before the definition of the `get` method; this is what informs Tornado that the method will be using the `tornado.gen.Task` class. The `tornado.gen` module has a number of other classes and functions that ease asynchronous programming in Tornado. It's worth looking over the documentation.
##### Making Everything Asynchronous
We've been using Tornado's asynchronous HTTP client in this chapter as an illustration of how to perform tasks asynchronously. Other developers have written asynchronous client libraries for other kinds of tasks. Volunteers maintain a fairly complete list of such libraries on the Tornado wiki.
One notable example is bit.ly's asyncmongo, which can be used to make calls to a MongoDB server asynchronous. This one is a particularly good choice for us, as it was developed specifically to provide async database access to Tornado developers, but for those using other databases, there's a good chance your data store of choice also has an asynchronous library listed there.
###### Figure 5-4. Asynchronous tweet-rate fetch using tornado.gen
## Summary of Asynchronous Operations
As we've seen in the preceding examples, asynchronous web services in Tornado are both easy to implement and incredibly powerful in practice. Using asynchronous handlers for longer API and database requests can keep your application from blocking, and ultimately serve more requests faster. While not every handler benefits from being asynchronous—and in fact trying to make a full application nonblocking can overcomplicate things quickly—Tornado's nonblocking features can be extremely handy for building web applications that depend on slower queries or external services.
However, it's worth noting here that these examples are fairly contrived. If you were designing an application with this functionality at any kind of scale, you'd probably want to have the client web browser do the Twitter search request (in JavaScript), and let the web server move on to serving other requests. In most cases, you'd at least want to cache the results so that two requests for the same search term didn't incur a full request to the remote API. In general, if you're doing an HTTP request on the backend just to serve your web content, you're probably going to want to rethink how your application is set up.
With this in mind, over the next set of examples we're going to take a look at dealing with asynchronous applications from the frontend side using tools like JavaScript to let the clients take on more of the work and help scale out your applications.
# Long Polling with Tornado
Another advantage of Tornado's asynchronous architecture is the ease with which it handles HTTP long polling. This is a way of handling real-time updates, which can be used for effects as simple as a notification badge and as complex as multi-user chat rooms.
Developing web applications that offer real-time updates is a constant challenge for web programmers. Updating a user's status, sending new message notifications, or indicating any other global activity all require a method for the server to send messages to the browser after the initial document has finished loading. One early approach was for the browser to poll the server for new updates at a regular interval. This technique poses obvious challenges: the polling frequency must be fast enough that notifications are up-to-date, but not so frequent that the HTTP requests pose serious scaling challenges when hundreds or thousands of clients continually open new connections. Frequent polling presents a "death by a thousand cuts" strain on a web server.
So-called "server push" technology allows web applications to distribute updates in real time while maintaining reasonable resource usage and ensuring predictable scaling. For a server push technology to be practical, it must play nicely with existing browsers. The most popular technique is to emulate a server pushing updates by letting the browser initiate the connection. These sorts of HTTP connections are called long polling, or Comet requests.
Long polling means that the browser simply initiates an HTTP request whose connection the server intentionally leaves open. The browser will simply wait for the server to "push" a response whenever an update is available. After the server sends a response and closes the connection (or if the client request times out on the browser side), the client simply opens a new connection and waits for the next update.
This section will cover HTTP long polling in a simple real-time application and demonstrate how Tornado's architecture makes these applications easy.
## The Benefits of Long Polling
The primary appeal of HTTP long polling is that it dramatically reduces the load on a web server. Instead of clients making many short, frequent requests (and incurring the overhead of processing the HTTP headers each time), the server processes the connection only when it receives an initial request and again when there's a response to be sent. During the majority of the time that there's no new data, the connection won't consume any processor resources.
Browser compatibility is another huge benefit. Any web browser that supports AJAX requests can make long polling requests. No browser plug-ins or other add-ons are required. Compared with other server-push techniques, HTTP long polling ends up being one of the few viable options that are seen in widespread use.
We've already touched on some of the uses for long polling. In fact, the previously mentioned status updates, message notifications, and chat messages are all features on current popular web sites. Sites such as Google Docs use long polling for synchronized collaboration, where two people can edit a document simultaneously and watch each other's changes. Twitter uses long polling to instruct the browser to display notifications that new status updates are available. Facebook uses the technique for its chat feature. One reason long polling is so popular is that it improves an application's user experience: visitors no longer have to constantly refresh the page to see the latest content.
## Example: Live Inventory Reporting
This example demonstrates a service that keeps a live count of a retailer's inventory updated across multiple shoppers' browsers. The application serves an HTML book detail page with an "Add to Cart" button and a count of the book's remaining inventory. Immediately after one shopper adds the book to her cart, other visitors browsing the site will see the remaining inventory decrement.
In order to provide the inventory updates, we need to write a `RequestHandler` subclass that doesn't immediately close the HTTP connection after the initial handler method is called. We accomplish this feat with Tornado's built-in `asynchronous` decorator, which we introduce in Example 5-4.
##### Example 5-4. Long polling: shopping_cart.py
import tornado.web
import tornado.httpserver
import tornado.ioloop
import tornado.options

from uuid import uuid4

class ShoppingCart(object):
    totalInventory = 10
    callbacks = []
    carts = {}

    def register(self, callback):
        self.callbacks.append(callback)

    def moveItemToCart(self, session):
        if session in self.carts:
            return
        self.carts[session] = True
        self.notifyCallbacks()

    def removeItemFromCart(self, session):
        if session not in self.carts:
            return
        del(self.carts[session])
        self.notifyCallbacks()

    def notifyCallbacks(self):
        for c in self.callbacks:
            self.callbackHelper(c)
        self.callbacks = []

    def callbackHelper(self, callback):
        callback(self.getInventoryCount())

    def getInventoryCount(self):
        return self.totalInventory - len(self.carts)

class DetailHandler(tornado.web.RequestHandler):
    def get(self):
        session = uuid4()
        count = self.application.shoppingCart.getInventoryCount()
        self.render("index.html", session=session, count=count)

class CartHandler(tornado.web.RequestHandler):
    def post(self):
        action = self.get_argument('action')
        session = self.get_argument('session')
        if not session:
            self.set_status(400)
            return
        if action == 'add':
            self.application.shoppingCart.moveItemToCart(session)
        elif action == 'remove':
            self.application.shoppingCart.removeItemFromCart(session)
        else:
            self.set_status(400)

class StatusHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        self.application.shoppingCart.register(self.async_callback(self.on_message))

    def on_message(self, count):
        self.write('{"inventoryCount":"%d"}' % count)
        self.finish()

class Application(tornado.web.Application):
    def __init__(self):
        self.shoppingCart = ShoppingCart()

        handlers = [
            (r'/', DetailHandler),
            (r'/cart', CartHandler),
            (r'/cart/status', StatusHandler)
        ]

        settings = {
            'template_path': 'templates',
            'static_path': 'static'
        }

        tornado.web.Application.__init__(self, handlers, **settings)

if __name__ == '__main__':
    tornado.options.parse_command_line()

    app = Application()
    server = tornado.httpserver.HTTPServer(app)
    server.listen(8000)
    tornado.ioloop.IOLoop.instance().start()
Let's take a closer look at _shopping_cart.py_ before looking at the template and script files. We define a `ShoppingCart` class that maintains the number of items in our inventory and a list of the shoppers who have added the item to their carts. Next, we specify the `DetailHandler`, which renders the HTML; the `CartHandler`, which provides an interface to manipulate the cart; and the `StatusHandler`, which we query for notifications of changes to the global inventory.
The `DetailHandler` simply generates a unique identifier for each request of the page, provides the inventory count at the time of the request, and renders the _index.html_ template to the browser. The `CartHandler` provides an API for the browser to request the item be added or removed from the visitor's shopping cart. The JavaScript running in the browser will submit `POST` requests to manipulate the visitor's cart. We will see how these methods interact with the inventory count queries that follow when we look at the `StatusHandler` and the `ShoppingCart` classes.
class StatusHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        self.application.shoppingCart.register(self.async_callback(self.on_message))
The first thing to notice about the `StatusHandler` is the `@tornado.web.asynchronous` decorator on the `get` method. This instructs Tornado not to close the connection when the `get` method returns. In the method itself, we simply register a callback with the shopping cart controller. We wrap the callback method with `self.async_callback` to ensure that exceptions raised in the callback don't prevent the `RequestHandler` from properly closing the connection.
###### Note
In Tornado versions prior to 1.1, callbacks had to be wrapped in the `self.async_callback()` method to catch any exceptions that might be thrown in the wrapped function. In Tornado versions 1.1 and newer, however, this is not explicitly necessary.
def on_message(self, count):
    self.write('{"inventoryCount":"%d"}' % count)
    self.finish()
Whenever a visitor's cart is manipulated, the `ShoppingCart` controller invokes the `on_message` method for each of the registered callbacks. This method writes the current inventory count to the client and closes the connection. (If the server doesn't close the connection, the browser may not know the request has completed, and won't notify the script that there's been an update.) Now that the long polling connections are closed, the shopping cart controller must remove the callbacks from the list of registered callbacks. In this example, we simply replace the list of callbacks with a new, empty list.
It is important to remove registered callbacks after they have been invoked and finished in the request handler, since invoking the callback subsequently would call `finish()` on a previously closed connection, which is an error.
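The "invoke once, then forget" discipline can be captured in a small, framework-free sketch. The `CallbackRegistry` class below is hypothetical (it's not part of the example application) but mirrors how `ShoppingCart` empties its `callbacks` list during notification, so that no callback can ever fire against an already-closed connection:

```python
class CallbackRegistry(object):
    """One-shot callback list: each registered callback fires at most
    once. Mirrors how ShoppingCart clears `callbacks` after notifying,
    which is what keeps finish() from being called twice on a handler.
    """
    def __init__(self):
        self.callbacks = []

    def register(self, callback):
        self.callbacks.append(callback)

    def notify(self, value):
        # Swap in a fresh list *before* invoking, so a callback that
        # re-registers during notification isn't fired in this round.
        pending, self.callbacks = self.callbacks, []
        for callback in pending:
            callback(value)

fired = []
registry = CallbackRegistry()
registry.register(fired.append)
registry.notify(9)   # the callback fires once with the new count
registry.notify(8)   # no callbacks left; nothing fires a second time
```

After the two `notify` calls, `fired` contains only the first value, confirming that a registered callback can never be invoked twice.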
Finally, the `ShoppingCart` controller manages inventory allocation and status callbacks. The `StatusHandler` registers callbacks via the `register` method, which appends the method to the internal `callbacks` array.
def moveItemToCart(self, session):
    if session in self.carts:
        return
    self.carts[session] = True
    self.notifyCallbacks()

def removeItemFromCart(self, session):
    if session not in self.carts:
        return
    del(self.carts[session])
    self.notifyCallbacks()
The `ShoppingCart` controller also makes the `moveItemToCart` and `removeItemFromCart` methods available to the `CartHandler`. When the `CartHandler` invokes these methods, the requesting page's unique identifier (the `session` variable passed to the methods) is used to mark the inventory before we call `notifyCallbacks`.
def notifyCallbacks(self):
    for c in self.callbacks:
        self.callbackHelper(c)
    self.callbacks = []

def callbackHelper(self, callback):
    callback(self.getInventoryCount())
The registered callbacks are invoked with the current available inventory count and the callback list is emptied to ensure a callback isn't invoked on a closed connection.
See Example 5-5 for the HTML template that displays the list of books as they change.
##### Example 5-5. Long polling: index.html
<html>
    <head>
        <title>Burt's Books – Book Detail</title>
        <script src="//ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"
                type="text/javascript"></script>
        <script src="{{ static_url('scripts/inventory.js') }}"
                type="application/javascript"></script>
    </head>
    <body>
        <div>
            <h1>Burt's Books</h1>
            <hr/>
            <p><h2>The Definitive Guide to the Internet</h2>
            <em>Anonymous</em></p>
        </div>
        <img src="static/images/internet.jpg"
                alt="The Definitive Guide to the Internet" />
        <hr />
        <input type="hidden" id="session" value="{{ session }}" />
        <div id="add-to-cart">
            <p><span style="color: red;">Only <span id="count">{{ count }}</span>
                    left in stock! Order now!</span></p>
            <p>$20.00 <input type="submit" value="Add to Cart" id="add-button" /></p>
        </div>
        <div id="remove-from-cart" style="display: none;">
            <p><span style="color: green;">One copy is in your cart.</span></p>
            <p><input type="submit" value="Remove from Cart" id="remove-button" /></p>
        </div>
    </body>
</html>
When the `DetailHandler` renders the _index.html_ template, we simply render the book description and include the required JavaScript code. Additionally, we dynamically include a unique ID via the `session` variable and the current inventory stock as `count`.
Finally, we will discuss the client-side JavaScript code. While this is a book on Tornado, and therefore we've been using Python up until now, the client-side code is vital enough to this example that it's important to at least understand the gist of it. In Example 5-6, we're using the jQuery library to assist in defining the page's behavior in the browser.
##### Example 5-6. Long polling: inventory.js
$(document).ready(function() {
    document.session = $('#session').val();
    setTimeout(requestInventory, 100);

    $('#add-button').click(function(event) {
        jQuery.ajax({
            url: '//localhost:8000/cart',
            type: 'POST',
            data: {
                session: document.session,
                action: 'add'
            },
            dataType: 'json',
            beforeSend: function(xhr, settings) {
                $(event.target).attr('disabled', 'disabled');
            },
            success: function(data, status, xhr) {
                $('#add-to-cart').hide();
                $('#remove-from-cart').show();
                $(event.target).removeAttr('disabled');
            }
        });
    });

    $('#remove-button').click(function(event) {
        jQuery.ajax({
            url: '//localhost:8000/cart',
            type: 'POST',
            data: {
                session: document.session,
                action: 'remove'
            },
            dataType: 'json',
            beforeSend: function(xhr, settings) {
                $(event.target).attr('disabled', 'disabled');
            },
            success: function(data, status, xhr) {
                $('#remove-from-cart').hide();
                $('#add-to-cart').show();
                $(event.target).removeAttr('disabled');
            }
        });
    });
});

function requestInventory() {
    jQuery.getJSON('//localhost:8000/cart/status', {session: document.session},
        function(data, status, xhr) {
            $('#count').html(data['inventoryCount']);
            setTimeout(requestInventory, 0);
        }
    );
}
When the document is finished loading, we add click event handlers to the "Add to Cart" button as well as the hidden "Remove from Cart" button. These event handler functions make the associated API calls to the server and swap the add-to-cart interface for the remove-from-cart one.
function requestInventory() {
    jQuery.getJSON('//localhost:8000/cart/status', {session: document.session},
        function(data, status, xhr) {
            $('#count').html(data['inventoryCount']);
            setTimeout(requestInventory, 0);
        }
    );
}
The `requestInventory` function is called with a short delay after the page has finished loading. In the function body, we initiate the long polling connection via an HTTP `GET` request to the `/cart/status` resource. The delay allows the loading progress indicator to complete when the browser finishes rendering the page and prevents the Esc key or Stop button from interrupting the long polling request. When the request returns successfully, the content of the `count` span is updated with the current stock tally. Figure 5-5 shows two browser windows displaying full inventory.
###### Figure 5-5. Long polling example: Full inventory
Now, when you run the server, you will be able to load the root URL and see the current inventory count for the book. Open multiple browser windows to the detail page and click the "Add to Cart" button in one of the windows. The number of remaining copies will immediately be updated in the other windows, as illustrated in Figure 5-6.
###### Figure 5-6. Long polling example: One item in a cart
This is a somewhat naive shopping cart implementation, to be sure—there is no logic to make sure we don't dip below our total stock, not to mention that the data will not persist between invocations of the Tornado application or between parallel instances of the application on the same server. We will leave those improvements as an exercise for the reader.
## The Downsides of Long Polling
As we've seen, HTTP long polling is incredibly useful for communicating highly interactive feedback about a site or a particular user's status. But there are a couple of pitfalls to be aware of.
When developing applications that use long polling, it's important to remember that the server has no control over the browser's request timeout interval. It's up to the browser to re-initiate the HTTP connection in the case of any interruption. Another potential issue is that many web browsers limit the number of simultaneous requests that may be opened to a particular host. With one connection sitting idle, the number of requests remaining to download site content may be limited.
Additionally, you should still be aware of how the requests will affect server performance. Consider the shopping cart application again. Since all of the Comet requests are answered and closed _en masse_ whenever the inventory changes, the server will be slammed with new requests as browsers re-establish the connections. For applications like user-to-user chat or message notifications, where only a few users' connections will close at a time, this is less of an issue.
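One common mitigation, not covered in this example, is to have clients wait a short, randomized interval before re-establishing their long polling connections, so that reconnections are spread out rather than arriving all at once. A minimal sketch of the idea (the function name and delay values here are our own, not part of the example application):

```python
import random

def reconnect_delay(base=0.5, jitter=2.0):
    """Seconds a client should wait before re-polling.

    A fixed floor plus random jitter spreads reconnections over
    [base, base + jitter) seconds, so a mass notification does not
    trigger a synchronized stampede of new requests.
    """
    return base + random.random() * jitter
```

The client would sleep for `reconnect_delay()` seconds before issuing its next long polling request.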
# WebSockets with Tornado
WebSockets are a new protocol for client-server communication proposed in the HTML 5 spec. The protocol is still a draft, and only the most recent web browsers support it. However, its benefits are significant enough that we will see the protocol become more popular as more browsers begin to support it. (As always with web development, it's prudent to adhere to the pragmatic strategy of relying on new features when available and falling back on older technology when necessary.)
The WebSocket protocol provides bidirectional communication over a persistent connection between a client and server. The protocol itself uses a new `ws://` URL scheme, but is implemented on top of standard HTTP. By using the standard HTTP and HTTPS ports, it avoids all kinds of problems introduced when connecting to sites from networks that sit behind web proxies. The HTML 5 spec not only describes the communication protocol itself, but also the browser APIs required to write client-side code that uses WebSockets.
Since WebSockets are already supported in some of the latest browsers, and since Tornado helpfully provides a module for them, it's worth seeing how to implement applications that use WebSockets.
## Tornado's WebSocket Module
Tornado provides a `WebSocketHandler` class as part of the `websocket` module. The class provides hooks for WebSocket events and methods to communicate with the connected client. The `open` method is called when a new WebSocket connection is opened, and the `on_message` and `on_close` methods are called when the connection receives a new message or is closed by the client.
Additionally, the `WebSocketHandler` class provides the `write_message` method to send messages to the client and the `close` method to close the connection. Let's look at a simple handler that repeats the messages it receives back to the client.
class EchoHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        self.write_message('connected!')

    def on_message(self, message):
        self.write_message(message)
As you can see in our `EchoHandler` implementation, the `open` method simply sends the string "connected!" back to the client using the `write_message` method provided by the `WebSocketHandler` base class. The `on_message` method is invoked every time the handler receives a new message from the client, and our implementation echoes the same message back to the client. That's all there is to it! Let's take a look at a complete example to see how easy this protocol is to implement.
## Example: Live Inventory with WebSockets
In this section, we will see how easy it is to update the HTTP long polling example we saw previously to use WebSockets. Keep in mind, however, that WebSockets are a new standard and are only supported by the very latest browser versions. The specific WebSocket protocol versions that Tornado supports are only available in Firefox versions 6.0 and up, Safari 5.0.1, Chrome 6 and higher, and the Internet Explorer 10 developer preview.
With the disclaimer out of the way, let's take a look at the source. Most of the code remains unchanged, but the server application needs a few modifications to the `ShoppingCart` and `StatusHandler` classes. Example 5-7 should look familiar.
##### Example 5-7. Web Sockets: shopping_cart.py
import tornado.web
import tornado.websocket
import tornado.httpserver
import tornado.ioloop
import tornado.options

from uuid import uuid4

class ShoppingCart(object):
    totalInventory = 10
    callbacks = []
    carts = {}

    def register(self, callback):
        self.callbacks.append(callback)

    def unregister(self, callback):
        self.callbacks.remove(callback)

    def moveItemToCart(self, session):
        if session in self.carts:
            return

        self.carts[session] = True
        self.notifyCallbacks()

    def removeItemFromCart(self, session):
        if session not in self.carts:
            return

        del(self.carts[session])
        self.notifyCallbacks()

    def notifyCallbacks(self):
        for callback in self.callbacks:
            callback(self.getInventoryCount())

    def getInventoryCount(self):
        return self.totalInventory - len(self.carts)

class DetailHandler(tornado.web.RequestHandler):
    def get(self):
        session = uuid4()
        count = self.application.shoppingCart.getInventoryCount()
        self.render("index.html", session=session, count=count)

class CartHandler(tornado.web.RequestHandler):
    def post(self):
        action = self.get_argument('action')
        session = self.get_argument('session')

        if not session:
            self.set_status(400)
            return

        if action == 'add':
            self.application.shoppingCart.moveItemToCart(session)
        elif action == 'remove':
            self.application.shoppingCart.removeItemFromCart(session)
        else:
            self.set_status(400)

class StatusHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        self.application.shoppingCart.register(self.callback)

    def on_close(self):
        self.application.shoppingCart.unregister(self.callback)

    def on_message(self, message):
        pass

    def callback(self, count):
        self.write_message('{"inventoryCount":"%d"}' % count)

class Application(tornado.web.Application):
    def __init__(self):
        self.shoppingCart = ShoppingCart()

        handlers = [
            (r'/', DetailHandler),
            (r'/cart', CartHandler),
            (r'/cart/status', StatusHandler)
        ]

        settings = {
            'template_path': 'templates',
            'static_path': 'static'
        }

        tornado.web.Application.__init__(self, handlers, **settings)

if __name__ == '__main__':
    tornado.options.parse_command_line()

    app = Application()
    server = tornado.httpserver.HTTPServer(app)
    server.listen(8000)
    tornado.ioloop.IOLoop.instance().start()
Other than an additional import statement, we need only to change the `ShoppingCart` and `StatusHandler` classes. The first thing to notice is that the `tornado.websocket` module is required in order to get the `WebSocketHandler` functionality.
In the `ShoppingCart` class, we need to make a slight change to the way we notify callbacks. Since WebSockets stay open after a message is sent, we don't need to remove callbacks from the internal list as they are notified. We just iterate over the list and invoke the callbacks with the current inventory count:
def notifyCallbacks(self):
    for callback in self.callbacks:
        callback(self.getInventoryCount())
The other change is to add the `unregister` method. The `StatusHandler` will call this method to remove a callback when a WebSocket connection closes.
def unregister(self, callback):
    self.callbacks.remove(callback)
The bulk of changes are in the `StatusHandler` class, which now inherits from `tornado.websocket.WebSocketHandler`. Instead of implementing handler functions for each of the HTTP methods, WebSocket handlers implement the `open` and `on_message` methods, which are called when a connection is opened and when a message is received over the connection, respectively. Additionally, the `on_close` method is called when a connection is closed by the remote host.
class StatusHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        self.application.shoppingCart.register(self.callback)

    def on_close(self):
        self.application.shoppingCart.unregister(self.callback)

    def on_message(self, message):
        pass

    def callback(self, count):
        self.write_message('{"inventoryCount":"%d"}' % count)
In our implementation, we register the `callback` method with the `ShoppingCart` class when a new connection is opened, and unregister the callback when the connection is closed. Since we're still using the HTTP API calls in the `CartHandler` class, we don't listen for new messages on the WebSocket connection, so the `on_message` implementation is empty. (We override the default implementation of `on_message` to prevent Tornado from raising a `NotImplementedError` if we happen to receive a message.) Finally, the `callback` method writes the message contents to the WebSocket connection when the inventory changes.
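One small aside: the `callback` method assembles its JSON payload by hand with string formatting, which also quotes the count as a string. The standard `json` module handles escaping for us and can emit the count as a JSON number instead; a sketch of the alternative:

```python
import json

def inventory_message(count):
    # Serializes to e.g. '{"inventoryCount": 9}'; the count is a JSON
    # number here, so the client no longer needs to treat it as a string.
    return json.dumps({"inventoryCount": count})
```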
The JavaScript code in this version is virtually identical. We just need to change the `requestInventory` function. Instead of making an AJAX request for the long polling resource, we use the HTML 5 WebSocket API. See Example 5-8.
##### Example 5-8. Web Sockets: The new requestInventory function from inventory.js
function requestInventory() {
    var host = 'ws://localhost:8000/cart/status';
    var websocket = new WebSocket(host);

    websocket.onopen = function (evt) { };
    websocket.onmessage = function(evt) {
        $('#count').html($.parseJSON(evt.data)['inventoryCount']);
    };
    websocket.onerror = function (evt) { };
}
After creating a new WebSocket connection to the URL `ws://localhost:8000/cart/status`, we add handler functions for each of the events we want to respond to. The only event we care about in this example is `onmessage`, which updates the contents of the same `count` span that the previous `requestInventory` function modified. (The slight difference is that we have to manually parse the JSON object that the server sent.)
Just as in the previous example, the inventory count is updated dynamically as shoppers add the book to their cart. The difference here is that one persistent WebSocket connection is used instead of re-opening HTTP requests with each long polling update.
## The Future of WebSockets
The WebSocket protocol is still in draft form, and may change as it is finalized. However, since the specification has just been submitted to the IETF for final review, it is relatively unlikely to face significant changes. As mentioned in the beginning of this section, the major downside to using the WebSocket protocol right now is that only the very latest browsers support it.
Despite those caveats, WebSockets are a promising new way to implement bidirectional communication between a browser and server. As the protocol gains widespread support, we will start seeing implementations in more prominent applications.
# Chapter 6. Writing Secure Applications
Very often, secure applications come at the expense of complexity (and developer headaches). The Tornado web server has been designed with a number of security considerations in mind, making it easy to protect against a few well-documented vulnerabilities. Secure cookies prevent a user's local state from being surreptitiously modified by malicious code in his browser. Additionally, browser cookies can be compared with HTTP request parameter values to prevent cross-site request forgery attacks. In this chapter, we will look at features in Tornado that make preventing these attacks easy and then look at a user authentication example that uses these features.
# Cookie Vulnerabilities
Many websites use browser cookies to store a user's identity between browser sessions. It's a simple and widely compatible way to store persistent state across browser sessions. Unfortunately, browser cookies are susceptible to a number of well-documented attacks. This section will demonstrate how Tornado prevents a malicious script from tampering with your application's stored cookies.
## Cookie Forgery
There are a number of ways cookies can be intercepted in the browser. JavaScript and Flash have read and write access to the cookies on the domain of the page in which they are executed. Browser plug-ins also have programmatic access to this data. Cross-site scripting attacks can take advantage of this access to modify the value of a cookie in the visitor's browser.
## Secure Cookies
Tornado's secure cookies use a cryptographic signature to verify that the value of a cookie has not been modified by anyone other than the server software. Since a malicious script does not know the secret key, it cannot modify a cookie without the application's knowledge.
### Using Secure Cookies
Tornado's `set_secure_cookie()` and `get_secure_cookie()` functions send and retrieve browser cookies that are protected against malicious modifications in the browser. To use these functions, you must specify the `cookie_secret` parameter in the application constructor. Let's look at a simple example.
The application in Example 6-1 will render a page that counts how many times it has been reloaded in the browser. If no cookie has been set (or if the cookie has been tampered with), the application will set a new cookie with the value `1`. Otherwise, the application will increment the value read from the cookie.
##### Example 6-1. Secure Cookie Example: cookie_counter.py
import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.options

from tornado.options import define, options
define("port", default=8000, help="run on the given port", type=int)

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        cookie = self.get_secure_cookie("count")
        count = int(cookie) + 1 if cookie else 1

        countString = "1 time" if count == 1 else "%d times" % count

        self.set_secure_cookie("count", str(count))
        self.write(
            '<html><head><title>Cookie Counter</title></head>'
            "<body><h1>You've viewed this page %s.</h1>"
            '</body></html>' % countString
        )

if __name__ == "__main__":
    tornado.options.parse_command_line()
    settings = {
        "cookie_secret": "bZJc2sWbQLKos6GkHn/VB9oXwQt8S0R0kRvJ5/xJ89E="
    }
    application = tornado.web.Application([
        (r'/', MainHandler)
    ], **settings)
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(options.port)
    tornado.ioloop.IOLoop.instance().start()
If you inspect the value of the cookie in the browser, you will notice that the value stored for `count` is `MQ==|1310335926|8ef174ecc489ea963c5cdc26ab6d41b49502f2e2`. Tornado encodes the cookie value as a Base-64 string and appends a timestamp and an HMAC signature to the cookie contents. If the cookie's timestamp is too old (or from the future), or if the signature doesn't match the expected value, the `get_secure_cookie()` function assumes the cookie has been tampered with and will return `None`, as if the cookie had not been set.
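To make that format concrete, here is a simplified sketch of this style of signed cookie. It is not Tornado's actual implementation, just an illustration of the scheme: Base64-encode the value, append a timestamp, and append an HMAC over all the pieces so that tampering (or an expired timestamp) can be detected:

```python
import base64
import hashlib
import hmac
import time

SECRET = "bZJc2sWbQLKos6GkHn/VB9oXwQt8S0R0kRvJ5/xJ89E="

def _signature(*parts):
    digest = hmac.new(SECRET.encode(), digestmod=hashlib.sha1)
    for part in parts:
        digest.update(part.encode())
    return digest.hexdigest()

def sign_cookie(name, value, timestamp=None):
    timestamp = str(int(time.time() if timestamp is None else timestamp))
    encoded = base64.b64encode(value.encode()).decode()
    return "|".join([encoded, timestamp, _signature(name, encoded, timestamp)])

def verify_cookie(name, cookie, max_age_days=31):
    encoded, timestamp, signature = cookie.split("|")
    if not hmac.compare_digest(signature, _signature(name, encoded, timestamp)):
        return None  # signature mismatch: the cookie was tampered with
    if int(timestamp) < time.time() - max_age_days * 86400:
        return None  # timestamp too old: treat as if the cookie were unset
    return base64.b64decode(encoded).decode()
```

Signing `count=1` produces a value shaped like the one above; changing any piece makes `verify_cookie` return `None`.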
###### Note
The `cookie_secret` value passed to the `Application` constructor should be a unique, random string. Executing the following code snippet in a Python shell will generate one for you:
>>> **import base64, uuid**
>>> **base64.b64encode(uuid.uuid4().bytes + uuid.uuid4().bytes)**
'bZJc2sWbQLKos6GkHn/VB9oXwQt8S0R0kRvJ5/xJ89E='
Tornado's secure cookies are still susceptible to snooping, however. Attackers may be able to intercept cookies via scripts or plug-ins in the browser, or simply by eavesdropping on unencrypted network data. Remember that cookie values are _signed_ rather than _encrypted_. Malicious programs are able to read stored cookies and either transmit their data to arbitrary servers or forge requests by sending them unmodified to the application. Therefore, it's important to avoid storing sensitive user data in a browser cookie.
We also need to be aware of the possibility that a user could modify his own cookies, which could lead to a privilege escalation attack. If, for example, we store the number of remaining articles a user has paid to view in a cookie, we would want to prevent the user from updating that number himself in an attempt to get free content. The `httponly` and `secure` cookie properties can help prevent these sorts of attacks.
### HTTP-Only and SSL Cookies
Tornado's cookie functionality piggybacks on Python's built-in `Cookie` module. As such, we can take advantage of some security features it provides. These security attributes are part of the HTTP cookie specification, and instruct the browser on how it may expose the value of the cookie to servers it connects to and scripts that it runs. For example, we could minimize the chances that a cookie's value is intercepted on the network by requiring that it be sent only over an SSL connection. We can also ask that the browser hide the cookie's value from JavaScript.
Setting the `secure` attribute on a cookie instructs the browser to transfer the cookie only over SSL connections. (It's a little confusing, but this is not the same as Tornado's secure cookies, which are more accurately described as _signed_ cookies.) Since Python version 2.6, the `Cookie` object also supports the `httponly` attribute. Including this attribute instructs the browser to make the cookie inaccessible to JavaScript, which can prevent cross-site scripting attacks from reading the cookie's value.
To enable these features, you can pass keyword arguments to the `set_cookie` and `set_secure_cookie` methods. For example, a secure, HTTP-only cookie (that's not signed by Tornado) could be sent with the call `self.set_cookie('foo', 'bar', httponly=True, secure=True)`.
Now that we've explored a number of strategies for protecting persistent data stored in cookies, we will look at another common attack vector. "Request Vulnerabilities" will look at a way to prevent malicious sites from sending forged requests to your application.
# Request Vulnerabilities
One of the main security vulnerabilities facing any web application is the Cross-Site Request Forgery, usually abbreviated CSRF or XSRF, and pronounced "sea surf." This exploit takes advantage of the fact that the browser automatically attaches a site's cookies to every request it makes to that site, which permits a malicious page to make unauthorized requests on behalf of a logged-in user. Let's look at an example.
## Anatomy of a Cross-Site Request Forgery
Let's say Alice is a regular customer of Burt's Books. When she's logged into her account on the online store, the website identifies her with a browser cookie. Now suppose an unscrupulous author, Melvin, wants to increase sales of his book. On a web forum that Alice frequents, he has posted an entry with an HTML image tag whose source is a URL that initiates a purchase in the online store. For example:
<img src="http://store.burts-books.com/purchase?title=Melvins+Web+Sploitz" />
Alice's browser will attempt to fetch the image source and include the legitimate cookies in the request, unaware that instead of a picture of a kitten, the URL initiated a purchase at the online store.
## Defending Against Request Forgeries
There are a number of precautions to take in order to prevent this sort of attack. The first requires some forethought on your part when developing your application. Any HTTP requests that cause side effects, like clicking a button to make a purchase, edit account settings, change a password, or delete a document, should use the HTTP `POST` method. This is good RESTful practice anyway, but it has the additional advantage of preventing trivial XSRF attacks like the malicious image we just saw. However, it doesn't go far enough: a malicious site could still make `POST` requests to your application through other tactics like HTML forms or the `XMLHTTPRequest` API. Protecting `POST` requests requires an additional strategy.
In order to prevent forged `POST` requests, we will require that each request include as one of its parameters a token that matches a corresponding value stored in a cookie. Our application will provide the token to pages we serve through a cookie header and a hidden HTML form element. When the form on a legitimate page is submitted, it will include the form value as well as the stored cookie. If the two match, our application considers the request valid.
Since third-party sites don't have access to this cookie data, they will be unable to include the token cookie with the request. This effectively prevents untrusted sites from making unauthorized requests. As we'll see, Tornado makes this easy for you, too.
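The server-side check itself reduces to a constant-time comparison of the two copies of the token. A minimal sketch of this double-submit pattern (illustrative only; Tornado's internal version differs in detail):

```python
import hmac
import uuid

def generate_xsrf_token():
    # A random, unguessable value; sent to the browser as a cookie and
    # embedded in each form as a hidden field.
    return uuid.uuid4().hex

def check_xsrf(cookie_token, form_token):
    # A third-party site cannot read our cookie, so it cannot produce a
    # matching form value; compare in constant time to avoid timing leaks.
    if not cookie_token or not form_token:
        return False
    return hmac.compare_digest(cookie_token, form_token)
```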
## Using Tornado's XSRF Protection
You can enable XSRF protection by including the `xsrf_cookies` parameter in the application's constructor:
settings = {
    "cookie_secret": "bZJc2sWbQLKos6GkHn/VB9oXwQt8S0R0kRvJ5/xJ89E=",
    "xsrf_cookies": True
}
application = tornado.web.Application([
    (r'/', MainHandler),
    (r'/purchase', PurchaseHandler),
], **settings)
With this application flag set, Tornado will reject `POST`, `PUT`, and `DELETE` requests that do not contain the correct `_xsrf` value as a request parameter. Tornado will handle the `_xsrf` cookies behind the scenes, but you must include the XSRF token in your HTML forms in order to authorize legitimate requests. To do so, simply include a call to the `xsrf_form_html` function in your template:
<form action="/purchase" method="POST">
    {% raw xsrf_form_html() %}
    <input type="text" name="title" />
    <input type="text" name="quantity" />
    <input type="submit" value="Check Out" />
</form>
### XSRF Tokens and AJAX Requests
AJAX requests also require an `_xsrf` parameter, but instead of having to explicitly include an `_xsrf` value when rendering the page, the script is able to query the browser for the value of the cookie on the client side. The following two functions transparently add the token value to AJAX `POST` requests. The first function fetches a cookie by name, while the second is a convenience function to add the `_xsrf` parameter to the data object passed to the `postJSON` function.
function getCookie(name) {
    var c = document.cookie.match("\\b" + name + "=([^;]*)\\b");
    return c ? c[1] : undefined;
}

jQuery.postJSON = function(url, data, callback) {
    data._xsrf = getCookie("_xsrf");
    jQuery.ajax({
        url: url,
        data: jQuery.param(data),
        dataType: "json",
        type: "POST",
        success: callback
    });
}
These precautions are a lot to think about, and Tornado's secure cookies support and XSRF protection eases some of the burden on application developers. The built-in security features are helpful, to be sure, but it's important to stay alert when thinking about your application's security. There are a number of online web application security references, and one of the more comprehensive collections of practical countermeasures is Mozilla's Secure Coding Guidelines.
# User Authentication
Now that we've seen how to set and retrieve cookies securely and understand the theory behind XSRF attacks, let's look at an example that demonstrates a simple user authentication system. In this section, we will build an application that asks a visitor for her username and stores it in a secure cookie to be retrieved later. Subsequent requests will recognize the returning visitor and display a page customized specifically for her. You'll learn about the `login_url` parameter and the `tornado.web.authenticated` decorator, which will eliminate some of the headaches normally involved in such an application.
## Example: Welcome Back
In this example, we will simply identify someone by a username stored in a secure cookie. When someone visits our page for the first time in a particular browser (or after her cookie expires), we present a page with a login form. The form is submitted as a `POST` request that is routed to `LoginHandler`. The body of the `post` method calls `set_secure_cookie()` to store the value submitted in the `username` request argument.
The Tornado application in Example 6-2 demonstrates the authentication functions we will discuss in this section. The `LoginHandler` class renders the login form and sets the cookie while the `LogoutHandler` class deletes it.
##### Example 6-2. Authenticating visitors: cookies.py
import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.options
import os.path

from tornado.options import define, options
define("port", default=8000, help="run on the given port", type=int)

class BaseHandler(tornado.web.RequestHandler):
    def get_current_user(self):
        return self.get_secure_cookie("username")

class LoginHandler(BaseHandler):
    def get(self):
        self.render('login.html')

    def post(self):
        self.set_secure_cookie("username", self.get_argument("username"))
        self.redirect("/")

class WelcomeHandler(BaseHandler):
    @tornado.web.authenticated
    def get(self):
        self.render('index.html', user=self.current_user)

class LogoutHandler(BaseHandler):
    def get(self):
        if (self.get_argument("logout", None)):
            self.clear_cookie("username")
            self.redirect("/")

if __name__ == "__main__":
    tornado.options.parse_command_line()
    settings = {
        "template_path": os.path.join(os.path.dirname(__file__), "templates"),
        "cookie_secret": "bZJc2sWbQLKos6GkHn/VB9oXwQt8S0R0kRvJ5/xJ89E=",
        "xsrf_cookies": True,
        "login_url": "/login"
    }
    application = tornado.web.Application([
        (r'/', WelcomeHandler),
        (r'/login', LoginHandler),
        (r'/logout', LogoutHandler)
    ], **settings)
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(options.port)
    tornado.ioloop.IOLoop.instance().start()
And the files in Examples 6-3 and 6-4 belong in the application's _templates/_ directory.
##### Example 6-3. Login form: login.html
<html>
    <head>
        <title>Please Log In</title>
    </head>
    <body>
        <form action="/login" method="POST">
            {% raw xsrf_form_html() %}
            Username: <input type="text" name="username" />
            <input type="submit" value="Log In" />
        </form>
    </body>
</html>
##### Example 6-4. Welcoming returning visitors: index.html
<html>
    <head>
        <title>Welcome Back!</title>
    </head>
    <body>
        <h1>Welcome back, {{ user }}</h1>
    </body>
</html>
## The authenticated Decorator
In order to use Tornado's authentication feature, we need to mark specific handlers as requiring a logged-in user. We can accomplish this using the `@tornado.web.authenticated` decorator. When we wrap a handler method with this decorator, Tornado will ensure that the method body will be called only if a valid user is found. Let's take a look at the `WelcomeHandler` from the example, which renders the _index.html_ template only to logged-in users.
class WelcomeHandler(BaseHandler):
    @tornado.web.authenticated
    def get(self):
        self.render('index.html', user=self.current_user)
Before the `get` method is called, the `authenticated` decorator makes sure that the `current_user` property has a value. (We'll discuss this property shortly.) If the `current_user` value is considered "falsy" (`None`, `False`, `0`, or `""`), any `GET` or `HEAD` requests will redirect the visitor to the URL specified in the `login_url` application setting. Additionally, a `POST` request without a valid user will return an HTTP response with a 403 (Forbidden) status.
If a valid user is found, Tornado will invoke the handler method as expected. The `authenticated` decorator relies on the `current_user` property and the `login_url` setting for its full functionality, which we'll look at next.
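Conceptually, the decorator behaves something like the sketch below. This is not Tornado's source code (the real decorator also preserves the original URL in a `next` parameter and raises `tornado.web.HTTPError`), but it captures the control flow:

```python
import functools

def authenticated(method):
    # Simplified stand-in for tornado.web.authenticated.
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        if not self.current_user:
            if self.request.method in ("GET", "HEAD"):
                # Redirect browsers to the configured login page.
                self.redirect(self.settings["login_url"])
                return
            # Other methods (e.g. POST) get a 403 Forbidden response.
            raise PermissionError("403: Forbidden")
        return method(self, *args, **kwargs)
    return wrapper
```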
### The current_user property
The request handler class has a `current_user` property (which is also available to any template the handler renders) that can be used to store the identity of the user authenticated for the current request. By default, its value is `None`. In order for the `authenticated` decorator to successfully identify an authenticated user, you must override the request handler's default `get_current_user()` method to return the current user.
The actual implementation is up to you, but in this case, we're simply retrieving the visitor's username from a secure cookie. Obviously you'd want to use a more robust technique, but for demonstration purposes, we will use the following method:
class BaseHandler(tornado.web.RequestHandler):
    def get_current_user(self):
        return self.get_secure_cookie("username")
While the example discussed here doesn't go into storing and retrieving a user's password or other credentials, the techniques described in this chapter can be extended to query a database for credentials with minimal additional effort.
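For instance, at login time the handler could look the username up in a database and check a salted password hash before calling `set_secure_cookie`. A sketch of the hashing half using only the standard library (the function names here are our own, not a Tornado API):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100000):
    # PBKDF2 with a random per-user salt; store both salt and digest.
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=100000):
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking where the digests differ.
    return hmac.compare_digest(digest, expected)
```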
### The login_url setting
Let's look at the application constructor briefly. Note that there's a new setting we pass to the application: the `login_url` is the address of our application's login form. If the `get_current_user` method returns a falsy value, a handler with the `authenticated` decorator will redirect the browser to this URL in order to login.
settings = {
    "template_path": os.path.join(os.path.dirname(__file__), "templates"),
    "cookie_secret": "bZJc2sWbQLKos6GkHn/VB9oXwQt8S0R0kRvJ5/xJ89E=",
    "xsrf_cookies": True,
    "login_url": "/login"
}
application = tornado.web.Application([
    (r'/', WelcomeHandler),
    (r'/login', LoginHandler),
    (r'/logout', LogoutHandler)
], **settings)
When Tornado builds the redirect URL, it will also append a `next` query string parameter, which contains the URL of the resource that initiated the redirect to the log-in page. You can use a line like `self.redirect(self.get_argument('next', '/'))` to redirect the user back to the referring page after login.
# Summing Up
We just saw two techniques to help secure your Tornado application as well as an example of how to implement user authentication with the `@tornado.web.authenticated` decorator. In Chapter 7, we'll look at how to extend the concepts we've discussed here in an application that authenticates against external web services like Facebook and Twitter.
# Chapter 7. Authenticating with External Services
The example in Chapter 6 showed how to use secure cookies and the `tornado.web.authenticated` decorator to implement a simple user authentication form. In this chapter, we will look at how to authenticate against third-party services. Popular web APIs like Facebook's and Twitter's use the OAuth protocol to securely verify someone's identity while allowing their users to maintain control over third-party access to their personal information. Tornado offers a number of Python mix-ins that help developers authenticate with external services, either with explicit support for popular services, or through general OAuth support. In this chapter, we'll explore two example applications that use Tornado's `auth` module: one that connects to Twitter and another that connects to Facebook.
# The Tornado auth Module
As a web application developer, you might want to allow your users to post updates to Twitter or read recent Facebook statuses directly through your application. Most social network and single sign-on APIs provide a standard workflow for authorizing users on your application. The Tornado `auth` module provides classes for OpenID, OAuth, OAuth 2.0, Twitter, FriendFeed, Google OpenID, the Facebook REST API, and the Facebook Graph API. Although you could implement handlers for a particular external service's authorization process on your own, Tornado's `auth` module provides a simplified workflow for developing applications that connect to any of the supported services.
## The Authorization Workflow
The workflow for each of these authentication methods is slightly different, but for the most part, they share the `authorize_redirect` and `get_authenticated_user` methods. The `authorize_redirect` method is used to redirect an unauthenticated user to the external service's authorization page. On the authorization page, the user signs into the service and grants your application access to his account. Typically, you will call the `get_authenticated_user` method when the user returns to your application with a temporary access code. Calling the `get_authenticated_user` method exchanges the temporary credentials provided by the authorization redirect process for a set of long-term credentials belonging to the user. The specific authentication classes for Twitter, Facebook, FriendFeed, and Google provide their own functions to make API calls to those services.
## Asynchronous Requests
One thing to note about the `auth` module is its use of Tornado's asynchronous HTTP requests. As we saw in Chapter 5, asynchronous HTTP requests allow the tornado server to handle incoming requests while a pending request is waiting for an outgoing request to return.
We'll take a brief look at how to use asynchronous requests and then dive into an example that uses them. Each handler method that initiates an asynchronous call must be preceded with the `@tornado.web.asynchronous` decorator.
# Example: Sign in With Twitter
Let's walk through an example that uses the Twitter API to authenticate a user. This application will redirect users who are not logged in to Twitter's authorization page, which prompts them for their Twitter username and password. Twitter then redirects the user back to the URL you specify on your application's settings page.
First, you must register a new application on Twitter. The Twitter Developers site has a "Create an app" link where you can get started, if you don't have an app already. Once you create your Twitter application, you will be assigned a consumer key and a consumer secret that identify your application to Twitter. You'll need to fill in those values in the appropriate places in the source code we show in this section.
Now let's take a look at the code in Example 7-1.
##### Example 7-1. View Twitter timeline: twitter.py
```python
import tornado.web
import tornado.httpserver
import tornado.auth
import tornado.ioloop

class TwitterHandler(tornado.web.RequestHandler, tornado.auth.TwitterMixin):
    @tornado.web.asynchronous
    def get(self):
        oAuthToken = self.get_secure_cookie('oauth_token')
        oAuthSecret = self.get_secure_cookie('oauth_secret')
        userID = self.get_secure_cookie('user_id')

        if self.get_argument('oauth_token', None):
            self.get_authenticated_user(self.async_callback(self._twitter_on_auth))
            return

        elif oAuthToken and oAuthSecret:
            accessToken = {
                'key': oAuthToken,
                'secret': oAuthSecret
            }
            self.twitter_request('/users/show',
                access_token=accessToken,
                user_id=userID,
                callback=self.async_callback(self._twitter_on_user)
            )
            return

        self.authorize_redirect()

    def _twitter_on_auth(self, user):
        if not user:
            self.clear_all_cookies()
            raise tornado.web.HTTPError(500, 'Twitter authentication failed')

        self.set_secure_cookie('user_id', str(user['id']))
        self.set_secure_cookie('oauth_token', user['access_token']['key'])
        self.set_secure_cookie('oauth_secret', user['access_token']['secret'])

        self.redirect('/')

    def _twitter_on_user(self, user):
        if not user:
            self.clear_all_cookies()
            raise tornado.web.HTTPError(500, "Couldn't retrieve user information")

        self.render('home.html', user=user)

class LogoutHandler(tornado.web.RequestHandler):
    def get(self):
        self.clear_all_cookies()
        self.render('logout.html')

class Application(tornado.web.Application):
    def __init__(self):
        handlers = [
            (r'/', TwitterHandler),
            (r'/logout', LogoutHandler)
        ]
        settings = {
            'twitter_consumer_key': 'cWc3 ... d3yg',
            'twitter_consumer_secret': 'nEoT ... cCXB4',
            'cookie_secret': 'NTliOTY5NzJkYTVlMTU0OTAwMTdlNjgzMTA5M2U3OGQ5NDIxZmU3Mg==',
            'template_path': 'templates',
        }
        tornado.web.Application.__init__(self, handlers, **settings)

if __name__ == '__main__':
    app = Application()
    server = tornado.httpserver.HTTPServer(app)
    server.listen(8000)
    tornado.ioloop.IOLoop.instance().start()
```
The templates in Examples 7-2 and 7-3 should be located in the application's _templates_ directory.
##### Example 7-2. Twitter timeline: home.html
```html
<html>
    <head>
        <title>{{ user['name'] }} ({{ user['screen_name'] }}) on Twitter</title>
    </head>
    <body>
        <div>
            <a href="/logout">Sign out</a>
        </div>
        <div>
            <img src="{{ user['profile_image_url'] }}" style="float:left" />
            <h2>About @{{ user['screen_name'] }}</h2>
            <p style="clear:both"><em>{{ user['description'] }}</em></p>
        </div>
        <div>
            <ul>
                <li>{{ user['statuses_count'] }} tweets.</li>
                <li>{{ user['followers_count'] }} followers.</li>
                <li>Following {{ user['friends_count'] }} users.</li>
            </ul>
        </div>
        {% if 'status' in user %}
        <hr />
        <div>
            <p>
                <strong>{{ user['screen_name'] }}</strong>
                <em>on {{ ' '.join(user['status']['created_at'].split()[:2]) }}
                at {{ user['status']['created_at'].split()[3] }}</em>
            </p>
            <p>{{ user['status']['text'] }}</p>
        </div>
        {% end %}
    </body>
</html>
```
##### Example 7-3. Twitter timeline: logout.html
```html
<html>
    <head>
        <title>Tornadoes on Twitter</title>
    </head>
    <body>
        <div>
            <h2>You have successfully signed out.</h2>
            <a href="/">Sign in</a>
        </div>
    </body>
</html>
```
Let's break this down piece by piece, starting with the _twitter.py_ program. In the `Application` class's `__init__` method, you'll notice two new keys in the settings dictionary: `twitter_consumer_key` and `twitter_consumer_secret`. These should be set to the values listed in your Twitter application's detailed settings page. Also note that we're declaring two handlers: a `TwitterHandler` and a `LogoutHandler`. Let's turn our attention to those for a minute.
The `TwitterHandler` class contains the bulk of our application's logic. The two things that are important to immediately note are that the class inherits from `tornado.auth.TwitterMixin`, which provides the Twitter functionality we will be using in this class, and that the `get` method is wrapped in the `@tornado.web.asynchronous` decorator, which we discussed in Chapter 5. Now let's look at the first asynchronous call:
```python
if self.get_argument('oauth_token', None):
    self.get_authenticated_user(self.async_callback(self._twitter_on_auth))
    return
```
When a user requests the root resource of our application, we first check to see whether the request includes an `oauth_token` query string parameter. If so, we treat the request as a callback from Twitter's authorization process.
We then use the `auth` module's `get_authenticated_user` method to exchange the temporary token we were given for the user's access token. This method expects a callback parameter, which, in this case, is the `self._twitter_on_auth` method. The callback is executed when the API request to Twitter returns, and we define it a little further down in our code.
If the `oauth_token` parameter was not found, we move on and test for the case where we've seen a particular user before.
```python
elif oAuthToken and oAuthSecret:
    accessToken = {
        'key': oAuthToken,
        'secret': oAuthSecret
    }
    self.twitter_request('/users/show',
        access_token=accessToken,
        user_id=userID,
        callback=self.async_callback(self._twitter_on_user)
    )
    return
```
This snippet looks for the `oauth_token` and `oauth_secret` cookies, which our application sets when Twitter hands us a valid user. If the values are set, we assemble an access token object with the key and the secret and use the `self.twitter_request` method to make a request to the `/users/show` resource of the Twitter API. Again, you'll notice the asynchronous callback, this time to the `self._twitter_on_user` method that we define later.
The `twitter_request` method expects a resource path as its first parameter, and additionally takes optional keyword arguments for `access_token`, `post_args`, and `callback`. The `access_token` parameter should be a dictionary with keys for `key`, which is the user's OAuth access token, and `secret`, the user's OAuth secret.
If the API call uses the `POST` method, the request arguments should be bundled in a dictionary passed to the `post_args` argument. Query string parameters are specified simply as additional keyword arguments in the method call. In the case of the `/users/show` API call, we are making an HTTP `GET` request, so there is no `post_args` argument, and the required `user_id` API parameter is passed as one of the keyword arguments.
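Tornado's actual implementation differs in its details (it also signs each request), but the split between query-string keyword arguments and `post_args` can be sketched with a hypothetical helper:

```python
import urllib.parse

def build_request(path, post_args=None, **query):
    # Hypothetical helper (not Tornado's internals): extra keyword
    # arguments become query-string parameters, while post_args, when
    # present, becomes a form-encoded POST body.
    url = "https://api.twitter.com/1" + path + ".json"
    if query:
        url += "?" + urllib.parse.urlencode(query)
    if post_args is None:
        return "GET", url, None
    return "POST", url, urllib.parse.urlencode(post_args)
```

Calling `build_request('/users/show', user_id='12345')` yields a `GET` request with `user_id` in the query string, while passing `post_args={'status': 'Hello'}` would produce a `POST` with a form-encoded body, mirroring how `twitter_request` treats its arguments.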
If none of the conditions we discussed above are met, it means the user is visiting our application for the first time (or has logged out or otherwise deleted her cookies) and we want to redirect her to the Twitter authorization page. This is done by calling `self.authorize_redirect()`.
```python
def _twitter_on_auth(self, user):
    if not user:
        self.clear_all_cookies()
        raise tornado.web.HTTPError(500, 'Twitter authentication failed')

    self.set_secure_cookie('user_id', str(user['id']))
    self.set_secure_cookie('oauth_token', user['access_token']['key'])
    self.set_secure_cookie('oauth_secret', user['access_token']['secret'])

    self.redirect('/')
```
The callback methods for our Twitter requests are quite straightforward. The `_twitter_on_auth` is called with a `user` parameter, which is a dictionary of user data for the authorized user. Our method implementation simply checks that we received a valid user and if so, sets the appropriate cookies. Once the cookies are set, we redirect the user to the root resource, which makes the request to the `/users/show` API method as discussed earlier.
```python
def _twitter_on_user(self, user):
    if not user:
        self.clear_all_cookies()
        raise tornado.web.HTTPError(500, "Couldn't retrieve user information")

    self.render('home.html', user=user)
```
The `_twitter_on_user` method is the callback we specified in the call to the `twitter_request` method. When Twitter responds with the user's profile information, our callback renders the _home.html_ template with data from the response. The template displays the user's profile image, screenname, and description, as well as some statistics about friend and follower counts and the user's most recent status update.
The `LogoutHandler` class simply clears any cookies we stored for a user of the application. It renders the _logout.html_ template to provide feedback to the user and to offer a sign-in link, which sends him back through Twitter's authorization redirect. That's all there is to it!
The Twitter application we just looked at simply displays user info for an authenticated user, but it demonstrates how Tornado's `auth` module makes developing social applications much easier. Building an application that can post to a user's Twitter stream is left as an exercise for the reader.
# Example: Facebook Authentication and the Graph API
The Facebook example is structurally very similar to the Twitter example we just saw. Facebook has two different API standards, the original REST API and the Facebook Graph API. While both are currently supported, the Graph API is the recommended way to develop new Facebook applications. Tornado supports both APIs in the `auth` module, but we will focus on the Graph API in this example.
In order to prepare for this example, you will need to sign in to Facebook's developer site and create a new application. You will be asked to name your application and asked to prove you are not a robot. In order to authorize users from your own domain, you will need to specify your application's domain name. Then click the "Website" box under the "Select how your app integrates with Facebook" heading. You will need to enter your site's URL here as well. For a more complete guide to setting up a Facebook app, the developer guides are a good start: _https://developers.facebook.com/docs/guides/web/_.
Once your application is set up, you will use the application ID and secret provided in the Basic Settings page to connect to the Facebook Graph API.
Recall from the previous section that the single sign-on workflow will direct a user to the Facebook platform to authorize the application, and Facebook will use an HTTP redirect to send the user back to your server with an authorization code. Once you receive the request with the code, you must request the authorization token which is used to identify the user making the API requests.
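Tornado's `FacebookGraphMixin` constructs this token request for you. Purely as an illustration, the code-for-token exchange URL that `get_authenticated_user` requests behind the scenes can be assembled like this (a sketch, not Tornado's internals; the endpoint and parameter names follow the Facebook Graph API):

```python
import urllib.parse

def token_exchange_url(client_id, client_secret, redirect_uri, code):
    # OAuth 2.0 "authorization code" exchange: trade the temporary
    # code from the redirect for a long-lived access token.
    params = {
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
        "code": code,
    }
    return ("https://graph.facebook.com/oauth/access_token?"
            + urllib.parse.urlencode(params))
```

Note that the `redirect_uri` must match the one used in the original authorization redirect, which is why both calls in Example 7-4 pass the same value.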
This example app will render the user's timeline and allow the user to update her Facebook status though our interface. Let's take a look at Example 7-4.
##### Example 7-4. Facebook Authentication: facebook.py
```python
import tornado.web
import tornado.httpserver
import tornado.auth
import tornado.ioloop
import tornado.options

from datetime import datetime

class FeedHandler(tornado.web.RequestHandler, tornado.auth.FacebookGraphMixin):
    @tornado.web.asynchronous
    def get(self):
        accessToken = self.get_secure_cookie('access_token')
        if not accessToken:
            self.redirect('/auth/login')
            return

        self.facebook_request(
            "/me/feed",
            access_token=accessToken,
            callback=self.async_callback(self._on_facebook_user_feed))

    def _on_facebook_user_feed(self, response):
        name = self.get_secure_cookie('user_name')
        self.render('home.html', feed=response['data'] if response else [], name=name)

    @tornado.web.asynchronous
    def post(self):
        accessToken = self.get_secure_cookie('access_token')
        if not accessToken:
            self.redirect('/auth/login')
            return

        userInput = self.get_argument('message')

        self.facebook_request(
            "/me/feed",
            post_args={'message': userInput},
            access_token=accessToken,
            callback=self.async_callback(self._on_facebook_post_status))

    def _on_facebook_post_status(self, response):
        self.redirect('/')

class LoginHandler(tornado.web.RequestHandler, tornado.auth.FacebookGraphMixin):
    @tornado.web.asynchronous
    def get(self):
        userID = self.get_secure_cookie('user_id')
        if self.get_argument('code', None):
            self.get_authenticated_user(
                redirect_uri='http://example.com/auth/login',
                client_id=self.settings['facebook_api_key'],
                client_secret=self.settings['facebook_secret'],
                code=self.get_argument('code'),
                callback=self.async_callback(self._on_facebook_login))
            return
        elif self.get_secure_cookie('access_token'):
            self.redirect('/')
            return

        self.authorize_redirect(
            redirect_uri='http://example.com/auth/login',
            client_id=self.settings['facebook_api_key'],
            extra_params={'scope': 'read_stream,publish_stream'}
        )

    def _on_facebook_login(self, user):
        if not user:
            self.clear_all_cookies()
            raise tornado.web.HTTPError(500, 'Facebook authentication failed')

        self.set_secure_cookie('user_id', str(user['id']))
        self.set_secure_cookie('user_name', str(user['name']))
        self.set_secure_cookie('access_token', str(user['access_token']))
        self.redirect('/')

class LogoutHandler(tornado.web.RequestHandler):
    def get(self):
        self.clear_all_cookies()
        self.render('logout.html')

class FeedListItem(tornado.web.UIModule):
    def render(self, statusItem):
        dateFormatter = lambda x: datetime.strptime(
            x, '%Y-%m-%dT%H:%M:%S+0000').strftime('%c')
        return self.render_string('entry.html', item=statusItem, format=dateFormatter)

class Application(tornado.web.Application):
    def __init__(self):
        handlers = [
            (r'/', FeedHandler),
            (r'/auth/login', LoginHandler),
            (r'/auth/logout', LogoutHandler)
        ]
        settings = {
            'facebook_api_key': '2040 ... 8759',
            'facebook_secret': 'eae0 ... 2f08',
            'cookie_secret': 'NTliOTY5NzJkYTVlMTU0OTAwMTdlNjgzMTA5M2U3OGQ5NDIxZmU3Mg==',
            'template_path': 'templates',
            'ui_modules': {'FeedListItem': FeedListItem}
        }
        tornado.web.Application.__init__(self, handlers, **settings)

if __name__ == '__main__':
    tornado.options.parse_command_line()

    app = Application()
    server = tornado.httpserver.HTTPServer(app)
    server.listen(8000)
    tornado.ioloop.IOLoop.instance().start()
```
We'll walk through the handlers in the order that a visitor would interact with them. When the root URL is requested, the `FeedHandler` will look for the `access_token` cookie. If the cookie is not present, the user will be directed to the `/auth/login` URL.
The login page uses the `authorize_redirect` method to redirect the user to Facebook's authorization dialog box, where the user will log in to Facebook if necessary, review the permissions the application is requesting, and approve the application. Upon clicking "Approve," she will be directed back to the application, to the URL specified in the `redirect_uri` parameter given in the call to `authorize_redirect`.
When returning from the Facebook authorization screen, the request to `/auth/login` will include a `code` parameter as a query-string argument. This code is a temporary token that is exchanged for more permanent credentials. If the `code` argument is found, the application will make a Facebook Graph API request to retrieve the authenticated user and store her user ID, full name, and the access token that will identify her when the application makes Graph API calls.
Once these values have been stored, the user is directed back to the root URL. Upon returning to the root page, the user will this time get a listing of recent Facebook feed messages. The application sees that an `access_token` cookie is set and uses the `facebook_request` method to query the Graph API for the user's feed. We pass the OAuth token to the `facebook_request` method, which also takes a callback argument—in Example 7-5, it is the `_on_facebook_user_feed` method.
##### Example 7-5. Facebook Authentication: home.html
```html
<html>
    <head>
        <title>{{ name }} on Facebook</title>
    </head>
    <body>
        <div>
            <a href="/auth/logout">Sign out</a>
            <h1>{{ name }}</h1>
        </div>
        <div>
            <form action="/" method="POST">
                <textarea rows="3" cols="50" name="message"></textarea>
                <input type="submit" value="Update Status" />
            </form>
        </div>
        <hr />
        {% for item in feed %}
            {% module FeedListItem(item) %}
        {% end %}
    </body>
</html>
```
When the callback is invoked with the user's feed response from Facebook, the application renders the _home.html_ template, which uses the `FeedListItem` UI module to render each of the entries in the list. At the top of the template, we render a form that posts to the `/` resource on our server with a `message` parameter. The application forwards this call to the Graph API to post an update.
To post the update, we use the `facebook_request` method again. This time, in addition to the `access_token` parameter, we include a `post_args` parameter with a dictionary of arguments that become the post body for the Graph request. When this call succeeds, we redirect the user back to the home page, which requests the updated timeline once again.
As you can see, the Facebook authentication classes in Tornado's `auth` module provide a number of helpful features for building Facebook applications. This is a great asset for rapid prototyping, but it also holds up well in production applications.
# Chapter 8. Deploying Tornado
Until now, we've been running only a single Tornado process in our examples for simplicity's sake. That made testing an application and making quick changes extremely easy, but it is not an appropriate deployment strategy. Deploying an application to a production environment presents new challenges, both in maximizing performance and in managing the individual processes. This chapter presents strategies to harden your Tornado application and increase request throughput, as well as tools that make deploying Tornado servers easier.
# Reasons for Running Multiple Tornado Instances
In most cases, assembling a web page is not a particularly computationally intensive process. The server needs to parse the request, fetch the appropriate data, and assemble the various components that make up the response. If your application makes blocking calls to query a database or access the filesystem, the server will not be able to respond to an incoming request while it is waiting for the call to complete. In these moments, the server hardware will have surplus CPU time while it waits for I/O operations to complete.
Given that most of the elapsed time responding to an HTTP request is spent with the CPU idle, we'd like to take advantage of this downtime and maximize the number of requests we can handle at a given time. That is, we'd like the server to be able to accept as many new requests as possible while the processes handling open requests are waiting for data.
As we saw in Chapter 5, when we discussed asynchronous HTTP requests, Tornado's nonblocking architecture goes a long way towards solving this problem for us. Recall that the asynchronous requests allow a Tornado process to fulfill incoming requests while waiting for an outbound request to return. The problem we run into, however, is when synchronous function calls block. If a database query or disk access blocks the Tornado process, that process is barred from answering new requests. The easiest way around this problem is to run multiple instances of the interpreter. Typically, you would want to use a reverse proxy like Nginx to distribute load across multiple Tornado instances.
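The effect is easy to simulate with a toy model in which `time.sleep` stands in for a blocking database query and threads stand in for independent Tornado processes (an illustration only, not Tornado code):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    time.sleep(0.1)  # simulated blocking I/O (e.g., a database query)
    return "response %d" % request_id

# A single blocking process serves four requests strictly in sequence.
start = time.time()
for i in range(4):
    handle_request(i)
serial_time = time.time() - start

# Four workers let the same four requests wait concurrently, the way
# four Tornado instances behind a proxy would.
start = time.time()
with ThreadPoolExecutor(max_workers=4) as pool:
    responses = list(pool.map(handle_request, range(4)))
concurrent_time = time.time() - start
```

The serial version takes roughly four times as long as the concurrent one; that idle waiting is exactly the capacity a pool of Tornado instances behind a proxy reclaims.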
# Using Nginx as a Reverse Proxy
A proxy server is a machine that relays a client's resource request to the appropriate server. Some network installations use proxy servers to filter and cache HTTP requests that machines on the local network make to the Internet. Since we will be running a number of Tornado instances on a range of TCP ports, we will use a proxy server in reverse: clients across the Internet will connect to a reverse proxy server, which will forward requests to any one host in a pool of Tornado servers behind the proxy. The proxy server is designed to be transparent to the client and yet pass valuable information like the original client's IP address and TCP scheme to the upstream Tornado node.
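On the upstream side, the application sees the proxy as the connecting peer, so it must read the real client address from the header the proxy sets. A minimal sketch, assuming the `X-Real-IP` header from the configuration in this section (trust it only when you control the proxy, since clients could otherwise forge it):

```python
def client_ip(headers, peer_addr):
    # Behind a reverse proxy the TCP peer is the proxy itself; the
    # proxy passes the original client address in X-Real-IP. Fall
    # back to the socket address when the header is absent.
    return headers.get("X-Real-IP", peer_addr)
```

The same pattern applies to the `X-Scheme` header when the application needs to know whether the original request arrived over HTTPS.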
Our server configuration is illustrated in Figure 8-1. The reverse proxy receives all incoming HTTP requests and distributes them evenly among the individual Tornado instances.
###### Figure 8-1. Tornado instances behind a reverse proxy server
## Basic Nginx Configuration
The listing in Example 8-1 is an example Nginx configuration. This Nginx setup listens for connections on port 80 and distributes those requests among the upstream hosts listed in the configuration file. In this case, we will assume the upstream hosts are listening for connections on their own port on the loopback interface.
##### Example 8-1. A bare-bones Nginx proxy configuration
```nginx
user nginx;
worker_processes 5;

error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
}

http {
    proxy_next_upstream error;

    upstream tornadoes {
        server 127.0.0.1:8000;
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
        server 127.0.0.1:8003;
    }

    server {
        listen 80;
        server_name www.example.org *.example.org;

        location /static/ {
            root /var/www/static;
            if ($query_string) {
                expires max;
            }
        }

        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_pass http://tornadoes;
        }
    }
}
```
###### Caution
This configuration example assumes your system uses _epoll_. There are often subtle differences between UNIX flavors. Some systems may use _poll_ , _/dev/poll_ , or _kqueue_ instead.
It may be helpful to walk through the order that requests are matched to either `location /static/` or `location /`. Nginx treats a literal string in the location directive as if it were a regular expression that starts with a beginning-of-line anchor and ends with any repetition of any characters. So `/` is treated as the expression `^/.*`. When Nginx matches against literal strings, more specific strings like `/static/` are checked against the request URL before more general strings like `/`. The Nginx documentation explains the matching order in greater detail.
Aside from some of the standard boilerplate, the important parts of this configuration file are the `upstream` directive and the proxy directives in the server configuration. The Nginx server listens for connections on port 80 and distributes those requests among the Tornado instances listed in the `upstream` server group. The `proxy_pass` directive specifies the URI of the server that is accepting forwarded requests. You can reference an `upstream` server group by name in the host portion of the `proxy_pass` URI.
Nginx will by default distribute requests in a simple round-robin fashion. Alternatively, you can choose to distribute requests based on the client's IP address, which (barring connection interruptions) will guarantee that requests originating from the same IP address will always be routed to the same upstream node. You can read more about this option in the `HTTPUpstreamModule` documentation.
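Enabling the IP-based strategy takes a single directive inside the upstream block:

```nginx
upstream tornadoes {
    # Route each client IP to the same upstream server.
    ip_hash;
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
}
```

This is useful when handlers keep per-client state in process memory, at the cost of a less even distribution of load.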
Also note the `location /static/` directive, which tells Nginx to serve files in the static directory directly instead of proxying the requests to Tornado. Nginx can serve static files much more efficiently than Tornado, so it makes sense to keep the unnecessary load off the Tornado processes.
## SSL Decryption with Nginx
Developers of applications that transfer personal information between the browser and server need to take special care to protect that information from falling into the wrong hands. With unsecured WiFi access as common as it is, users are susceptible to cookie hijacking attacks that compromise their accounts on popular social networking sites. In response, most major social web applications have made encrypted protocols for their sites either the default or a user-configurable option. Fortunately, we can use Nginx to decrypt SSL on incoming requests and distribute the decoded HTTP requests to the upstream servers.
Example 8-2 shows a `server` block that decrypts incoming HTTPS requests and forwards the decrypted traffic using the proxy directives we saw in Example 8-1.
##### Example 8-2. server block using SSL
```nginx
server {
    listen 443;
    ssl on;
    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/cert.key;

    default_type application/octet-stream;

    location /static/ {
        root /var/www/static;
        if ($query_string) {
            expires max;
        }
    }

    location = /favicon.ico {
        rewrite (.*) /static/favicon.ico;
    }

    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_pass http://tornadoes;
    }
}
```
This works exactly like the previous configuration, with the exception that Nginx will be listening for secure web requests on the standard HTTPS port 443. If you want to enforce an SSL connection, you can include a rewrite directive in the `server` block that listens for HTTP connections on port 80. See Example 8-3 for an example of that redirect.
##### Example 8-3. server block to redirect HTTP requests to a secure channel
```nginx
server {
    listen 80;
    server_name example.com;
    rewrite /(.*) https://$http_host/$1 redirect;
}
```
Nginx is a very robust tool, and we've barely scratched the surface of the possible configuration options that can be helpful for Tornado deployments. The Nginx documentation wiki is an excellent resource for additional information on installing and configuring this powerful software.
# Using Supervisor to Manage Tornado Processes
As we foreshadowed in "Using Nginx as a Reverse Proxy", we will be running many instances of our Tornado application to take advantage of modern multiprocessor and multicore server architecture. Most anecdotal reports from deployment teams recommend running one Tornado process per core. As we know, however, the plural of anecdote is not data, so your results may vary. In this section, we will discuss strategies for managing many Tornado instances on a UNIX system.
So far, we've run the Tornado server from the command line with a command like `python main.py --port=8000`. In long-term production deployments, however, this is unmanageable. Because we are running a separate Tornado process for each CPU core, there are several processes to monitor and control. The _supervisor_ daemon can help us with this task.
Supervisor is designed to launch at boot time and start the processes listed in its configuration file. Here, we will look at Supervisor configuration to manage the four Tornado instances we referenced as upstream hosts in our Nginx configuration. Typically _supervisord.conf_ contains global configuration directives, and will load additional configuration files from a _conf.d_ directory. Example 8-4 shows a configuration file for the Tornado processes we want to start.
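That include behavior is itself configured in _supervisord.conf_ with a stanza along these lines (the exact path varies by distribution):

```ini
[include]
files = /etc/supervisor/conf.d/*.conf
```

With this in place, dropping the _tornado.conf_ file from Example 8-4 into the _conf.d_ directory is all it takes to register the new programs.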
##### Example 8-4. tornado.conf
```ini
[group:tornadoes]
programs=tornado-8000,tornado-8001,tornado-8002,tornado-8003

[program:tornado-8000]
command=python /var/www/main.py --port=8000
directory=/var/www
user=www-data
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/tornado.log
loglevel=info

[program:tornado-8001]
command=python /var/www/main.py --port=8001
directory=/var/www
user=www-data
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/tornado.log
loglevel=info

[program:tornado-8002]
command=python /var/www/main.py --port=8002
directory=/var/www
user=www-data
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/tornado.log
loglevel=info

[program:tornado-8003]
command=python /var/www/main.py --port=8003
directory=/var/www
user=www-data
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/tornado.log
loglevel=info
```
In order for Supervisor to do anything useful, you will need at least one `program` section. In Example 8-4, we've declared four programs, named `tornado-8000` through `tornado-8003`. The program sections define the parameters for the individual command that Supervisor will run. A value for `command` is required, which will typically be the Tornado application with the `port` argument that we want to listen on. We also define additional settings for the program's working directory, effective user, and logfile; and it's helpful to set the `autorestart` and `redirect_stderr` settings to `true`.
In order to manage all the Tornado processes in aggregate, it's helpful to create a group. At the top of our example, we declare a group called `tornadoes` and list the individual programs that make up that group. Now, when we want to manage our Tornado app, we can reference all the constituent programs by the group name followed by the wildcard character. To restart the app, for example, we would issue the command `restart tornadoes:*` in the _supervisorctl_ utility.
Once you've installed and configured Supervisor, you can use _supervisorctl_ to manage the _supervisord_ process. To start your web application, you can instruct Supervisor to reread its configuration, and any programs or program groups whose configuration has changed will be restarted. You can also manually start, stop, and restart managed programs or check the overall system status.
```
supervisor> update
tornadoes: stopped
tornadoes: updated process group
supervisor> status
tornadoes:tornado-8000    RUNNING    pid 32091, uptime 00:00:02
tornadoes:tornado-8001    RUNNING    pid 32092, uptime 00:00:02
tornadoes:tornado-8002    RUNNING    pid 32093, uptime 00:00:02
tornadoes:tornado-8003    RUNNING    pid 32094, uptime 00:00:02
```
Supervisor works with your system's init process, and it should automatically register the daemon to launch at boot time. Program groups automatically come online when _supervisord_ starts up. By default, Supervisor will monitor the child processes and respawn any individual program that terminates unexpectedly. If you want to restart managed processes without regard to their exit codes, you can set the `autorestart` option to `true`.
Not only does Supervisor make managing many Tornado instances easier, it also provides some peace of mind that your Tornado servers will come back online after an unexpected service interruption.
# About the Authors
**Michael Dory** has spent the last decade studying the ways people communicate, and working to make their conversations better. As the co-founder and CTO of the social technology agency Socialbomb, he's worked with brands, agencies, and startups to build social applications and platforms that connect users with their friends, their devices, and the world around them.
**Allison Parrish** is an artist and programmer, currently residing in Brooklyn. She has 10 years of professional programming experience, with an emphasis on programming for the Web.
**Brendan Berg** has over five years of professional experience developing web and mobile applications. Previously, he developed mobile applications, cloud infrastructure, and APIs as Chief Software Architect at Socialbomb. Now he's focusing on creating software for the freelance ecosystem as the co-founder and CTO of Wurk Happy.
1. Basic Nginx Configuration
2. SSL Decryption with Nginx
3. Using Supervisor to Manage Tornado Processes
\section{Introduction}
Weather forecasting is of major importance as it affects the daily activities of fundamental fields such as agriculture, transportation and international commerce among others.
The ability to forecast the precipitation rates, the risk of flood or the likelihood of a hurricane can potentially lead to saving of lives and to the well-being of humans. Moreover, the change of the climate on earth has led to increasing research and world-wide efforts to halt the environmental and ecological consequences \cite{o2017ipcc}.
Traditional approaches to weather forecasting rely on priors such as the thermodynamic properties of the atmosphere \cite{holtslag1990high, niziol1995winter, campbell2005weather}, the statistical distribution of the data \cite{glahn1985statistical}, or ensemble learning that combines multiple models with different initial conditions \cite{gneiting2005weather}. This family of models belongs to the ``Numerical Weather Prediction'' (NWP) methodologies \cite{lorenc1986analysis} and usually relies on the processing power of supercomputers, making it resource heavy \cite{bauer2015quiet}. In addition to the high computational cost, it has been shown that the \textit{a priori} information about the data that constitutes the initial state is a source of errors in weather prediction \cite{tolstykh2005some}.
While traditional NWP methods aim at extracting useful dynamics from a model or at transferring information between models, recent data-driven approaches aim to simulate an entire system in order to predict its future state \cite{scher2018toward}. Machine-learning-based data-driven models have already been applied successfully in various domains such as healthcare, dynamical systems, biomedical signal analysis, and neuroscience \cite{mehrkanoon2012approximate,mehrkanoon2015learning,mehrkanoon2014parameter,mehrkanoon2019deep,mehrkanoon2018deep, abdellaoui2020deep,breiman2001random,webb2018deep,mehrkanoon2019cross}. Recent advances in machine learning models have increased the capability to automatically learn the underlying nonlinear complex patterns of weather dynamics \cite{mehrkanoon2019deep2,trebing2020smaat,trebing2020wind}. In particular, the combination of convolutional neural networks (CNNs) and long short-term memory (LSTM) networks has proved to be a successful deep learning approach for climate modeling and weather forecasting \cite{chen2019hybrid, fu2019multi}.
This paper presents three contributions. The first is an investigation of the unistream and multistream approaches as input representations for the neural networks. The second contribution aims at enriching the proposed networks with a self-attention mechanism. Finally, the third contribution addresses the explainability of the networks by using modern visualization techniques to determine the features and cities that contribute the most to the output predictions of a particular city or group of cities. It is of the utmost importance to gain interpretability from data-driven models, given that weather prediction is the basis of many real-life human decisions. This paper is organized as follows. A brief review of the existing machine learning methodologies for weather forecasting is given in Section \ref{sec:related_work}. A formal definition of the Conv-LSTM layer and the visualization techniques used are presented in Section \ref{sec:preliminaries}. Our proposed models are introduced in Section \ref{sec:proposed_models}. Furthermore, the dataset used is introduced in Section \ref{sec:data_desc}. The experimental results are reported in Section \ref{sec:results}. Finally, a discussion followed by the conclusion are drawn in Sections \ref{sec:discussion} and \ref{sec:conclusion}, respectively.
\section{Related Work}\label{sec:related_work}
Multiple approaches have recently been proposed to tackle weather forecasting using deep machine learning models. The author in \cite{mehrkanoon2019deep2} introduced convolutional neural networks to learn the underlying spatio-temporal patterns of weather data. This work used hourly past data from cities in the Netherlands, Belgium and Denmark to predict the temperature and wind speed of multiple cities. It has been shown that the convolutional operations benefit from a tensorial input representation, which improves the prediction capability.
In another work, a feed forward neural network has been used to investigate the volume of data needed as well as its recency to yield accurate weather predictions \cite{booz2019deep}.
In terms of data volume, it has been shown that more data consistently leads to better predictions. The effect of data recency remains unclear, as tuning it had no significant impact on the predictions.
In \cite{zhou2019forecasting}, the authors used deep learning to predict weather phenomena related to the heating of the air (e.g., heavy rain and thunderstorms), more commonly known as severe convective weather (SCW). A deep CNN was utilized and proved to yield superior results compared to traditional machine learning models such as support vector machines or random forests. Another model, \textit{DeepRain}, which used stacked ConvLSTM layers, was compared to linear regression models and reduced the RMSE by a large margin. This model used past radar data with a 6-min time resolution over a period of two years \cite{kim2017deeprain}.
Similarly to the previous work, the authors in \cite{sonderby2020metnet} used a multi-input network with past radar data to perform precipitation forecasting. However, a different approach was used since the regression problem was transformed into a multi-class classification of possible precipitation ranges. The model also made use of axial self-attention \cite{ho2019axial} as spatial aggregator. This model was able to outperform the system used in the National Oceanic and Atmospheric Administration (NOAA). In \cite{dueben2018challenges}, the different challenges of using deep learning for weather forecasting are addressed. In particular, it has been stated that while neural networks can be useful for short-term predictions, the need for domain knowledge is essential when tackling forecasting of longer term ranges.
\section{Preliminaries}\label{sec:preliminaries}
\subsection{Self-Attention}
The self-attention mechanism was first introduced by Vaswani et al. \cite{vaswani2017attention} to capture dependencies within a sequence of words. It relies on the dot product operation to assess the similarity of each word with respect to all the other words of a sequence. The query $Q$, key $K$, and value $V$ matrices are computed from the input sequence $I \in \mathbb{R}^{S \times E}$, where $S$ is the sequence length and $E$ is the embedding dimension of each input feature:
\begin{equation}
Q=IW_{q}, \quad K=IW_{k}, \quad \text{and} \quad V=IW_{v},
\end{equation}
where $W_q$, $W_k$, and $W_v$ are learnable weight matrices of a linear projection. The attention matrix, also called the head output, is then computed through the softmax of a scaled dot product as follows:
\begin{equation}\label{eq:att}
\mathrm{Attention}(Q,K,V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V,
\end{equation}
where $d_k$ is the dimension of the key vector $K \in \mathbb{R}^{1 \times d_k}$.
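For concreteness, the scaled dot-product attention of Eq. (\ref{eq:att}) for a single head can be sketched in a few lines of NumPy; the weight matrices below are random placeholders for the learned $W_q$, $W_k$ and $W_v$:

```python
import numpy as np

def self_attention(I, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention."""
    Q, K, V = I @ Wq, I @ Wk, I @ Wv          # (S, d_k) projections
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # (S, S) similarity matrix
    # Row-wise softmax turns similarities into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                        # (S, d_k) head output

rng = np.random.default_rng(0)
S, E, d_k = 4, 8, 8                           # sequence length, embedding dim
I = rng.standard_normal((S, E))
Wq, Wk, Wv = (rng.standard_normal((E, d_k)) for _ in range(3))
out = self_attention(I, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

Each row of the head output is thus a weighted average of the value vectors, with weights given by the softmax of the query-key similarities.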
\subsection{Conv-LSTM} \label{ssec:conv_lstm}
In this section we give an overview of the ConvLSTM layer that is used in the proposed models.
It is based on the LSTM cell and was introduced in \cite{xingjian2015convolutional} to address the issue of capturing the spatial structure of the data. In this model, the input gate $i_t$, the forget gate $f_t$, the output gate $o_t$, the hidden state $h_{t-1}$, the candidate cell state $\hat{C}_t$, the current cell state $C_t$ and the input $x_t$ are all 3D tensors. The first dimension of each tensor is the sequence length, while the last two dimensions represent the rows and columns. This model was first applied to weather data for precipitation nowcasting, outperforming models based on the LSTM alone.
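As a shape-level illustration, a single-channel ConvLSTM step can be sketched in NumPy as below; biases and the peephole connections of the original formulation are omitted, and the $3\times3$ gate kernels are random placeholders:

```python
import numpy as np

def conv_same(x, k):
    """2-D 'same' cross-correlation for single-channel maps."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    H, W = x.shape
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h, C, params):
    """One ConvLSTM time step: every gate uses a convolution instead of a
    matrix product, so the spatial layout (rows x columns) is preserved."""
    Wxi, Whi, Wxf, Whf, Wxo, Who, Wxc, Whc = params
    i = sigmoid(conv_same(x, Wxi) + conv_same(h, Whi))      # input gate
    f = sigmoid(conv_same(x, Wxf) + conv_same(h, Whf))      # forget gate
    o = sigmoid(conv_same(x, Wxo) + conv_same(h, Who))      # output gate
    C_hat = np.tanh(conv_same(x, Wxc) + conv_same(h, Whc))  # candidate state
    C_new = f * C + i * C_hat
    h_new = o * np.tanh(C_new)
    return h_new, C_new

rng = np.random.default_rng(4)
F, C_cities = 18, 18                       # feature x city map, as in the paper
params = [0.1 * rng.standard_normal((3, 3)) for _ in range(8)]
h = np.zeros((F, C_cities)); Cst = np.zeros((F, C_cities))
for t in range(10):                        # unroll over 10 lags
    x_t = rng.standard_normal((F, C_cities))
    h, Cst = convlstm_step(x_t, h, Cst, params)
print(h.shape)  # (18, 18)
```

The hidden state keeps the same spatial shape as the input map at every step, which is precisely what makes the layer suitable for the feature-by-city tensors used later in the paper.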
\subsection{Activation maximization} \label{ssec:activ_max}
Activation maximization is a visualization technique that looks for patterns that maximize a particular activation function inside a neural network \cite{mahendran2016visualizing}. Following the taxonomy of interpretability methods presented in \cite{molnar2020interpretable}, activation maximization is a post hoc method since it aims at understanding the model after the training. This methodology focuses on finding a new input that maximizes the activation of a neuron:
\begin{equation} \label{eq:act_max}
I^* = \argmax_{I}h_{l,z}(I),
\end{equation}
where $I$ is the input data of the network and $h$ is the activation function of the neuron $z$ in layer $l$. In our case, we want to find the input data that contributes the most to minimizing the error between the model prediction and the ground truth data. Since in our study weather element forecasting is reduced to a regression problem, we define $h$, a custom objective function, as the inverse of the mean squared error (MSE):
\begin{equation}\label{eq:objective}
h = \frac{1}{\frac{1}{n}\sum_{i=1}^{n} (y_i-\hat{y}_i)^2},
\end{equation}
where $y_i$ and $\hat{y}_i$ are the true measured data and the model prediction of a particular weather feature for the i\textsuperscript{th} target city, respectively. Here, $n$ denotes the number of target cities. The pseudocode of maximizing the $h$ score and getting the score map $I^*$ is provided in Algorithm \ref{alg:act_max}.
\begin{algorithm}[]
\SetAlgoLined
\KwIn{The number of iterations $s$\\
\Indp \Indp Pretrained model $m_p$\\
Sample input $I$\\
Input ranges $I_{min}$ and $I_{max}$\\
Learning rate $\eta$}
\KwOut{New input $I^*$}
\For{the number of iterations $s$}{
Perform a forward pass of $m_p$ on $I$ to get a prediction $\hat{y}$.\\
Use eq. (\ref{eq:objective}) to obtain the score $h$.\\
Apply $L_2$ normalization on the obtained score.\\
Compute the gradient $dI$ of the normalized score with respect to input $I$.\\
Update the input using: $I \leftarrow I + \eta \; dI$. \\
}
Obtain $I^*$ by clipping $I$ based on $I_{min}$ and $I_{max}$.
\caption{Score Maximization}
\label{alg:act_max}
\end{algorithm}
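The loop in Algorithm \ref{alg:act_max} can be sketched in NumPy for a linear stand-in model, chosen here so that the gradient of the score with respect to the input is available in closed form (a deep network would use automatic differentiation instead); the dimensions, learning rate and clipping range are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 12, 6                       # flattened input size, number of target cities
W = rng.standard_normal((n, d))    # stand-in for a pretrained (linear) model m_p
y = rng.standard_normal(n)         # ground-truth targets
I = rng.standard_normal(d)         # sample input to optimize
eta, steps = 0.05, 200

def mse(I):
    r = y - W @ I
    return (r @ r) / n

h0 = 1.0 / mse(I)                  # initial score h = 1/MSE
for _ in range(steps):
    r = y - W @ I                  # residual of the model prediction
    grad_mse = -(2.0 / n) * (W.T @ r)
    dI = -grad_mse / (mse(I) ** 2)     # gradient of h = 1/MSE w.r.t. the input
    dI /= np.linalg.norm(dI) + 1e-12   # normalize the update (cf. Algorithm 1)
    I = I + eta * dI               # gradient ascent on the score
I_star = np.clip(I, -3.0, 3.0)     # clip to the valid input range
print(f"score improved: {h0:.3f} -> {1.0 / mse(I):.3f}")
```

The resulting $I^*$ is the input pattern that (locally) maximizes the score, i.e., the input the model finds easiest to map to the ground-truth targets.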
\subsection{Occlusion Analysis}\label{ssec:occlusion_analysis}
The occlusion analysis is a simple yet effective way to determine which features contribute the most to a minimal error between the actual and predicted data. In this paper, we are concerned with two types of occlusion analysis: spatial and temporal. While the spatial occlusion analysis focuses on the important cities and weather features, the temporal analysis aims at determining the most important lags. The spatial occlusion analysis can focus either on the cities only, on the weather features only, or on a group of cities and features. All of these approaches rely on the same principle: computing the percentage change between a reference MSE, obtained from the prediction of an unmasked data sample and its corresponding ground truth target data, and a new MSE, computed from the prediction of a masked data sample and the same ground truth label. We compute this percentage change each time the mask is slid to a new location of the input. Whether we focus on the cities or the weather features on the one hand, or on a group of cities and features on the other hand, determines the shape of the mask (i.e., a vector or a matrix, respectively). We present in Algorithm \ref{alg:occ_ana} the pseudocode of the occlusion analysis when using a square matrix of size $p$ as a mask over the input dataset $\mathcal{X}$, for a particular target city.
\begin{algorithm}[!h]
\SetAlgoLined
\KwIn{Input dataset $\mathcal{X}=\{x_i\}_{i=1}^{k}$\\ \vspace{0.05in}
\Indp \Indp Target dataset $\mathcal{O}=\{o_i\}_{i=1}^{k}$ \\
Pretrained model $m_p$\\
Target city index $c$\\
Mask size $p$\\
Number of horizontal slidings $s_h$\\
Number of vertical slidings $s_v$}
\KwOut{Occlusion map $M_o$}
\For{the number of data samples}{
Perform a prediction of the sample $x_i$ using $m_p$.\\
Compute MSE\textsubscript i between the real target data $o_i$ and the model prediction for the $c$\textsuperscript{th} city. \\
\For{the number of horizontal slidings $s_h$}
{
\For{the number of vertical slidings $s_v$}
{
Mask the sample $x_i$ by the patch to get the masked sample $\tilde{x}_{i}$.\\
Make a new prediction using masked sample $\tilde{x}_{i}$.\\
Compute the $\widetilde{MSE}$\textsubscript{i} between $o_i$ and the recent model prediction for the $c$\textsuperscript{th} city.\\
Calculate the percentage change $\Delta$ between MSE\textsubscript i and $\widetilde{MSE}$\textsubscript{i}.\\
Store $\Delta$ in a tensor $\mathcal{M}$ for this particular patch location.\\
Displace the mask vertically by the distance $p$ over the sample $x_i$.
}
Displace the mask horizontally by the distance $p$ over the sample $x_i$.
}
}
For each mask location, compute an average of the stored $\Delta$ over all data samples from $\mathcal{M}$ to get the occlusion map $M_o$.
\caption{Occlusion analysis}
\label{alg:occ_ana}
\end{algorithm}
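The core of Algorithm \ref{alg:occ_ana} — slide a zero mask over the input, re-predict, and record the relative MSE change — can be sketched as follows for a single sample; the "pretrained model" is a random linear map standing in for $m_p$, and in the paper the resulting map is additionally averaged over many samples:

```python
import numpy as np

rng = np.random.default_rng(1)
F, C, n = 6, 6, 1                  # features, cities, number of target cities
p = 2                              # mask (patch) size
W = rng.standard_normal((n, F * C))  # stand-in for the pretrained model m_p
x = rng.standard_normal((F, C))    # one input sample
o = rng.standard_normal(n)         # its ground-truth target

def predict_mse(sample):
    pred = W @ sample.ravel()
    return float(np.mean((o - pred) ** 2))

ref_mse = predict_mse(x)           # reference MSE on the unmasked sample
occ_map = np.zeros((F // p, C // p))
for r in range(F // p):            # horizontal slidings
    for c in range(C // p):        # vertical slidings
        x_masked = x.copy()
        x_masked[r*p:(r+1)*p, c*p:(c+1)*p] = 0.0   # occlude one p x p patch
        new_mse = predict_mse(x_masked)
        # Percentage change w.r.t. the reference MSE: large positive values
        # mean the occluded features/cities were important.
        occ_map[r, c] = 100.0 * (new_mse - ref_mse) / ref_mse
print(occ_map.shape)  # (3, 3)
```

For the vector-mask variants described later, the patch simply degenerates to a full feature row ($1 \times F$) or a full city column ($C \times 1$).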
\section{Proposed Models}\label{sec:proposed_models}
The two proposed models aim at studying the impact of the input representation for the task of weather forecasting. To this end, these models use two types of input representation: a unistream tensorial representation and a multistream representation.
\subsection{Unistream model}\label{sec:shared_convlstm_model}
The input of the model is a tensor $\mathcal{T} \in \mathbb{R}^{L \times F \times C}$, where $L$ is the number of lags used, $F$ is the number of weather features, and $C$ is the number of cities. This tensor is fed to a ConvLSTM layer. As seen in Section \ref{ssec:conv_lstm}, the ConvLSTM layer processes this tensor while taking the number of lags $L$ as the sequence length. This layer is followed by batch normalization and a flatten operation, since the output of the ConvLSTM is tensorial. We then use two fully connected layers with a $ReLu$ activation function before the output layer.
\subsection{Multistream model}
This architecture uses a multistream approach. Each input stream uses a tensor $\mathcal{U} \in \mathbb{R}^{V \times F \times C}$, where $V$ is the number of lags used in each tensor. Since the two models, i.e. Unistream and Multistream, use the same total number of lags, $V$ evenly divides the total number of lags $L$ in each sample. Two ConvLSTM layers are used in each stream to capture the spatial and temporal features. The output of each stream is then concatenated along the channel axis. Similarly to the previous model, we use batch normalization and layer flattening. A dense layer with a $ReLu$ activation function is used before the final output layer. Some hyperparameters, such as the kernel size or the number of fully connected nodes, have been adapted so that the number of parameters is comparable to that of the previous model.
\subsection{Attention enriched models}
The two models presented above have also been augmented with a self-attention mechanism.
More specifically, the one-layer encoder block introduced in \cite{vaswani2017attention} has been incorporated.
In the tensorial input model, it is added right after the ConvLSTM layer. In the multistream approach, the attention block is inserted after the merging. For both approaches, a reshaping is necessary before the attention encoder block, since it is designed for matrices. It should also be noted that we use a single attention head.
\newline
Fig. \ref{fig:all_models} shows the schemas of the proposed models. The output of these models is a vector $o$ of length $n$, i.e. the number of target cities. Each value in this vector represents the same target feature for one of the target cities. The hyperparameters of these models were selected so that they all have a comparable number of learnable parameters.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.2]{figures/2models_att_matrix_lines.pdf}
\caption{Schemas of (a) the Unistream and (b) Multistream models.}
\label{fig:all_models}
\end{figure}
\section{Data Description}\label{sec:data_desc}
The dataset used has been collected from Weather Underground and includes 18 cities across Europe and 18 weather features, covering a period of 15 years from May 2005 to April 2020. The data is made publicly available\footnote{\url{https://github.com/IsmailAlaouiAbdellaoui/weather-forecasting-explanable-recurrent-convolutional-NN}}. Its time resolution is daily, and the weather features include, for instance, the temperature, wind speed, condition and sea level pressure. Table \ref{tab:features_explanation} presents the list of all features used. At each time step $t$, a data sample is represented by a matrix $M_t\in \mathbb{R}^{F \times C}$, where $F$ is the number of features and $C$ is the number of cities. Therefore the whole dataset is a tensor $\mathcal{D} \in \mathbb{R}^{L \times F \times C}$, where $L$ is the total number of days used. Fig. \ref{fig:map_high_res} shows a map of the different cities contained in the dataset.
\begin{table}[!t]
\begin{center}
\caption{Features used in the dataset.}
\resizebox{\columnwidth}{!}{%
\centering
\begin{tabular}{c >{\centering\arraybackslash}m{0.5\textwidth}}
\Xhline{3\arrayrulewidth}
\multirow{2}{*}{\textbf{Feature name}} & \multirow{2}{*}{\textbf{Remarks}} \\
& \\
\Xhline{3\arrayrulewidth}
Highest temperature (\degree F) & -\\
Lowest temperature (\degree F) & -\\
Average temperature (\degree F) & -\\
Dew point (\degree F) & - \\
Highest dew point (\degree F) & - \\
Lowest dew point (\degree F) & - \\
Average dew point (\degree F) & - \\
Maximum wind speed (mph) & -\\
Visibility (mi) & Discrete value expressed in miles, measuring the distance at which an object can be clearly distinguished\\
Sea level pressure (Hg) & Measured in inches of mercury\\
Observed temperature (\degree F) & Temperature in Fahrenheit observed at 10 am\\
Observed dew point (\degree F) & Dew point in Fahrenheit observed at 10 am\\
Humidity (\%) & -\\
Wind direction & Discrete values indicating 16 possible directions of the wind \\
Wind speed (mph) & -\\
Wind gust (mph) & -\\
Pressure (in) & -\\
Condition & 21 possible discrete values that describe the overall weather state (cloudy, rainy, fog ...)\\
\Xhline{3\arrayrulewidth}
\label{tab:features_explanation}
\end{tabular}
}
\end{center}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[height=3.3in,width=\columnwidth]{figures/map_high_res_ESRI_Imagery_World_2D.pdf}
\caption{Map showing the 18 cities used in the dataset.}
\label{fig:map_high_res}
\end{figure}
\section{Experimental Results}\label{sec:results}
\subsection{Data Preprocessing}\label{sec:Preprocessing}
The weather data is first scaled by means of Eq. (\ref{eq:minmax}). In this way, for each feature and city, we take the values corresponding to every date and scale them to the range $[0,1]$.
\begin{equation}\label{eq:minmax}
x_{scaled} = \frac{x - min(c_{ij})}{max(c_{ij}) - min(c_{ij})},
i \in [1,F], j \in [1,C],
\end{equation}
where $c_{ij} \in \mathbb{R}^{L}$ is the column vector containing the values of the i\textsuperscript{th} feature of the j\textsuperscript{th} city over all dates.
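Since each $c_{ij}$ is a time series over the whole dataset, the scaling amounts to a min-max normalization along the time axis of the tensor $\mathcal{D}$; in NumPy, with random values standing in for the real measurements:

```python
import numpy as np

rng = np.random.default_rng(2)
L, F, C = 100, 18, 18              # days, weather features, cities
D = rng.uniform(-20, 40, size=(L, F, C))   # stand-in for the dataset tensor

# Min and max of every feature/city column c_ij, taken over time (axis 0).
d_min = D.min(axis=0, keepdims=True)
d_max = D.max(axis=0, keepdims=True)
D_scaled = (D - d_min) / (d_max - d_min)   # applies the equation to all i, j at once
print(D_scaled.min(), D_scaled.max())  # 0.0 1.0
```

Broadcasting applies the per-column minima and maxima to every date simultaneously, so no explicit loop over features or cities is needed.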
\subsection{Experimental setup}
For all the experiments, the following six cities have been used as target cities: Paris, Luxembourg, London, Brussels, Frankfurt and Rotterdam. We also selected two target features: the wind speed, in miles per hour, and the average temperature of the day, in degrees Fahrenheit. It should be noted that for each training instance, the output vector corresponds to only one type of feature, for all the target cities. Moreover, we performed experiments for 2, 4 and 6 days ahead, using 10 lags.
All the experiments used $90\%$ of the data for training and validation, while the remaining $10\%$ was used for testing. The Adam method \cite{kingma2014adam} is used to minimize the mean squared error (MSE), with a learning rate of $10^{-4}$ and a batch size of $16$ for all of the proposed models.
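Constructing the supervised samples from the daily tensor is a sliding-window operation: with 10 lags and a horizon of $d$ days ahead, each input is a block of 10 consecutive days and the label is the target feature of the target cities $d$ days after the last lag. A sketch of how such samples could be built (the feature and city indices are illustrative):

```python
import numpy as np

def make_windows(D, lags, horizon, feat_idx, city_idx):
    """Build (inputs, targets) pairs from a (days, F, C) tensor."""
    X, y = [], []
    for t in range(len(D) - lags - horizon + 1):
        X.append(D[t:t + lags])                        # `lags` past days
        y.append(D[t + lags + horizon - 1, feat_idx, city_idx])
    return np.stack(X), np.stack(y)

rng = np.random.default_rng(3)
D = rng.standard_normal((365, 18, 18))     # one year of daily data
X, y = make_windows(D, lags=10, horizon=6, feat_idx=2, city_idx=[0, 3, 5])
print(X.shape, y.shape)  # (350, 10, 18, 18) (350, 3)
```

A chronological split such as `split = int(0.9 * len(X))` would then reserve the last $10\%$ of windows for testing.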
\subsection{Results} \label{ssec:results}
The obtained mean squared errors (MSE) of the two proposed models for wind speed as well as average temperature prediction for the six target cities over 2, 4, and 6 days ahead are tabulated in Table \ref{tab:results_windspeed_mse} and Table \ref{tab:results_avgtemp_mse}, respectively. The results of the two models with the incorporated attention mechanism are also tabulated in these tables. For every city and days-ahead setting, the MSE of the best model is underlined. It should be noted that the reported MSEs are calculated after descaling the models' predictions.
\begin{table}[!t]
\centering
\caption{The MSE comparison of the four models, for 2, 4, and 6 days ahead \textbf{wind speed} prediction.}
\label{tab:results_windspeed_mse}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l l c c c c}\Xhline{3\arrayrulewidth}
\multirow{2}{*}{\textbf{Days ahead}}&
\multirow{2}{*}{\textbf{City}}&
\multirow{2}{*}{\textbf{Unistream}}& \multirow{2}{*}{\textbf{Att-Unistream}}& \multirow{2}{*}{\textbf{Multistream}}& \multirow{2}{*}{{\textbf{Att-Multistream}}}\\%\hline
& & & & & \\\Xhline{3\arrayrulewidth}
2 & Luxembourg&28.85&\underline{22.96}&26.94&23.05\\
& Rotterdam&48.04&\underline{38.22}&44.85&38.38\\
& Frankfurt&41.11&\underline{22.70}&38.38&32.84\\
& Brussels&36.78&\underline{29.26}&34.34&29.38\\
& London&30.75&\underline{24.46}&28.71&24.56\\
& Paris&25.25&\underline{20.09}&23.58&20.16\\ \hline
4 & Luxembourg&32.27&25.82&29.85&\underline{25.35}\\
& Rotterdam&53.73&42.98&49.69&\underline{42.20}\\
& Frankfurt&45.97&36.78&42.52&\underline{36.11}\\
& Brussels&41.14&32.91&38.05&\underline{32.31}\\
& London&34.39&27.51&31.80&\underline{27.01}\\
& Paris&28.24&22.59&26.12&\underline{22.18}\\ \hline
6 & Luxembourg&38.22&26.34&30.63&\underline{25.36}\\
& Rotterdam&63.64&43.87&51.00&\underline{42.23}\\
& Frankfurt&54.45&37.53&43.64&\underline{36.14}\\
& Brussels&48.72&33.58&39.05&\underline{32.33}\\
& London&40.73&28.07&32.64&\underline{27.03}\\
& Paris&33.45&23.06&26.81&\underline{22.20}\\ \hline
\Xhline{3\arrayrulewidth}
\end{tabular}
}
\end{table}
From Tables \ref{tab:results_windspeed_mse} and \ref{tab:results_avgtemp_mse}, one can observe that the models with attention are often the most successful ones. Indeed, for both temperature and wind speed prediction, the attention-enriched models always have an edge over their counterparts without attention.
Considering the models without attention, the multistream approach is the dominant one for both weather features, consistently outperforming the unistream model. Among the models with attention, comparing over all the results yields no clear winner.
However, if we perform the same analysis within each weather feature, a different pattern emerges: the attention-enriched unistream model is the best one for predicting the temperature. It should also be noted that this unistream model outperforms its multistream counterpart for short time horizons (e.g., 2 days ahead). The city for which the wind speed is easiest to predict is Paris, while the average temperature of Brussels yields the minimum MSE among all the target cities. Fig. \ref{fig:actual_vs_pred} shows the real data versus its prediction for 2, 4 and 6 days ahead using the Att-Multistream model and for the cities of Paris and Brussels.
\begin{table}[!t]
\centering
\caption{The MSE comparison of the four models, for 2, 4, and 6 days ahead \textbf{average temperature} prediction.}
\label{tab:results_avgtemp_mse}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l l c c c c }\Xhline{3\arrayrulewidth}
\multirow{2}{*}{\textbf{Days ahead}}&
\multirow{2}{*}{\textbf{City}}&
\multirow{2}{*}{\textbf{Unistream}}& \multirow{2}{*}{\textbf{Att-Unistream}}& \multirow{2}{*}{\textbf{Multistream}}& \multirow{2}{*}{{\textbf{Att-Multistream}}}\\%\hline
& & & & & \\\Xhline{3\arrayrulewidth}
2 & Luxembourg&58.40&\underline{40.83}&46.35&47.46\\
& Rotterdam&52.89&\underline{37.14}&41.98&43.23\\
& Frankfurt&53.73&\underline{37.68}&42.65&43.89\\
& Brussels&42.87&\underline{30.15}&34.02&35.18\\
& London&44.80&\underline{31.48}&35.56&36.69\\
& Paris&53.15&\underline{37.27}&42.19&43.41\\ \hline
4 & Luxembourg&67.71&53.88&59.41&\underline{42.16}\\
& Rotterdam&61.32&48.91&53.81&\underline{38.19}\\
& Frankfurt&62.29&49.67&54.66&\underline{38.80}\\
& Brussels&49.70&39.70&43.61&\underline{30.97}\\
& London&51.94&41.46&45.58&\underline{32.36}\\
& Paris&61.62&49.13&54.07&\underline{38.29}\\ \hline
6 & Luxembourg&75.87&\underline{54.91}&65.96&55.76\\
& Rotterdam&68.72&\underline{49.84}&59.74&50.55\\
& Frankfurt&69.80&\underline{50.60}&60.68&51.35\\
& Brussels&55.69&\underline{40.43}&48.41&40.99\\
& London&58.20&\underline{42.23}&50.60&42.84\\
& Paris&69.05&\underline{50.06}&60.03&50.80\\ \hline
\Xhline{3\arrayrulewidth}
\end{tabular}
}
\end{table}
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=\columnwidth]{figures/Paris_model4_att_ws.pdf}}
\subfloat[]{\includegraphics[width=\columnwidth]{figures/Brussels_model2_att_temp.pdf}}
\caption{Actual vs. prediction of the average temperature (a) and wind speed (b) using the two proposed models.}
\label{fig:actual_vs_pred}
\end{figure*}
Concerning the training time, we observed that it keeps growing as we add the attention mechanism and use the multistream approach. Interestingly, the multistream architecture that incorporates attention takes more time to train despite having fewer trainable parameters. We should also note that the attention mechanism has a larger impact on the training time of the unistream architecture.
\section{Discussion} \label{sec:discussion}
An effective way to understand which input features and cities affect the outputs is using the techniques explained in sections \ref{ssec:activ_max} and \ref{ssec:occlusion_analysis}. In this section, we present the results of spatial and temporal occlusion analysis and score maximization techniques for the two proposed models. In addition, for this analysis the models have been trained to predict six days ahead.
\subsection{Occlusion analysis}
In order to determine which features contribute the most to a minimal error between the actual data and the prediction for a particular city, we first compute the MSE between the prediction of a sample and the actual data, which serves as a reference MSE. We then use a mask vector $m_f \in \mathbb{R}^{1 \times F}$ that is slid across all the feature rows of the input data. Every time $m_f$ masks a row, we make an inference, compute the corresponding MSE, and obtain the percentage change with respect to the reference MSE. We repeat this process along all feature rows to obtain all the percentage changes for that particular data sample. The masked feature row that leads to the biggest MSE increase corresponds to the most important feature. Moreover, we repeat the same computations using multiple data samples and average the percentage changes for each feature row. The same process applies to the rest of the cities in order to obtain their corresponding important features. On the other hand, to determine the most important cities, we use the same algorithm, but with a mask vector $m_c \in \mathbb{R}^{C \times 1}$ that is slid across all the city columns of the input data.
Fig. \ref{fig:occ_analysis_m1} shows the most important features and cities of the Att-Unistream model after performing the occlusion analysis. This model was trained on the 6 target cities described above with the average temperature as target feature. Fig. \ref{fig:occ_analysis_m1} (a) shows the most important features for each target city, while Fig. \ref{fig:occ_analysis_m1} (b) presents the most relevant cities for each target city. Fig. \ref{fig:occ_analysis_m1} (a) highlights the importance of the dew point for the temperature, since the plots agree around this weather feature. This finding makes sense, as the dew point is the temperature to which air must be cooled for its water vapor to condense. Concerning the cities, Brussels plays a major role in the predictions. This finding is reasonable since we can observe from Fig. \ref{fig:map_high_res} that Brussels is the centroid of the cluster formed by the target cities. Indeed, these models are multioutput, and occluding the city of Brussels affects the prediction of all the target cities.
\begin{figure*}[!htbp]
\centering
\subfloat[]{{\includegraphics[scale=0.067]{figures/important_features_m2_att_temp.pdf}}}
\subfloat[]{{\includegraphics[scale=0.067]{figures/important_cities_m2_att_temp.pdf}}}
\caption{Occlusion analysis visualization of the Att-Unistream model showing the most relevant weather features (a) and cities (b) for each target city. The model was trained to predict the 6 days ahead average temperature.}
\label{fig:occ_analysis_m1}
\end{figure*}
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.065]{figures/important_features_m4_att_wind_speed.pdf}
\caption{Occlusion analysis visualization of the Att-Multistream model showing the most relevant weather features for each target city. The model was trained to predict the 6 days ahead wind speed.}
\label{fig:occ_analysis_m3}
\end{figure}
Fig. \ref{fig:occ_analysis_m3} presents the same information as Fig. \ref{fig:occ_analysis_m1}, but for the Att-Multistream model, with the wind speed as target feature. Interestingly, the condition and, to a lesser extent, the wind gust and pressure play a role in these predictions.
Beyond identifying individual important features or cities, occlusion analysis can also determine whether a group of cities or features is important. We applied the same process described above, but instead of vector masks we used square patch matrices, slid along both the row and column directions without overlapping. Fig. \ref{fig:occlusion_squares} shows the visualization of this occlusion analysis, where the reference MSE corresponds to the error between the prediction and the actual data for the city of Paris. The top and bottom rows show this analysis for the Att-Unistream and Att-Multistream models, respectively. These models were trained for a 6 days ahead prediction of the temperature and the wind speed. The first, second, and third columns use patch sizes of 1$\times$1, 2$\times$2, and 3$\times$3, respectively. Brighter colors correspond to more important features and cities. A first observation is that the occlusions are decisive about the important features and cities, since in each occlusion map only one specific mask region is brighter than the others. The important features for predicting the temperature, shown in subfigures (a), (b), and (c), are the temperature and the pressure. These findings complement the outcomes shown in subfigure (a) of Fig. \ref{fig:occ_analysis_m1}. Subfigures (d), (e), and (f) likewise reveal complementary information compared to the analysis in Fig. \ref{fig:occ_analysis_m3}. Among the weather features important for predicting the wind speed are the maximum wind speed and the wind speed itself. Moreover, Brussels, as well as cities near the sea such as Barcelona or Amsterdam, are critical for the wind speed prediction.
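The patch-based variant can be sketched as follows: a square patch is slid over the (cities $\times$ features) grid without overlap, and the resulting MSE increase is written back into an importance map. As above, \texttt{model\_fn} and the input layout are illustrative assumptions.

```python
import numpy as np

def patch_occlusion_map(model_fn, x, y_true, patch=2):
    """Spatial occlusion over a (cities x features) input grid.

    model_fn : stand-in for the trained model (hypothetical interface).
    x        : input of shape (n_rows, n_cols).
    patch    : side length of the square occlusion patch.
    """
    ref_mse = np.mean((model_fn(x) - y_true) ** 2)  # reference error
    n_rows, n_cols = x.shape
    heat = np.zeros((n_rows, n_cols))
    # Slide the patch over rows and columns without overlapping.
    for r in range(0, n_rows - patch + 1, patch):
        for c in range(0, n_cols - patch + 1, patch):
            x_occ = x.copy()
            x_occ[r:r + patch, c:c + patch] = 0.0  # occlude the patch
            mse = np.mean((model_fn(x_occ) - y_true) ** 2)
            # A larger increase over the reference error marks a more
            # important region of the input grid.
            heat[r:r + patch, c:c + patch] = mse - ref_mse
    return heat
```

With patch sizes 1, 2, and 3 this produces the three columns of Fig. \ref{fig:occlusion_squares}.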
\begin{figure}[!t]
\centering
\subfloat[]{{\includegraphics[scale=0.18]{figures/model2_att_occ_analysis_temp_paris_100samples_patchsize1_6.pdf}}}
\subfloat[]{{\includegraphics[scale=0.18]{figures/model2_att_occ_analysis_temp_paris_100samples_patchsize2_6.pdf}}}
\subfloat[]{{\includegraphics[scale=0.18]{figures/model2_att_occ_analysis_temp_paris_100samples_patchsize3_6.pdf}}}
\\\vskip 0.5pt plus 0.25fil
\subfloat[]{{\includegraphics[scale=0.18]{figures/model4_att_occ_analysis_wind_speed_paris_100samples_patchsize1_6.pdf}}}
\subfloat[]{{\includegraphics[scale=0.18]{figures/model4_att_occ_analysis_wind_speed_paris_100samples_patchsize2_6.pdf}}}
\subfloat[]{{\includegraphics[scale=0.18]{figures/model4_att_occ_analysis_wind_speed_paris_100samples_patchsize3_6.pdf}}}
\caption{Results of the spatial occlusion analysis using different mask sizes. (a,d): mask size 1$\times$1. (b,e): mask size 2$\times$2. (c,f): mask size of 3$\times$3. For both Att-Unistream and Att-Multistream models, the target city is Paris. The Att-Unistream and Att-Multistream models were trained to predict the temperature and wind speed, respectively.}
\label{fig:occlusion_squares}
\end{figure}
\begin{figure}[!t]
\centering
\subfloat[]{{\includegraphics[width=\linewidth]{figures/temporal_occlusion_m2_att.pdf}}}
\\\vskip 0.5pt plus 0.25fil
\subfloat[]{{\includegraphics[width=\linewidth]{figures/temporal_occlusion_m4_att.pdf}}}
\caption{Temporal occlusion analysis visualization of the Att-Unistream (a) and Att-Multistream (b) models. These models were trained to perform the 6 days ahead prediction of temperature and wind speed, respectively.}
\label{fig:temporal_occ_analysis}
\end{figure}
\begin{figure}[!t]
\centering
\subfloat[]{{\includegraphics[width=0.3\linewidth]{figures/model2_att_act_max_temperature_seed_xtrainsample_input_0.pdf}}}
\subfloat[]{{\includegraphics[width=0.3\linewidth]{figures/model2_att_act_max_temperature_seed_xtrainsample_input_4.pdf}}}
\subfloat[]{{\includegraphics[width=0.3\linewidth]{figures/model2_att_act_max_temperature_seed_xtrainsample_input_9.pdf}}}
\caption{Score maximization maps of the Att-Unistream model for the 1\textsuperscript{st} (a), 5\textsuperscript{th} (b), and 10\textsuperscript{th} (c) lags. This model was trained to predict the wind speed.}
\label{fig:act_max_map_nsconv+lstm}
\end{figure}
Fig. \ref{fig:temporal_occ_analysis} displays the visualization of the temporal occlusion analysis, where brighter values denote more relevant lags. As previously mentioned, this approach aims at finding which lag contributes the most to a minimal error between the actual and predicted data. The leftmost region corresponds to the oldest lags, while the rightmost region corresponds to the most recent ones. For the Att-Unistream model, Fig. \ref{fig:temporal_occ_analysis} (a), the emphasis is on the recent lags, with the exception of the cities of London and Rotterdam. For the Att-Multistream model in Fig. \ref{fig:temporal_occ_analysis} (b), however, the older lags consistently yield the highest importance, with the exception of Rotterdam. Interestingly, the 1\textsuperscript{st} and 2\textsuperscript{nd} lags appear to be the most important for the cities of Paris, Luxembourg, Brussels, and Frankfurt.
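Temporal occlusion follows the same recipe along the time axis: one lag (time step) is masked at a time and the resulting MSE increase is recorded. The \texttt{model\_fn} callable and the (lags $\times$ features) layout are again assumptions for the sketch.

```python
import numpy as np

def temporal_occlusion(model_fn, x, y_true):
    """Occlude one lag at a time in an input of shape (n_lags, n_features).

    Returns, per lag, the increase in MSE caused by masking that lag:
    larger values indicate more important lags.
    """
    ref_mse = np.mean((model_fn(x) - y_true) ** 2)
    scores = np.zeros(x.shape[0])
    for lag in range(x.shape[0]):
        x_occ = x.copy()
        x_occ[lag, :] = 0.0          # zero out every feature at this lag
        mse = np.mean((model_fn(x_occ) - y_true) ** 2)
        scores[lag] = mse - ref_mse  # importance of this lag
    return scores
```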
\subsubsection{Score maximization}
Here we show the visualization of the score maximization introduced in Section \ref{ssec:activ_max}. Fig. \ref{fig:act_max_map_nsconv+lstm} shows the score maximization maps displaying the relevant weather features for all the cities of the Att-Unistream model, trained for the six days ahead prediction of the daily average temperature. Higher pixel values in the score maps denote a more important city-feature pair. No clear pattern emerges, except for one feature that is consistently highlighted across the three lags: the visibility. Moreover, the pressure also seems important, but to a lesser extent.
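The core of score maximization is gradient ascent on the input: starting from a random input, one repeatedly moves in the direction that increases the model's output score, and the resulting input highlights the city-feature pairs the model is most sensitive to. The sketch below assumes access to the gradient of the score with respect to the input (\texttt{grad\_fn}); in practice this would come from the trained network's automatic differentiation, not the hand-coded stand-in used here.

```python
import numpy as np

def score_maximization(grad_fn, shape, steps=200, lr=0.1):
    """Gradient-ascent sketch of score maximization.

    grad_fn : returns the gradient of the model's score w.r.t. the input
              (a hypothetical stand-in for autodiff on the real network).
    shape   : shape of the input map, e.g. (n_cities, n_features).
    """
    rng = np.random.default_rng(0)
    x = rng.normal(scale=0.1, size=shape)  # small random initialization
    for _ in range(steps):
        x += lr * grad_fn(x)               # ascend the score surface
    return x
```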
\section{Conclusion}\label{sec:conclusion}
In this paper, two deep neural network architectures have been proposed and investigated to forecast weather elements. Moreover, a self-attention mechanism proved beneficial to these models, since it consistently improved the results. The analysis of the experimental results showed that a multistream input representation is globally more suitable for this task. In addition, interpretability techniques such as occlusion analysis and score maximization were used to extract the most relevant input features (i.e., weather features and cities). These methods revealed that, in general, Brussels is important for the prediction of the temperature since it is located in the center of the target cities, while cities near the sea such as Barcelona or Amsterdam are more relevant for the wind speed prediction. It was also shown that the dew point is an important feature for the prediction of the temperature, while the maximum wind speed and the condition heavily influence the wind speed prediction. From a temporal perspective, each model favors specific lags, with the unistream model placing more emphasis on the recent lags and the multistream model favoring the older lags. The data and code used can be found at \href{https://github.com/IsmailAlaouiAbdellaoui/weather-forecasting-explanable-recurrent-convolutional-NN}{github.com/IsmailAlaouiAbdellaoui/weather-forecasting-explanable-recurrent-convolutional-nn}.
\section*{Acknowledgment}
Simulations were performed with computing resources granted by RWTH Aachen University and Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html;charset=UTF-8"/>
<title>tclap: TCLAP::ValueLike Struct Reference</title>
<link href="tabs.css" rel="stylesheet" type="text/css"/>
<link href="doxygen.css" rel="stylesheet" type="text/css"/>
</head>
<body>
<!-- Generated by Doxygen 1.6.0 -->
<div class="navigation" id="top">
<div class="tabs">
<ul>
<li><a href="index.html"><span>Main Page</span></a></li>
<li><a href="namespaces.html"><span>Namespaces</span></a></li>
<li class="current"><a href="annotated.html"><span>Classes</span></a></li>
<li><a href="files.html"><span>Files</span></a></li>
</ul>
</div>
<div class="tabs">
<ul>
<li><a href="annotated.html"><span>Class List</span></a></li>
<li><a href="hierarchy.html"><span>Class Hierarchy</span></a></li>
<li><a href="functions.html"><span>Class Members</span></a></li>
</ul>
</div>
<div class="navpath"><a class="el" href="namespaceTCLAP.html">TCLAP</a>::<a class="el" href="structTCLAP_1_1ValueLike.html">ValueLike</a>
</div>
</div>
<div class="contents">
<h1>TCLAP::ValueLike Struct Reference</h1><!-- doxytag: class="TCLAP::ValueLike" -->
<p>A value like argument value type is a value that can be set using operator>>.
<a href="#_details">More...</a></p>
<p><code>#include <<a class="el" href="ArgTraits_8h_source.html">ArgTraits.h</a>></code></p>
<p><a href="structTCLAP_1_1ValueLike-members.html">List of all members.</a></p>
<table border="0" cellpadding="0" cellspacing="0">
<tr><td colspan="2"><h2>Public Types</h2></td></tr>
<tr><td class="memItemLeft" align="right" valign="top">typedef <a class="el" href="structTCLAP_1_1ValueLike.html">ValueLike</a> </td><td class="memItemRight" valign="bottom"><a class="el" href="structTCLAP_1_1ValueLike.html#a26e6d3b8c4a608ecebe7404e42fbecf9">ValueCategory</a></td></tr>
<tr><td colspan="2"><h2>Public Member Functions</h2></td></tr>
<tr><td class="memItemLeft" align="right" valign="top">virtual </td><td class="memItemRight" valign="bottom"><a class="el" href="structTCLAP_1_1ValueLike.html#aef7da69a6268964f450cf4c12e614ba7">~ValueLike</a> ()</td></tr>
</table>
<hr/><a name="_details"></a><h2>Detailed Description</h2>
<p>A value like argument value type is a value that can be set using operator>>. </p>
<p>This is the default value type. </p>
<p>Definition at line <a class="el" href="ArgTraits_8h_source.html#l00038">38</a> of file <a class="el" href="ArgTraits_8h_source.html">ArgTraits.h</a>.</p>
<hr/><h2>Member Typedef Documentation</h2>
<a class="anchor" id="a26e6d3b8c4a608ecebe7404e42fbecf9"></a><!-- doxytag: member="TCLAP::ValueLike::ValueCategory" ref="a26e6d3b8c4a608ecebe7404e42fbecf9" args="" -->
<div class="memitem">
<div class="memproto">
<table class="memname">
<tr>
<td class="memname">typedef <a class="el" href="structTCLAP_1_1ValueLike.html">ValueLike</a> <a class="el" href="structTCLAP_1_1ValueLike.html">TCLAP::ValueLike::ValueCategory</a></td>
</tr>
</table>
</div>
<div class="memdoc">
<p>Definition at line <a class="el" href="ArgTraits_8h_source.html#l00039">39</a> of file <a class="el" href="ArgTraits_8h_source.html">ArgTraits.h</a>.</p>
</div>
</div>
<hr/><h2>Constructor & Destructor Documentation</h2>
<a class="anchor" id="aef7da69a6268964f450cf4c12e614ba7"></a><!-- doxytag: member="TCLAP::ValueLike::~ValueLike" ref="aef7da69a6268964f450cf4c12e614ba7" args="()" -->
<div class="memitem">
<div class="memproto">
<table class="memname">
<tr>
<td class="memname">virtual TCLAP::ValueLike::~ValueLike </td>
<td>(</td>
<td class="paramname"></td>
<td> ) </td>
<td><code> [inline, virtual]</code></td>
</tr>
</table>
</div>
<div class="memdoc">
<p>Definition at line <a class="el" href="ArgTraits_8h_source.html#l00040">40</a> of file <a class="el" href="ArgTraits_8h_source.html">ArgTraits.h</a>.</p>
</div>
</div>
<hr/>The documentation for this struct was generated from the following file:<ul>
<li><a class="el" href="ArgTraits_8h_source.html">ArgTraits.h</a></li>
</ul>
</div>
<hr size="1"/><address style="text-align: right;"><small>Generated on Sat Apr 16 15:34:25 2011 for tclap by
<a href="http://www.doxygen.org/index.html">
<img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.6.0 </small></address>
</body>
</html>