This last section is for all the rest of what we would like to share in connection with The Memoirs of Billy Shears, or with its related books on this site. This page has a link to a website mentioned in the text, an FAQ section, and a link to a CONTACT form for MACCA CORP and/or its subsidiary, Peppers Press.
The Memoirs of Billy Shears and Beatles Enlightenment mention Linda's Lifetime Achievement Award from PETA (People for the Ethical Treatment of Animals). PETA asks us to question the reasonableness of eating meat, wearing leather, and of going to circuses or zoos. William also encourages vegetarianism. He praises the band, Death Cab for Cutie, for having "raised a meaningful voice to wean people off of their addictions to meat" (p. 113). William tells us to "love every creature," and to "Respect all life enough to give up eating meat, starting with all mammals" (p. 605). Recognizing that the full PETA position is too much for most people, he encourages the reader to take it a step at a time, saying, "If vegetarianism sounds hard, eat fish and poultry. Mammals are too closely related" (p. 546).
Please read the following frequently asked questions before emailing us with questions already answered.
1. Did Paul McCartney really die?
Yes, he's dead. The proof in this book is all verifiable.
2. When did Paul McCartney die?
Paul died late in the evening of 11 September 1966.
3. What was the first song that William Shepherd (the man who replaced Paul) took part in recording as one of The Beatles?
4. What was the first Beatles album showing William on the cover?
A Collection of Beatles Oldies, released three months after Paul died, depicted William on the cover--without any attempt to make him look like Paul.
5. What was the first Beatles album that William took part in recording?
Sgt. Pepper's Lonely Hearts Club Band.
6. How, when, and where did John Lennon first meet William Shepherd?
In the Memoirs, we see that Brian Epstein introduced William to John on 16 September 1966 (five days after Paul died) in Paris, France. That was the momentous occasion when William was presented to John as Paul's replacement. However, the book also hints at William's earlier roles. One point that is far more emphatic in the screenplay than in the book is that one of William's early roles was that of Phil Ackrill in the band, Denny Laine & the Diplomats. William (as "Phil Ackrill"), Brian Hines (as "Denny Laine"), and other Diplomats first met the Beatles on 5 July 1963 at the Old Hill Plaza (now the Platinum Plaza) about ten miles from the Diplomats' home in Birmingham, UK. The Diplomats were the opening act, warming up the audience for the Fab Four.
7. Did Paul McCartney write or dictate this book?
No, Paul died decades before this book was begun.
8. Did Sir Paul, or William Shepherd, write this book?
No, this book was written by the encoder, Thomas E. Uharriet.
9. Where did Uharriet get the material to write and/or encode this book?
Actually, although William (the current Paul) did visit the Louvre, and did say nearly those words, his exact words were, "Oh my God, I'm that famous guy!" spoken elsewhere. He said the line, "One of the beautiful people," in another conversation. In this example, three separate events were combined into one, with one word replaced (reflecting the hell that he created for himself by adopting that role).
10. Which parts of the book are non-fiction?
Everything proving that Paul died and was replaced is correct and historically verifiable. Most of the rest, with the exception of chapter 35, is also correct. Chapter 65 explains some of the other exceptions.
11. Did Uharriet really plan this book with Paul McCartney when they met at a beach in Southern California?
No. Uharriet (who was only seven years old when Paul died in 1966) never met Paul. The story about Uharriet meeting William (as Paul) on a Southern Californian beach one hot day after Paul left a recording studio in Los Angeles is merely a legend that was invented (in chapter 35, which is fictional) to explain this book. If chapter 35 were historically accurate, that disclosure to Uharriet would have constituted a breach of William's nondisclosure agreement--which William would never do. Hence, everyone can be certain that that chance meeting never occurred.
12. To what extent was William (who replaced Paul) involved in writing and/or encoding The Memoirs of Billy Shears (or Billy's Back!, etc.)?
The encoding was done entirely by Thomas E. Uharriet. Even though you, the reader, will realize that much of the material could only have been known by William (the new Paul), or by his closest confidants, and even though William is the primary source, we maintain the position that William "Paul" had nothing to do with it. Please do not email us requesting information that would contradict either this official position or our claim that chapter 35 is completely fictional. Although that chapter depicts a chance meeting in which "Paul McCartney" and Thomas E. Uharriet decide to create this book, we again affirm that chapter 35 is fictional. That chapter is NOT an admission of Paul's involvement in this project, or of breaching his non-disclosure agreement by revealing such things to the Encoder.
The Memoirs of Billy Shears, Billy's Back, Beatles Enlightenment, The Talent Contest, and Billy Shears Acrostical Decoding are published by Peppers Press, a subsidiary of MACCA CORP. If you have questions not already answered above, or if you have comments, or outlandish praise, you may use this form to contact us. However, when the mail volume becomes too great, especially with questions already answered above, this contact option will be removed. Please do not attempt to use this contact option to send personal messages to Sir Paul McCartney.
but merely as illustrating concepts of The Memoirs of Billy Shears.
Q: extract reCaptcha from web page to be completed externally via cURL and then return results to view page
I am creating a web scraper for personal use that scrapes car dealership sites based on my input, but several of the sites I am attempting to collect data from are blocked by a redirect to a captcha page. The current site I am scraping with cURL returns this HTML:
<html>
<head>
<title>You have been blocked</title>
<style>#cmsg{animation: A 1.5s;}@keyframes A{0%{opacity:0;}99%{opacity:0;}100%{opacity:1;}}</style>
</head>
<body style="margin:0">
<p id="cmsg">Please enable JS and disable any ad blocker</p>
<script>
var dd={'cid':'AHrlqAAAAAMA1gZrYHNP4MIAAYhtzg==','hsh':'C0705ACD75EBF650A07FF8291D3528','t':'fe','host':'geo.captcha-delivery.com'}
</script>
<script src="https://ct.captcha-delivery.com/c.js"></script>
</body>
</html>
I am using this to scrape the page:
<?php
function web_scrape($url)
{
    $ch = curl_init();
    $imei = "013977000272744";
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_POST, 1);
    curl_setopt($ch, CURLOPT_VERBOSE, 1);
    curl_setopt($ch, CURLOPT_COOKIE, '_ym_uid=1460051101134309035; _ym_isad=1; cxx=80115415b122e7c81172a0c0ca1bde40; _ym_visorc_20293771=w');
    curl_setopt($ch, CURLOPT_POSTFIELDS, array(
        'imei' => $imei,
    ));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $server_output = curl_exec($ch);
    curl_close($ch); // free the handle before returning, otherwise this line is never reached
    return $server_output;
}
echo web_scrape($url); // $url must be defined by the caller
?>
To reiterate what I want to do: I want to collect the captcha from this page so that, when I want to view the page details on an external site, I can fill in the captcha on my external site and then scrape the page that was initially requested.
Any response would be great!
A: Datadome is currently utilizing Recaptcha v2 and GeeTest captchas, so this is what your script should do:
*
*Navigate to redirection https://geo.captcha-delivery.com/captcha/?initialCid=….
*Detect what type of captcha is used.
*Obtain token for this captcha using any captcha solving service like Anti Captcha.
*Submit the token, check if you were redirected to the target page.
*Sometimes target page contains an iframe with address https://geo.captcha-delivery.com/captcha/?initialCid=.. , so you need to repeat from step 2 in this iframe.
I'm not sure whether the steps above can be done with PHP, but you can do them with browser automation engines like Puppeteer, a library for NodeJS. It launches a Chromium instance and emulates a real user's presence. NodeJS is a must if you want to build pro scrapers; it's worth investing some time in YouTube lessons.
Here's a script which does all steps above: https://github.com/MoterHaker/bypass-captcha-examples/blob/main/geo.captcha-delivery.com.js
You'll need a proxy to bypass GeeTest protection.
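For step 1, the fields needed to build the redirect URL are embedded in the `dd` object of the block page shown in the question. Below is a minimal Python sketch that parses them out; the exact query-string layout of the captcha URL beyond `initialCid` is an assumption, not documented DataDome behaviour:

```python
import json
import re

# The DataDome block page embeds a JS object literal `dd` with the fields
# needed to reach the captcha page (cid, hsh, t, host).
BLOCK_PAGE = """
<script>
var dd={'cid':'AHrlqAAAAAMA1gZrYHNP4MIAAYhtzg==','hsh':'C0705ACD75EBF650A07FF8291D3528','t':'fe','host':'geo.captcha-delivery.com'}
</script>
"""

def parse_dd(html):
    """Extract the dd object from a DataDome block page.

    Returns a dict, or None if the marker is absent.
    """
    match = re.search(r"var dd=({.*?})", html)
    if match is None:
        return None
    # The object literal uses single quotes; convert to valid JSON first.
    return json.loads(match.group(1).replace("'", '"'))

def captcha_url(dd, referer):
    """Assemble the redirect URL for step 1.

    The parameter names after initialCid are illustrative assumptions.
    """
    return ("https://{host}/captcha/?initialCid={cid}&hash={hsh}"
            "&t={t}&referer={ref}").format(ref=referer, **dd)

dd = parse_dd(BLOCK_PAGE)
print(captcha_url(dd, "https://example.com"))
```

From there the resulting URL is what you would hand to a captcha-solving service or open in an automated browser.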
A: Based on the high demand for code, HERE is my upgraded scraper, which bypassed this specific issue. However, my attempt to obtain the captcha did not work, and I still have not solved how to obtain it.
include "simple_html_dom.php";
/**
* Get a web file (HTML, XHTML, XML, image, etc.) from a URL. Return an
* array containing the HTTP server response header fields and content.
*/
// This function is where the magic comes from. It bypasses every piece of security carsales.com.au can throw at me.
function get_web_page( $url ) {
$options = array(
CURLOPT_RETURNTRANSFER => true, // return web page
CURLOPT_HEADER => false, // don't return headers
CURLOPT_FOLLOWLOCATION => true, // follow redirects
CURLOPT_ENCODING => "", // handle all encodings
CURLOPT_USERAGENT => "spider", // who am i
CURLOPT_AUTOREFERER => true, // set referer on redirect
CURLOPT_CONNECTTIMEOUT => 120, // timeout on connect
CURLOPT_TIMEOUT => 120, // timeout on response
CURLOPT_MAXREDIRS => 10, // stop after 10 redirects
CURLOPT_SSL_VERIFYPEER => false // Disabled SSL Cert checks
);
$ch = curl_init( $url ); // initialise the cURL handle for the target URL
curl_setopt_array( $ch, $options ); // apply the options defined above
$content = curl_exec( $ch ); // execute the request; holds the page HTML on success
$err = curl_errno( $ch ); // cURL error number (0 if the transfer succeeded)
$errmsg = curl_error( $ch ); // human-readable cURL error message, if any
$header = curl_getinfo( $ch ); // transfer metadata (HTTP code, content type, etc.) as an array
curl_close( $ch ); // free the cURL handle
$header['errno'] = $err; // append the error number to the result array
$header['errmsg'] = $errmsg; // append the error message
$header['content'] = $content; // append the page HTML
return $header; // return the metadata and content together, to be used in presenting the search results
}
// Using the function we just made, fetch the URL generated by the form.
$response_dev = get_web_page($url);
// print_r($response_dev);
$response = end($response_dev); // end() returns the last element, 'content' (the page HTML); the rest is debug info
RENUKA CHAUHAN

Articles written in Pramana – Journal of Physics

• Various properties of the 0.6BaTiO$_3$–0.4Ni$_{0.5}$Zn$_{0.5}$Fe$_2$O$_4$ multiferroic nanocomposite

Structural, magnetic and ferroelectric properties of the 0.6BaTiO$_3$–0.4(Ni$_{0.5}$Zn$_{0.5}$Fe$_2$O$_4$) multiferroic nanocomposite are presented here. The structural properties of the samples were studied by XRD and Raman spectroscopy, which confirm the formation of the BaTiO$_3$ (BTO) phase with a tetragonal perovskite structure and a small secondary spinel phase due to the ferrite content. The magnetic and electric orderings were investigated by vibrating sample magnetometer (VSM) and ferroelectric ($P$–$E$) loop tracer at room temperature. The inception of ferroelectric properties is due to barium titanate. The remnant polarization increases ∼5 times for the composite with Ni$_{0.5}$Zn$_{0.5}$Fe$_2$O$_4$ (NZFO) substitution compared to BTO. The remnant polarization is conducive for switching applications of the multiferroic composite.
\section{Introduction}
\label{intro}
A wide variety of observations supply evidence for the existence of dark matter (DM)~\cite{DM_rev,darkmatter}. Its nature, however, is so-far unknown, and attempts to elucidate it have given rise to a lively and varied research programme in physics. A common hypothesis is to consider dark matter to be made of new, unknown particles. The assumption that these particles are a thermal relic of the Big Bang leads to the conclusion that they are weakly interacting massive particles (WIMPs).
Different approaches are used to search for these particles: production at particle accelerators~\cite{Mitsou}, direct detection of the recoil from collisions with nuclei~\cite{Cline} or indirect detection by means of the secondary particles that they produce when they decay or annihilate~\cite{Gaskins}. Most of the particles that have been put forward as WIMPs candidates annihilate in pairs and subsequently produce standard model particles, including neutrinos. Neutrino telescopes may play a paramount role in the search for WIMPs via their annihilation products, because of their particularly clean signals and low expected backgrounds.
In this paper, the results from the search for dark matter in the Milky Way using data recorded with the ANTARES neutrino telescope from 2007 to 2015, with a total live time of 2102 days, are presented. Only neutrinos detected via muons produced inside or around the detector are considered. Here and in the following \enquote{neutrino} means $\nu_\mu + \bar{\nu}_\mu$, unless stated otherwise.
Section \ref{sec:2} presents how the neutrino flux can be derived from the annihilation of DM particles. The detector and the reconstruction method are described in Section \ref{sec:3}, while the new analysis methodology is explained in Section \ref{sec:4}. The results are presented in Section \ref{sec:5}.
Compared to work previously published~\cite{Ant_dmgc}, a considerably increased data sample is used and a maximum likelihood method or \enquote{unbinned method} is applied. In addition, more recent parameters for the DM halo in the Milky Way are used.
\section{Dark matter phenomenology}
\label{sec:2}
In this type of indirect search two important ingredients have to be considered: the amount and spatial distribution of dark matter in the source under consideration, and the energy spectra of the standard model particles produced by WIMP annihilation. These two features are to a large extent independent of each other. They are relevant for modelling the expected signal and enter into the analysis at different stages.
The signal spectra used for the analysis presented here were calculated using the code described in~\cite{Cirelli_spectra}. Spectra were obtained for five annihilation channels and 17 WIMP masses between 50 $\frac{\text{GeV}}{\text{c}^2}$ and 100 $\frac{\text{TeV}}{\text{c}^2}$. These spectra take into account the effect of neutrino oscillations. In the following, the results for each annihilation channel are given assuming a 100\% branching ratio. The five annihilation channels are:
\begin{equation}
\rm WIMP + WIMP \to b \bar{b}, W^+ W^-, \tau^+ \tau^-, \mu^+ \mu^-, \nu_{\mu} \bar{\nu}_{\mu}. \label{channels}
\end{equation}
Of these channels, the $b \bar{b}$-channel produces the softest neutrino spectra, whilst the $\nu_\mu \bar{\nu}_\mu$-channel produces the hardest spectra. Although the $\nu_\mu \bar{\nu}_\mu$-channel is suppressed in many models, such as those with the WIMP being the lightest neutralino of supersymmetric models, it is included in this study in order to be as model independent as possible.
The second ingredient, i.e.\ the amount and distribution of dark matter in the source, is described by the so-called J-Factor. The J-Factor, $J(\psi)$, is the integral of the dark matter density squared, $\rm \rho_{DM}^2$, over a line of sight at an angular separation $\psi$ from the centre of the source. The relative signal strength at an angular separation $\psi$ to the source is described by the expression $J(\psi) d\Omega(\psi)$. The J-Factor can be integrated over an observation window $\Delta \Omega$:
\begin{equation}
\rm J_{int}(\Delta \Omega)= \int_{\Delta \Omega} \int \rho^2_{DM} \cdot dl \cdot d\Omega. \label{J-def}
\end{equation}
$\mathrm{J_{int}}$ relates the thermally averaged annihilation cross--section $\langle\sigma \mathrm{v}\rangle$ to the neutrino flux $\Phi_{\nu_\mu + \bar{\nu}_{\mu}}$ via the following equation:
\begin{equation}
\rm \frac{d \Phi_{\nu_\mu + \bar{\nu}_{\mu}}}{dE_{\nu_{\mu} + \bar{\nu}_{\mu}}} = \frac{\langle\sigma \mathrm{v}\rangle}{8 \pi M_{WIMP}^{2}} \cdot \frac{dN_{\nu_\mu + \bar{\nu}_{\mu}}}{dE_{\nu_{\mu} + \bar{\nu}_{\mu}}} \cdot J_{int}(\Delta \Omega) , \label{flux-rel}
\end{equation}
\noindent where $\mathrm{ N_{\nu_\mu + \bar{\nu}_{\mu}}}$ is the average number of neutrinos in the energy bin $\mathrm{dE}_{\nu_{\mu} + \bar{\nu}_{\mu}}$ per WIMP annihilation, v is the WIMP velocity and $\mathrm{M_{WIMP}}$ is the WIMP mass.
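As a sanity check on units and scalings, the flux relation above can be evaluated numerically. The Python sketch below uses purely illustrative input values, not values from this analysis:

```python
import math

def neutrino_flux(sigma_v, m_wimp, dn_de, j_int):
    """Differential nu_mu + anti-nu_mu flux from the relation above:
    dPhi/dE = <sigma v> / (8 pi M_WIMP^2) * dN/dE * J_int.

    Consistent units are assumed, e.g. <sigma v> in cm^3 s^-1, M_WIMP in
    GeV, dN/dE in GeV^-1 and J_int in GeV^2 cm^-5, giving the flux in
    GeV^-1 cm^-2 s^-1.
    """
    return sigma_v / (8.0 * math.pi * m_wimp**2) * dn_de * j_int

# Purely illustrative inputs (not results of this analysis):
flux = neutrino_flux(sigma_v=3e-26,  # "thermal relic" cross-section, cm^3 s^-1
                     m_wimp=100.0,   # WIMP mass in GeV
                     dn_de=0.1,      # neutrinos per annihilation per GeV
                     j_int=1e22)     # integrated J-Factor, GeV^2 cm^-5
print(flux)
```

The $1/\mathrm{M_{WIMP}^2}$ dependence is why, at fixed $\langle\sigma \mathrm{v}\rangle$ and J-Factor, the expected flux drops steeply with the WIMP mass.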
The shape of the J-Factor crucially depends on the halo model. In this analysis three models are used: the NFW profile~\cite{NFW}, the Burkert profile~\cite{burkert} and the \enquote{McMillan} profile~\cite{mcmillan}. The parameters for these models are taken from~\cite{nesti_salucci} and~\cite{mcmillan} and are shown in Table \ref{nesti_salucci_par}. The McMillan profile is a variant of the Zhao profile~\cite{Zhao}, which treats one of the shape parameters, $\gamma$, as a free parameter and therefore is also referred to as the \enquote{$\gamma$ free} model. The optimum value of $\gamma$ for this model is $0.79 \pm 0.32$. The uncertainties on the halo profile parameters are not used in this analysis. In Figure \ref{Jint} the integrated J-Factors for the three models are shown. The NFW profile gives a larger total amount of dark matter that is also more concentrated in the core of the source than for the Burkert profile. This is due to the fact that the NFW profile is a so--called cuspy profile and diverges at the centre of the source, in contrast to the cored Burkert profile.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l||c|c|c|}
\hline
Parameter & NFW & Burkert & McMillan \\
\hline
\hline
$r_s$ $[kpc]$ & $16.1^{+17.0}_{-7.8}$ & $9.26^{+5.6}_{-4.2}$ & $17.6 \pm 7.5$ \\
\hline
$\rho_{local}$ $[GeV/cm^3]$ & $0.471^{+0.048}_{-0.061}$ & $0.487^{+0.075}_{-0.088}$ & $0.390 \pm 0.034$ \\
\hline
\end{tabular}
\caption{Table of dark matter halo parameters for the Milky Way as taken from \cite{mcmillan} and \cite{nesti_salucci}. $\rho_{local}$ is the local density and $r_s$ is the scaling radius.}
\label{nesti_salucci_par}
\end{center}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{JFactor.eps}
\caption{The integrated J-Factor, $J_{int}$, for a cone-shaped region $\Delta \Omega$ centred on the Galactic Centre with an opening angle $\Psi$. For the halo models the parameters from Table \ref{nesti_salucci_par} are used. The calculations are done using the code CLUMPY~\cite{CLUMPY}.}
\label{Jint}
\end{figure}
\section{Simulation and reconstruction}
\label{sec:3}
The ANTARES neutrino telescope~\cite{antares} is installed at the bottom of the Mediterranean Sea, about 40 km from Toulon and about 2475 m below the sea surface. Being located in the Northern hemisphere ($42^\circ 48^\prime$ N, $6^\circ 10^\prime$ E) allows the ANTARES detector to directly observe the centre of the Milky Way, using the Earth as a shield against the background from atmospheric muons.
ANTARES consists of 12 detector lines, each 450 m long, anchored to the seabed and kept vertical by buoys. Each line comprises 25 storeys with three 10--inch photomultipliers (PMTs)~\cite{PMT} per storey. The PMTs are housed inside pressure-resistant glass spheres~\cite{OM}.
The storeys also house the electronics to control the PMTs~\cite{DAQ} and a system to monitor the alignment of the lines~\cite{alignment}. For the synchronisation of the individual storeys a system of optical beacons~\cite{OB}, located at various points of the apparatus, is used~\cite{timing}.
In this analysis two muon track reconstruction strategies are used: $\Lambda$Fit and QFit. In the QFit strategy~\cite{BBFit} a $\chi^2$-like quality parameter, Q, is minimised. Q is calculated from the squared difference between the expected and measured times of the detected photons, taking into account the effect of light absorption in the water~\cite{BBFit}. This strategy allows for the reconstruction of events with photon hits on only one line (single-line events).
$\Lambda$Fit~\cite{AAFit_official} maximises a likelihood ratio $\Lambda$ in a multistep process. The value of $\Lambda$ of the final iteration of this process is used as a measure of the quality of the reconstruction. In addition, the angular error estimate $\beta$ is used to define a cut employed to reduce the background.
The main background for analyses using muon tracks is atmospheric muons. Taking advantage of the Earth acting as an efficient shield against muons, most of this background can be rejected by accepting only upgoing-reconstructed muons in the analysis. Thanks to the detector's latitude, the centre of the Milky Way is efficiently observed, since it is below the horizon most of the time. To further reduce the background of atmospheric muons wrongly reconstructed as upgoing, cuts on the parameters that quantify the quality of the reconstruction (Q, $\Lambda$), and on the estimate of the angular error ($\beta$) are used, as specified in the next section. Atmospheric neutrinos are an additional but much smaller part of the background. However, unlike atmospheric muons, this background is irreducible, although the information of the energy and correlations with the source can help to discriminate it from the signal.
In order to evaluate the sensitivity of the search, Monte Carlo simulations, using a detailed detector response for each data run, have been performed \cite{km3}. Concerning the background, atmospheric neutrinos \cite{genhen} and muons \cite{MUPAGE} with energies ranging from 10~GeV to 100~TeV have been simulated with the standard ANTARES simulation chain \cite{OM,biofouling,transmission}. From this simulation the detector resolution and acceptance are calculated for all five annihilation channels and for WIMP masses ranging from 50~$\frac{\text{GeV}}{\text{c}^2}$ to 100~$\frac{\text{TeV}}{\text{c}^2}$.
In this paper, data taken from 2007 to 2015, corresponding to 2102 days of live time, was used. The agreement between the data and the simulation has been tested extensively for both reconstruction strategies.
\section{Methodology}
\label{sec:4}
The maximum likelihood method is used to look for a signal of dark matter annihilation. The likelihood, which is a function of the number of signal events assumed to be present in the selected event sample, $\mathrm{n_s}$, is based on two probability distributions, S and B, which describe the behaviour of the signal and the background events, respectively, as a function of the relevant event variables. The likelihood is then maximised by varying $\mathrm{n_s}$. The statistical significance of the value obtained is extracted from the distribution of maximum likelihoods produced by generating pseudo-experiments, i.e.\ samples of events with known amounts of background and signal. The likelihood function used has the form
\begin{equation}
\cal{L} \rm (n_s) = e^{- (n_s+N_{\text{bg}})} \prod_{i = 1} ^{N_{tot}} \left( n_s S(\psi_i,N_{hit,i},\beta_i)
+N_{\text{bg}}B(\psi_i,N_{hit,i},\beta_i) \right), \label{lik1}
\end{equation}
\noindent where $\mathrm{N_{bg}}$ is the expected number of background events, which is set equal to $\mathrm{ N_{tot}}$, the total number of reconstructed events. $\mathrm{n_s}$ is the variable that changes during the maximisation process. The two functions S and B depend on: $\psi_i$, the angular distance of the $i$-th event to the centre of the Milky Way; $ N_{hit,i}$, the number of hits in the $i$-th event; and $ \beta_i$, the angular error estimate for the $i$-th event. The number of hits $\mathrm{N}_{\mathrm{hit,i}}$ is a proxy for the muon energy~\cite{ANT_ext}.
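As an illustration, the maximisation of the likelihood over $\mathrm{n_s}$ can be sketched with a simple grid scan. The per-event densities below are one-dimensional toys in $\psi$ only (the S and B of the analysis also depend on $N_{hit}$ and $\beta$), and the numbers are not taken from the data:

```python
import math
import random

def log_likelihood(n_s, n_bg, S, B):
    """Log of the likelihood above: L(n_s) = e^{-(n_s+N_bg)} prod_i (n_s S_i + N_bg B_i)."""
    return -(n_s + n_bg) + sum(math.log(n_s * s + n_bg * b)
                               for s, b in zip(S, B))

def maximise(n_bg, S, B, n_max=120):
    """Scan n_s on an integer grid; return (n_opt, TS = log10 L(n_opt)/L(0))."""
    n_opt = max(range(n_max + 1), key=lambda n: log_likelihood(n, n_bg, S, B))
    ts = (log_likelihood(n_opt, n_bg, S, B)
          - log_likelihood(0, n_bg, S, B)) / math.log(10.0)
    return n_opt, ts

# Toy event sample: flat background in psi, exponentially falling signal.
random.seed(42)
N_tot = 200
psi = [random.uniform(0.0, 30.0) for _ in range(N_tot)]
S = [math.exp(-p) for p in psi]   # toy signal density evaluated per event
B = [1.0 / 30.0] * N_tot          # toy flat background density
n_opt, ts = maximise(n_bg=N_tot, S=S, B=B)
print(n_opt, ts)
```

Because $\mathrm{n_s}=0$ is included in the scan, the resulting test statistic is non-negative by construction.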
In order to take the source extension into account, in S the non-integrated J-Factor, $\mathrm{J(\psi)}$, is used, smeared out with the point--spread function (PSF) assuming a 15\% systematic uncertainty on the angular resolution, which is the dominant systematic error from the detector in this analysis. This error is based on a 2.5 ns uncertainty in the timing of detected photon hits in ANTARES~\cite{PS}. By doing this, a combination of the PSF and the source morphology is obtained that is also used for generating signal events in the pseudo--experiments.
Further uncertainties exist due to the choice of the halo model and the expected neutrino signal spectra. These uncertainties are studied by using different annihilation channels and halo profile functions in the analysis (see Figure \ref{sv_allchannel} and \ref{sv_model}).
A slightly modified likelihood function is defined for single--line events reconstructed with the QFit strategy:
\begin{equation}
\cal{L} \rm (n_s) = e^{- (n_s+N_{\text{bg}})} \prod_{i = 1} ^{N_{tot}} \left( n_s \bar S(\theta_i,\bar{N}_{hit,i},Q_i)
+N_{\text{bg}}\bar B(\theta_i,\bar{N}_{hit,i},Q_i) \right) , \label{lik2}
\end{equation}
\noindent where $\mathrm{\bar{N}_{hit,i}}$ is the number of hits per storey (instead of the number of hits per PMT) used for the reconstruction, and $\theta_i$ is the difference in zenith angle between the $i$-th event and the centre of the Milky Way. $\mathrm{\bar S}$ and $\mathrm{\bar B}$ are the corresponding probability functions describing the signal and background distributions.
The likelihood functions are then studied using pseudo--experiments, which are generated from the distribution of background events from time--scrambled data and that of signal events from simulation. The signal events are generated by taking into account the angular resolution of the detector, the source morphology and the expected signal spectra. Ten thousand pseudo--experiments are simulated for each combination of WIMP mass, annihilation channel and reconstruction strategy, and for each considered value of signal events, $\mathrm{n_s}$. The maximum value considered for $\mathrm{n_s}$ is 80 for the QFit strategy and 120 (180) for the $\Lambda$Fit strategy using the NFW and McMillan (Burkert) profile. The maximum values were chosen because of differences in the amount of background in these cases. For each pseudo--experiment a test statistic (TS) is calculated:
\begin{equation}
\rm TS = \log_{10}\left(\frac{{\cal L}(n_{opt})}{{\cal L}(0)}\right),
\end{equation}
\noindent where $\mathrm{n_{opt}}$ is the value of $\mathrm{n_s}$ that maximises the likelihood function. Since for a fixed signal strength the amount of detected events may vary, the TS distributions were combined using Poissonian weights producing new TS distributions. Sensitivities and limits are calculated following the approach suggested by Neyman~\cite{Neyman}. The 90\% C.L. sensitivity in terms of detected neutrino events, $\bar{\mu}_{90\%}$, is calculated as the average number of inserted signal events, which leads to TS values that are in 90\% of the cases above the median of the TS distribution for pure background. The 90\% C.L. limit in terms of detected neutrino events, $\mu_{90\%}$, is calculated by using the TS value of the unblinded data instead of the median of the background if this TS value is above the median; otherwise the limit is set to the sensitivity.
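The Neyman construction described above can be illustrated with toy Gaussian TS distributions. The real analysis uses the pseudo-experiment TS distributions combined with Poissonian weights; the sketch below, which returns the smallest integer number of injected events passing the 90\% criterion, only captures the logic:

```python
import random
import statistics

def sensitivity_90(ts_bg, ts_by_ns):
    """Toy Neyman construction: return the smallest number of injected
    signal events whose TS distribution lies above the median of the
    background-only TS distribution in at least 90% of the trials."""
    median_bg = statistics.median(ts_bg)
    for n_s in sorted(ts_by_ns):
        trials = ts_by_ns[n_s]
        if sum(ts > median_bg for ts in trials) / len(trials) >= 0.9:
            return n_s
    return None

# Toy Gaussian TS distributions standing in for the pseudo-experiments:
random.seed(0)
ts_bg = [random.gauss(0.0, 1.0) for _ in range(10000)]
ts_by_ns = {n_s: [random.gauss(0.3 * n_s, 1.0) for _ in range(10000)]
            for n_s in range(41)}
mu90 = sensitivity_90(ts_bg, ts_by_ns)
print(mu90)
```

For the limit, the same construction is applied with the observed TS value of the unblinded data in place of the background median, provided it lies above that median.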
The event selection criteria, in particular the definition of the
cuts on Q and $\Lambda$ and the selection of the reconstruction strategy, have been optimised with the Model Rejection Factor method to obtain an unbiased cut selection for optimal sensitivities \cite{MRF}. The cut parameters have been tuned individually for each annihilation channel and several WIMP masses in the mass range under consideration, maintaining always a blind approach, i.e.\ with no access to the actual data.
It was found that for most combinations of WIMP mass and annihilation channel the optimum cuts are $Q < 0.7$ and $\Lambda > -5.2$, respectively. Once $\bar{\mu}_{90\%}$ (the 90\% C.L. sensitivity on the average number of signal events obtained from the likelihood function) is computed, the limit on the neutrino flux for a given mass $\mathrm{M_{WIMP}}$ and annihilation channel is calculated as
\begin{equation}
\overline{\Phi}_{\nu_\mu+\bar \nu_\mu,90\%} =
\frac{\bar \mu_{90\%}(\mathrm{M_{WIMP},ch})}{\sum\limits_{\mathrm{i}} \overline{\mathcal{A}}^{\mathrm{i}}(\mathrm{M_{WIMP},ch}) \times \mathrm{T_{eff}^{i}}} \, ,
\end{equation}
\noindent where the index $\mathrm{i}$ denotes the periods with different detector configurations, $\mathrm{ch}$ the annihilation channel used and $\mathrm{ {T_{eff}^{i}}}$ the total corresponding livetime. In fact, throughout the considered 9 years, the number of available detector lines has changed from 5 to 12. The time span over which the number of available lines remains unchanged is defined as a particular detector configuration period. The effective area averaged over the neutrino energy, $\overline{\mathcal{A}}^{\mathrm{i}}(\mathrm{M_{WIMP},ch})$, is defined as:
\begin{gather}
\overline{\mathcal{A}}^i= \\
\sum_{\nu,\bar{\nu}} \left( \frac{\int_{\mathrm{E_{\nu}^{th}}}^{\mathrm{M_{WIMP}}} \mathrm{A_{eff}^{i}}(E_{\nu,\bar{\nu}}) \, \left.\frac{dN_{\nu,\bar{\nu}}}{dE_{\nu,\bar{\nu}}}\right|_{\mathrm{ch,M_{WIMP}}} dE_{\nu,\bar{\nu}}}
{\int_{0}^{\mathrm{M_{WIMP}}}\left( \left.\frac{dN_{\nu}}{dE}\right|_{\mathrm{ch,M_{WIMP}}} \,+\, \left.\frac{dN_{\bar{\nu}}}{dE}\right|_{\mathrm{ch,M_{WIMP}}} \right) dE} \right) \, , \label{Acc1}
\end{gather}
\noindent where $\mathrm{E_{\nu}^{th}}$ is the energy threshold for neutrino detection in ANTARES (approximately 10 GeV), $\rm M_{WIMP}$
is the WIMP mass, $\rm dN_{\nu,\bar{\nu}}/dE_{\nu,\bar{\nu}}$ is the energy spectrum of the (anti-)neutrinos at the detector's location for annihilation channel $\mathrm{ch}$ (see Equation 1) and WIMP mass $\rm M_{WIMP}$, and $\rm A_{eff}^{i}(E_{\nu,\bar{\nu}})$ is the effective area of ANTARES as a function of the (anti-)neutrino energy.
Due to their different cross-sections, the effective areas for neutrinos and anti-neutrinos are slightly different and therefore are considered separately.
In addition, the fluxes of muon neutrinos and anti-neutrinos are different and are convoluted with their respective efficiencies. The effective area for a detector configuration period is defined as the ratio between the neutrino event rate and the signal neutrino flux for a certain neutrino energy. It is calculated from simulation.
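As a numerical sanity check, the limit computation above reduces to a spectrum-weighted average of the effective area followed by a division. The following Python sketch uses invented toy numbers (the spectrum, effective area and livetimes are assumptions for illustration, not the published ANTARES inputs):

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integration, written out explicitly for clarity."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# --- Toy inputs (all numbers are assumptions, for illustration only) ---
mu90 = 4.1                              # 90% C.L. sensitivity on signal events
E = np.linspace(10.0, 1000.0, 500)      # energy grid [GeV], E_th = 10 GeV
dN_dE = E ** -2.0                       # toy (anti-)neutrino spectrum dN/dE
A_eff = 1e-6 * (E / 100.0) ** 0.7       # toy effective area [m^2]

# Spectrum-averaged effective area, in the spirit of Eq. (\ref{Acc1}).
A_bar = trapz(A_eff * dN_dE, E) / trapz(dN_dE, E)

# Sum A_bar^i * T_eff^i over detector configuration periods (toy livetimes [s]).
livetimes = [0.8e7, 1.9e8]
denom = sum(A_bar * T for T in livetimes)

# 90% C.L. flux upper limit, following the flux-limit equation above.
flux_limit = mu90 / denom
print(f"flux limit ~ {flux_limit:.3e} m^-2 s^-1")
```

Doubling any livetime lowers the limit proportionally, matching the $1/\sum_i \overline{\mathcal{A}}^{i}\,\mathrm{T_{eff}^{i}}$ scaling.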
\section{Results}
\label{sec:5}
The final results are obtained by comparing the TS value of the data, $\mathrm{TS_{obs}}$, to the TS distributions previously calculated under the blinded procedure.
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\textwidth]{estimate.eps}
\caption{The number of events as a function of the distance to the Galactic Centre (crosses) in comparison to the background estimate (red line) for the $\Lambda$Fit reconstruction. For this plot a quality cut of $\Lambda > -5.2$ is used.}
\label{Estimate}
\end{figure}
In Figure \ref{Estimate} a comparison between the unblinded data and the expected background is shown. No significant excess above the background is observed, consistent with the fact that all the $\mathrm{TS_{obs}}$ values obtained are smaller than the medians of the corresponding background TS distributions. Since all background-like results reject the considered dark matter model equally, the upper limits are set equal to the sensitivities calculated from the pseudo-experiments.
The resulting upper limits in terms of neutrino flux are shown in Figure \ref{Flux}. For each annihilation channel and WIMP mass range, the reconstruction strategy, QFit or $\Lambda$Fit, which gives the best sensitivity is used in the final result. $\Lambda$Fit is used for $\rm M_{WIMP} \geq 260$ $\frac{\text{GeV}}{\text{c}^2}$ for the $\tau^+ \tau^-$ and $\mu^+\mu^-$ channels; for $\rm M_{WIMP} \geq 750$ $\frac{\text{GeV}}{\text{c}^2}$ for the $b \bar b$ channel; for $\rm M_{WIMP} \geq 150$ $\frac{\text{GeV}}{\text{c}^2}$ for $W^+ W^-$; and for $\rm M_{WIMP} \geq 100$ $\frac{\text{GeV}}{\text{c}^2}$ for the $\nu_\mu \bar{\nu}_\mu$ channel. For the remaining values, i.e.\ at low WIMP masses, the QFit results are used.
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\textwidth]{Flux.eps}
\caption{90\% C.L. upper limits on the neutrino flux from WIMP annihilations in the Milky Way as a function of the WIMP masses for the different channels considered. For this plot the NFW profile was used.}
\label{Flux}
\end{figure}
From the limits on the neutrino flux, limits on $\langle\sigma \mathrm{v}\rangle$ can be derived. The 90\% C.L. upper limit on $\langle\sigma \mathrm{v}\rangle$ for the $\tau^+\tau^-$ channel as a function of the WIMP mass is shown in Figure \ref{sv_comp}, compared with limits obtained by other indirect searches. Most of the direct search experiments are not directly sensitive to $\langle\sigma \mathrm{v}\rangle$. The limits for all annihilation channels for the NFW halo profile are shown in Figure \ref{sv_allchannel}.
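The text does not spell out the flux-to-$\langle\sigma \mathrm{v}\rangle$ conversion; as a hedged sketch (conventions and factors of two differ between analyses, and this is not necessarily the exact expression used here), for self-conjugate WIMPs the flux scales linearly with the annihilation cross-section through the line-of-sight integral $J$ of the squared halo density:

```latex
\begin{equation*}
\Phi_{\nu_\mu+\bar\nu_\mu}
  \;=\; \frac{\langle\sigma \mathrm{v}\rangle}{8\pi\,\mathrm{M_{WIMP}^2}}
        \; J \int_{0}^{\mathrm{M_{WIMP}}} \frac{dN_\nu}{dE_\nu}\, dE_\nu \, ,
\qquad
J \;=\; \int_{\Delta\Omega}\int_{\mathrm{l.o.s.}} \rho^2(r)\, dl\, d\Omega \, ,
\end{equation*}
```

so a 90\% C.L. upper limit on the flux maps onto $\langle\sigma \mathrm{v}\rangle_{90\%} \propto \overline{\Phi}_{90\%}\,\mathrm{M_{WIMP}^2}/J$, which is why the choice of halo profile (through $J$) matters so much in Figure \ref{sv_model}.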
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\textwidth]{SV_other_experiments.eps}
\caption{90\% C.L. limits on the thermally averaged annihilation cross--section, $\langle\sigma \mathrm{v}\rangle$, as a function of the WIMP mass in comparison to the limits from other experiments~\cite{IC_GC,IC_GC_CASC,FERMI_dwarf,FERMI_Mag,HESS_new}. The results from IceCube and ANTARES were obtained with the NFW profile.}
\label{sv_comp}
\end{figure}
The IceCube results presented in Figure \ref{sv_comp} (using tracks only~\cite{IC_GC} and using cascades as well \cite{IC_GC_CASC}) refer to the same channel and the same halo model, therefore the difference between the limits is due to the detector performance, position and integrated live time. The centre of the Milky Way is above the horizon of the IceCube detector and consequently the neutrino candidates correspond to downgoing events. To select neutrino candidates in the analyses of IceCube a veto for tracks starting outside the central part of the detector has to be used, which reduces the acceptance. This, in addition to the better angular resolution of ANTARES and the larger integrated live time in this analysis, explains the difference between the limits.
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\textwidth]{SV_channels.eps}
\caption{90\% C.L. limits on the thermally averaged annihilation cross--section, $\langle\sigma \mathrm{v}\rangle$, as a function of the WIMP mass for all annihilation channels using the NFW halo profile.}
\label{sv_allchannel}
\end{figure}
For the analysis by H.E.S.S. a different set of halo parameter values is used, leading to a more extended source. The results of Fermi-LAT and MAGIC are based on dwarf spheroidal galaxies and use the $\mathrm{b \bar{b}}$ annihilation channel. Results from direct detection experiments are not shown, since these experiments are typically not sensitive to $\langle \sigma \mathrm{v} \rangle$.
This result partially constrains models in which the extraterrestrial neutrinos observed by IceCube are partly explained in terms of annihilating dark matter candidates~\cite{MESE}. For WIMP masses above $100~\frac{\text{GeV}}{\text{c}^2}$ the limitations from partial-wave unitarity~\cite{unitarity} will become relevant, although there is an approach to overcome these limitations~\cite{profumo}.
In order to illustrate the large effect of the choice of the halo model and its profile parameters, a comparison of the upper limits derived using the NFW, the Burkert and the McMillan profiles is shown in Figure \ref{sv_model} for the $\tau^{+}\tau^{-}$ channel. Depending on the WIMP mass, differences of more than one order of magnitude are observed between the halo models.
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\textwidth]{SV_models.eps}
\caption{90\% C.L. limits on the thermally averaged annihilation cross--section, $\langle\sigma \mathrm{v}\rangle$, as a function of the WIMP mass for the three considered halo models for the $\tau^{+}\tau^{-}$ channel.}
\label{sv_model}
\end{figure}
\section{Conclusions}
The results from a new search for dark matter annihilation in the Milky Way, using data taken with the ANTARES neutrino telescope from 2007 to 2015, show no excess above the expected background. Limits at 90\% C.L. have been set for the NFW, the McMillan and the Burkert profiles, five annihilation channels and WIMP masses ranging from 50~$\frac{\text{GeV}}{\text{c}^2}$ to 100~$\frac{\text{TeV}}{\text{c}^2}$. These limits are the most stringent to date in part of this parameter space.
\section*{Acknowledgements}
The authors acknowledge the financial support of the funding agencies:
Centre National de la Recherche Scientifique (CNRS), Commissariat \`a
l'\'ener\-gie atomique et aux \'energies alternatives (CEA),
Commission Europ\'eenne (FEDER fund and Marie Curie Program),
Institut Universitaire de France (IUF), IdEx program and UnivEarthS
Labex program at Sorbonne Paris Cit\'e (ANR-10-LABX-0023 and
ANR-11-IDEX-0005-02), Labex OCEVU (ANR-11-LABX-0060) and the
A*MIDEX project (ANR-11-IDEX-0001-02),
R\'egion \^Ile-de-France (DIM-ACAV), R\'egion
Alsace (contrat CPER), R\'egion Provence-Alpes-C\^ote d'Azur,
D\'e\-par\-tement du Var and Ville de La
Seyne-sur-Mer, France;
Bundesministerium f\"ur Bildung und Forschung
(BMBF), Germany;
Istituto Nazionale di Fisica Nucleare (INFN), Italy;
Stichting voor Fundamenteel Onderzoek der Materie (FOM), Nederlandse
organisatie voor Wetenschappelijk Onderzoek (NWO), the Netherlands;
Council of the President of the Russian Federation for young
scientists and leading scientific schools supporting grants, Russia;
National Authority for Scientific Research (ANCS), Romania;
Mi\-nis\-te\-rio de Econom\'{\i}a y Competitividad (MINECO):
Plan Estatal de Investigaci\'{o}n (refs. FPA2015-65150-C3-1-P, -2-P and -3-P, (MINECO/FEDER)), Severo Ochoa Centre of Excellence and MultiDark Consolider (MINECO), and Prometeo and Grisol\'{i}a programs (Generalitat
Valenciana), Spain;
Ministry of Higher Education, Scientific Research and Professional Training, Morocco.
We also acknowledge the technical support of Ifremer, AIM and Foselev Marine
for the sea operation and the CC-IN2P3 for the computing facilities.
Q: How to multiply price with quantity? I know this question has been asked many times, but I could not find one which fits my condition. So far I have dynamically created a table with checkboxes. The rows which are checked are put into an array for further processing, and a table is generated for confirmation. My only issue here is that I am unable to multiply the quantity by the price. (Note: all rows have the same ids and names, as they are dynamic.)
Code -
<table class="table" id="myTable">
<thead class="text-danger">
<th>Name</th>
<th>Price</th>
<th>Quantity</th>
<th>Select</th>
</thead>
<tbody>
<?php while($row = mysqli_fetch_assoc($result)) { ?>
<tr>
<td><?php echo $row["name"]; ?>
<input type="hidden" name="proname" value="<?php echo $row["name"]; ?>" />
</td>
<td class="text-warning"><?php echo $row["price"]; ?>
<input type="hidden" name="price" id="pri01" value="<?php echo $row["price"]; ?>" />
</td>
<td>
<div class="numbers-row">
<div class="col-sm-2">
<div class="form-group">
<input type="text" class="form-control" name="qua" id="qua01" value="1" style="cursor:default" disabled />
</div>
</div>
</div>
</td>
<td>
<div class="checkbox">
<label>
<input id="name" type="checkbox" name="name" value="<?php echo $row["food_id"]; ?>"/>
</label>
</div>
</td>
</tr>
<?php } ?>
</tbody>
</table>
<center><button type="submit" class="btn btn-warning tble_submit" data-toggle="modal" data-target="#myModal">Order</button></center>
<!-- Here is where the second table is shown which displays only the rows which are checked in the first table -->
<div id="showData"></div>
JS -
$(function() {
$('.tble_submit').click(function(){
var values = [];
$("#add input[name=name]:checked").each(function(){
row = $(this).closest("tr");
values.push({
Id : $(this).val(),
Name : $(row).find("input[name=proname]").val(),
Price : $(row).find("input[name=price]").val(),
Quantity : $(row).find("input[name=qua]").val(),
// .val() returns strings, so parse both before multiplying price by quantity
Total : parseFloat($(row).find("input[name=price]").val()) *
        parseInt($(row).find("input[name=qua]").val(), 10)
});
});
// console.log(values);
if(values.length != 0){
var col = [];
for (var i = 0; i < values.length; i++) {
for (var key in values[i]) {
if (col.indexOf(key) === -1) {
col.push(key);
}
}
}
var table = document.createElement("table");
table.setAttribute("class", "table");
var tr = table.insertRow(-1); // TABLE ROW.
for (var i = 0; i < col.length; i++) {
var th = document.createElement("th"); // TABLE HEADER.
th.innerHTML = col[i];
tr.appendChild(th);
}
for (var i = 0; i < values.length; i++) {
tr = table.insertRow(-1);
for (var j = 0; j < col.length; j++) {
var tabCell = tr.insertCell(-1);
tabCell.innerHTML = values[i][col[j]];
}
}
var divContainer = document.getElementById("showData");
divContainer.innerHTML = "";
divContainer.appendChild(table);
} else {
document.getElementById("showData").innerHTML = "XXXX";
}
});
});
Look at the footballers. What are they wearing?
Look at the footballers. There are five boxes on the right. Click the triangle and choose the correct item. When you have finished click on the tick to check your answers.
I got it right every time.
To play the game you need to look at the picture of the footballer on the left. Then look at the 5 boxes on the right. Choose the correct answers to match the picture that you see. For example if the footballer has brown hair, you select 'brown hair'. Now do the same for shirt, shorts, socks and boots. When you have finished click on the tick to find out if you have the right answers.
I hope that helps. Have fun!
In football, boots are called studs; can you please change it?!
The Bisu T5 is a 7-seat mid-size CUV produced by Bisu Auto, a brand of the Chongqing Bisu Automotive Corporation, which is closely related to Beiqi-Yinxiang, a joint venture between Beijing Auto (Beiqi) and the Yinxiang Group.
Overview
The Bisu T5 officially debuted during the 2017 Shanghai Auto Show as the third Bisu product, following the Bisu M3 and the Bisu T3. The Bisu T5 shares the same platform as the Huansu H5; while the Bisu T5 is a 7-seater crossover MPV, the Huansu H5 is a 6-seater vehicle. Prices of the Bisu T5 ranged from 72,900 to 104,900 yuan at launch.
Powertrain
The power of the Bisu T5 comes from a turbocharged 1.5-liter engine with 147 hp, mated to either a six-speed manual transmission or a five-speed automatic transmission.
References
External links
Bisu Official Website
T5
Cars introduced in 2017
Crossover sport utility vehicles
Front-wheel-drive vehicles
2010s cars
Cars of China
Poster: Do Less, Get More: Streaming Submodular Maximization with Subsampling
Moran Feldman · Amin Karbasi · Ehsan Kazemi
Thu Dec 06 07:45 AM -- 09:45 AM (PST) @ Room 210 #75
https://neurips.cc/Conferences/2018/ScheduleMultitrack?event=11095
In this paper, we develop the first one-pass streaming algorithm for submodular maximization that does not evaluate the entire stream even once. By carefully subsampling each element of the data stream, our algorithm enjoys the tightest approximation guarantees in various settings while having the smallest memory footprint and requiring the lowest number of function evaluations. More specifically, for a monotone submodular function and a $p$-matchoid constraint, our randomized algorithm achieves a $4p$ approximation ratio (in expectation) with $O(k)$ memory and $O(km/p)$ queries per element ($k$ is the size of the largest feasible solution and $m$ is the number of matroids used to define the constraint). For the non-monotone case, our approximation ratio increases only slightly to $4p+2-o(1)$. To the best of our knowledge, our algorithm is the first that combines the benefits of streaming and subsampling in a novel way in order to truly scale submodular maximization to massive machine learning problems. To showcase its practicality, we empirically evaluated the performance of our algorithm on a video summarization application and observed that it outperforms the state-of-the-art algorithm by up to fifty-fold while maintaining practically the same utility. We also evaluated the scalability of our algorithm on a large dataset of Uber pick-up locations.
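The subsample-then-select idea described in the abstract can be caricatured in a few lines of Python. This toy sketch uses a plain cardinality constraint and a coverage objective (the paper's actual algorithm handles $p$-matchoid constraints and element swaps); the point is that the objective is only evaluated for elements that survive the coin flip:

```python
import random

def f(S, sets):
    """Monotone submodular coverage function: number of items covered."""
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

def subsampled_streaming_greedy(stream, sets, k, q, rng):
    """Toy sketch: subsample each arriving element with probability q,
    then greedily keep it if it fits the budget and has positive gain.
    (Illustration only, not the paper's full algorithm.)"""
    S = []
    for e in stream:
        if rng.random() > q:
            continue                      # skipped without any evaluation
        if len(S) < k and f(S + [e], sets) > f(S, sets):
            S.append(e)
    return S

# Hypothetical ground set: element -> set of items it covers.
sets = {0: {1, 2}, 1: {2, 3}, 2: {4, 5, 6}, 3: {1}}
S = subsampled_streaming_greedy([0, 1, 2, 3], sets, k=2, q=1.0,
                                rng=random.Random(0))
print(S, f(S, sets))    # → [0, 1] 3
```

With q < 1 the run becomes randomized, trading a small loss in expected objective value for fewer function evaluations, which is the core of the streaming-with-subsampling trade-off.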
Q: What is a diabolical point? A lot of papers define a 'diabolical point' as a "double semi-simple eigenvalue." I know a semi-simple eigenvalue is one which has algebraic multiplicity and geometric multiplicity to be equal. However, I could not find any definition of a double semi-simple eigenvalue.
A: "Double" simply means a degenerate eigenvalue (repeated root of the characteristic equation), thus a "double semi-simple eigenvalue" is a once repeated eigenvalue (i.e., with algebraic multiplicity 2) that spans a 2D vector space (i.e., its geometric multiplicity is also 2).
You can check, e.g., the second section of this paper (e-print), or 4.1 of this (e-print), or section 9.2.4 of this book, or, apparently, chapter 5 of this book.
These points are relevant mostly because they are associated with systems at bifurcations, i.e., structurally unstable systems whose behavior can change qualitatively under small perturbations. Such sensitivity has been exploited to build very sensitive sensors; even more sensitive than diabolical points are the more degenerate "exceptional points" (where "not only do resonant frequencies coincide but their resonant modes do too"). Both situations are schematically illustrated for light propagation modes (see this article) in the following figure, which shows the modes split with growing perturbation intensity $\epsilon$:
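The distinction is easy to verify numerically. In this hypothetical 2×2 example, both matrices have the double eigenvalue 1, but only the first (the diabolical, semi-simple case) keeps geometric multiplicity 2; the Jordan block is defective, as at an exceptional point:

```python
import numpy as np

# Double semi-simple eigenvalue: lambda = 1 is a repeated root AND
# spans a 2D eigenspace (geometric multiplicity 2).
diabolical = np.array([[1.0, 0.0],
                       [0.0, 1.0]])

# Defective case: lambda = 1 is again a double root, but the Jordan
# block has only ONE independent eigenvector.
exceptional = np.array([[1.0, 1.0],
                        [0.0, 1.0]])

def geometric_multiplicity(A, lam):
    """Dimension of the eigenspace ker(A - lam*I)."""
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n))

print(geometric_multiplicity(diabolical, 1.0))   # 2 -> semi-simple
print(geometric_multiplicity(exceptional, 1.0))  # 1 -> defective
```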
News>Quick Service
Video: Wienerschnitzel beefs up drive-thru experience in COVID-19
Hot-dog concept sees sales increase 22% with increased efficiency, enhancements
Ron Ruggless | Nov 24, 2020
Among bright spots in the COVID-19 pandemic, hot-dog concept Wienerschnitzel saw sales increase an average 22% between May and the end of October. And to accommodate that six-month increase, the company enhanced its drive-thru experience to make it faster and more efficient.
Wienerschnitzel, owned by parent Irvine, Calif.-based Galardi Group Inc., has deployed a number of operational changes to hone that drive-thru experience.
"Our drive-thru was already efficient," said Rusty Bills, Galardi's vice president of operations. "Our drive-thru was already designed for this kind of volumes. We had some stores that weren't used to having 60%, 70%, 80% of sales growth in a month. That threw some curve balls at us."
The drive-thru, which has grown in popularity during the coronavirus pandemic as a contactless mode of service, has been a large part of Wienerschnitzel's sales, traditionally making up about 65% of the total before the pandemic.
"We always look at it from the inside out," Bills said. "We wanted to make sure we didn't complicate the kitchen. … We worked with marketing to come up with a kind of pre-sale board." Similar to a real-estate sign, it featured core items and combo meals, he said.
"What that did is that allowed people to come up to the menu and already know what they needed to order," Bills said.
The beverage program, especially the flavored drinks, saw increased sales during the pandemic, he said, and those orders are fulfilled by the drive-thru worker. And with increased drive-thru business, every second mattered, Bills added.
In the shake program, for example, "we removed whipped cream and extra garnish on top of our shakes," Bills explained. "It sounds so silly, but … seconds matter. This was 25 to 30 seconds that it took for every crew member just to do the whipped cream and add the extra garnish."
Wienerschnitzel also printed lines on the cups so crew members could measure the mix easily, he said, and create efficiency.
Bills said the brand expanded a small test of line-buster ordering tablets into 20 restaurants to expedite orders. The tablets are an extension of the Micros point-of-sale system, he said, noting that the tablets do require added WiFi cabling, which is a challenge in retrofitting older stores for the brand that was created in 1959. The cabling is specified as standard in new stores.
The company also introduced scanners for coupons, a big part of Wienerschnitzel's marketing, this year as well, he said.
This week, Wienerschnitzel introduced mobile ordering in a test at 20 restaurants, Bills said. "We wanted to make sure we could consolidate some of our third-party [delivery] partners," he added. The company is working with Olo on that integration. It is eyeing a possible rollout in March, which also opens the possibility of catering.
Bills said the primary focus of the brand has always been drive-thru, with some units having only 15 to 20 seats. So indoor-dining restrictions had minimal effect on most units. "We were definitely built for this," said Bills.
The company also added a third last-mile delivery partner this year, and delivery sales doubled this year, Bills said.
The greatest share, or 212, of Wienerschnitzel's 349 units are in California, which has restricted indoor dining during the pandemic.
Some regions, such as Texas, already saw heavy drive-thru use. In Texas, he said, about 90% of traffic before COVID-19 was through the drive-thru.
Bills said the comfort-food focus of the menu, which has a best-seller of a chili-cheese dog, has helped the brand during the pandemic.
Wienerschnitzel has units in 11 states. Parent Galardi Group also franchises the Hamburger Stand and Tastee-Freez LLC brands.
Contact Ron Ruggless at [email protected]
Follow him on Twitter: @RonRuggless
TAGS: Marketing Delivery & Takeout Solutions Coronavirus Restaurants Ready
Jana Mandikova is a Danish former football player who played for Femina.
Jana Mandikova was part of the Femina team that became world champions in women's football in 1970 in Italy, after a 2-0 victory over Italy.
Mandikova nearly cost Denmark the victory because of a possible Italian protest that she and another Danish player, Maria Sevicikova, were not Danish citizens but political refugees from Czechoslovakia; in the end, the Italians did not protest.
Check birth year
Football players from Denmark
Danish women's international footballers
People from Czechoslovakia
Refugees
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Return to Index
DICK FAY
prior to February, 1958 - prior to January 9, 1960
Prior to WDRC's pop music era, former U.S. Marine Dick Fay did an all-night music show in 1958, during which he offered listeners a list of gas stations and garages in Greater Hartford. Born in East Hartford on March 19, 1927, Dick was half of the Bacon & Fay morning team (with Bob Bacon) from mid 1959 through January, 1960. During his time at WDRC Fay lived in Newington, and later, Windsor Locks. At WINF, WSOR and WRYM he co-hosted the Dick and Anne Show with his wife. They moved to Florida in 1966.
PRIOR: WHAY New Britain/Farmington, CT; WGTH New Britain/Hartford, CT
AFTER: WINF Manchester, CT; WSOR Windsor, CT; WRYM Newington, CT; WMMB Melbourne, FL
TODAY: Richard Lloyd Fague was working as a supply expediter for InDyne, Inc., at the Kennedy Space Center in Florida when he died August 10, 2002 at the age of 75; see his note (3-9-02).
MIKE FELDMAN
prior to April 17, 1987 - ?
Mike is a Connecticut School of Broadcasting graduate. He attended UCONN and University of California in Los Angeles and won some AP awards for his journalism work, including at WDRC. He changed career direction and now works in public relations.
PRIOR: WLIS Old Saybrook, CT
AFTER: several technology and public relations firms in New York, NY
TODAY: Geltzer & Company, New York, NY (e-mail).
Click for Bradley Field
BRADLEY FIELD
August, 1967 - June 18, 1969
In 1967, the FCC required AM/FM stations to stop simulcasting a certain percentage of time based on market size. In WDRC's case, its 20-hour broadcast day needed to separate half the time. Bradley Field, a native of Queens, NY, was one of the first announcers hired specifically to do an FM shift (7AM-noon). Referred to on-air by Sandy Beach as "Andy Panda," listeners might remember him as "Bradley Bucket." He laughed readily and always presented a fun program. In 1968 he took over afternoon drive on AM. During winter months, WDRC changed from day to night pattern during his shift. He delighted in concocting elaborate build-ups to the moment of silence which occurred when the antenna was switched. He later went into station management, owning two stations in Louisiana and nine stations in Colorado. And, no, that's not his real name.
PRIOR: WHBI FM Newark, NJ; WGLI Long Island, NY; WKBW Buffalo, NY
AFTER: WCCC Hartford, CT; WHB Kansas City, MO; KING Seattle, WA; WFAA Dallas, TX; WXYZ Detroit, MI; KGFT/KBIQ Colorado Springs, CO; WWL New Orleans, LA; KAPB AM/KWLB FM Marksville, LA; KAUM & KZFX, both Houston, TX; KOA Denver, CO; KFKA AM Greeley, CO
TODAY: Kenneth William Sasso passed away on March 24, 2004 at the age of 57 from congestive heart failure; click here for more.
AL FLETCHER
1961 - after July 14, 1963
A Worcester native, and graduate of Leland Powers School of Radio and Theatre in Boston, Alan was a news anchor during his time at Radio Fun. He was news director at WINF in 1964.
PRIOR: WSYB Rutland, VT; WNHC AM/TV New Haven, CT; WSAI Cincinnati, OH; WADS Ansonia, CT; WHIM Providence, RI
AFTER: WINF Manchester, CT; Channel 13, Baltimore, MD
TODAY: Alan R. Fletcher, Sr. retired from The Traveler's Insurance Company in 1991; see his note (10-6-03) (e-mail)
ED FLYNN
Summer months are notoriously tough to staff at radio stations due to staff vacations. Charlie Parker borrowed this seasoned pro for several Sunday morning shifts. Ed was also an instructor at the Connecticut School of Broadcasting.
PRIOR: WAVZ & WELI, both New Haven, CT
AFTER: WAVZ New Haven, CT; WWCO Waterbury, CT; WELI New Haven, CT; WTIC Hartford, CT
TODAY: At the end of 2009, Ed retired from regular duty at WATR Waterbury after 55 years in broadcasting. (1-2-10)
prior to August 28, 1999 - spring 2002
Tom was a fixture for many years at Channel 30 (1970-89), serving as chief booth announcer and subbing in news, weather and sports; he also hosted a weekly half hour talk show. Later, while owning a photo store in Cromwell, he filled in on news and music shifts at WDRC AM.
PRIOR: Armed Forces Radio, Fort Huachuca, AZ; WHAY New Britain, CT; WBIS Bristol, CT; WHNB TV West Hartford, CT;
TODAY: Tom is retired and living in Connecticut; see his note (1-9-00) (e-mail)(1-14-16).
ALICE GORDON FRASER
June 1943 - ?
Alice was born in Hartford and graduated from Mount St. Joseph Academy in West Hartford in 1939. She received a bachelor's degree from the Julius Hartt School of Music in 1943. She was the daughter of Thomas G. Fraser, chairman of the Hartford War Council. At the age of 21 Alice was hired to replace announcer Russ Naughton and trained to take over his morning program when he joined the Army. She also served as part-time arranger of musical programs at WDRC. The Hartford resident also sang outside of the station. Later, while working in the music library of a New York City station, Alice sang with the Radio City Music Hall choral group and performed monthly with the NBC Opera Theatre on television. She was a registered stockbroker before returning to Connecticut and earning a master's degree in education at the University of Hartford. She was a reading teacher in the Hartford school system for more than 25 years.
PRIOR:
AFTER: WOR New York, NY; NBC Television, New York, NY
TODAY: Alice died in New London on January 22, 2000 at the age of 78.
click for George Freeman interview
December, 1959 - after September 4, 1960 & 1961-1967
A native of Youngstown, OH, and speech major at Heidelberg College, George's on-air career at Big D only spanned a year (as news director), though he also worked there as an account executive from 1963-67. George bought the first of several radio stations in 1969, selling his last property (on the Kentucky/Indiana border) in 1999.
PRIOR: WKST AM/TV New Castle, PA; WHOT and WBBW, both Youngstown, OH; WNHC New Haven, CT; WNBF AM/TV Binghampton, NY
AFTER: WNEW TV New York; WHCT TV Hartford, CT; WCCC A/F Hartford, CT; owned WGON AM/WQXO FM Munising, MI; KGRI A/F Henderson, TX; WDGS New Albany, IN; and WIKI FM Carrollton, KY
TODAY: George is retired from radio in Madison, IN but continues his lifelong interest in the industry. He was vice president of the Indiana Historical Radio Society and a frequent contributor to early radio publications (3-15-15) (e-mail).
TINA GAO
before August 16, 2011-after June 23, 2012
Tina was born in Wuchan, China and attended Ecole Secondaire Saint-Luc in Montreal and Boston Latin School, and graduated from UMASS/Amherst with a Bachelor of Arts in Communication in 2003. Her broadcast career has been based in Beantown, though her voice has been heard on many New England stations by virtue of her traffic reporting for Metro. That is how she came to be heard with Jerry Kristafer & Mike Stevens on DRC FM's morning show in 2011. Tina had a nine-year run at Greater Media's Boston station, where she appeared on air, hosted and produced a program called Exceptional Women, served as public service director, filled in on news, and hosted a Sunday morning jazz program. She is fluent in English, Chinese and French and has done extensive acting and voiceover work.
PRIOR: WKLB Boston, MA; Westwood One/Metro Traffic Networks Boston, MA; WMJX Boston, MA
AFTER: WHDH TV Boston, MA; WBZ Boston, MA
TODAY: In February 2017 she began anchoring midday newscasts at WBZ in Boston (8-6-20).
JOHNNY GARDNER
September 1990-September, 1992
PRIOR: WIOF Waterbury, CT; WCCC Hartford, CT; WWCO Waterbury, CT
AFTER: WWYZ Waterbury, CT; WAXB Danbury,CT/Patterson, NY
TODAY: Today Johnny is a letter carrier who lives in West Hartford and does occasional fill in radio work; see his note (7-23-05) (e-mail).
DAVE GAREY
March - September 15, 1980
Dave worked the 7PM-midnight shift on WDRC FM, replacing Charlie Daniels.
TODAY: ?
JAMES F. GARRETT
September 15, 1942 - 1946
Jim grew up in Columbus Grove, Ohio and graduated from what is now the University of New Mexico. His radio career started there and brought him to Minneapolis before joining WDRC as a staff announcer in the mid 1940s. He was nicknamed Sunny Jim Garrett because of his bright disposition hosting WDRC's wakeup program at 6:00 a.m. In May 1945 Jim made a series of bedside broadcasts while a patient at St. Francis Hospital. He later spent 40 years as a booth announcer at Detroit's 50,000 watt WJR, retiring in 1986.
PRIOR: New Mexico; Minneapolis; WFBM Indianapolis, IN; WLOK Lima, OH
AFTER: WJR Detroit, MI
TODAY: Jim passed away September 26, 2000 at the age of 82.
JOHN GARRY
spring, 1981 - December, 1982
PRIOR: WNLC New London, CT; WVVE Stonington, CT; WKND Windsor, CT
AFTER: WHTX Pittsburgh, PA
TODAY: WMXP Pittsburgh, PA (11-91).
AL GATES
June 2, 1969 - January 17, 1970
A native of Hamburg, NY, Al attended Ithaca College and did a tour of duty around the world as a NATO photojournalist. He was obviously a creative guy, working for a time as a commercial artist. His short stay at Big D was entirely spent simulcasting morning drive, replacing Jim Jeffrey. It's not exactly accurate to say he did a team show, but he did have frequent company from his duck, Feathers, Akbar Mytie, Mary Margaret, Readings With Charles (and announcer, Dwayne). After his departure from WDRC, the entourage moved over to talk-formatted WINF. Incidentally, Gatesy's on-air work at WPOP was only in the form of spots--he was an account executive there before seeking his fortune in freelance voice work.
PRIOR: WHYN Springfield, MA; WSPR Springfield, MA; WPRO Providence, RI; WIXY Cleveland, OH; WRKO Boston, MA
AFTER: WINF Manchester, CT; WPOP Hartford, CT
TODAY: Al lived in Wilton, CT for many years but now lives in Providence where he continues a very successful freelance voice career; see his note (3-6-14) (e-mail).
KEN GILBERT
late 1980 - May 14, 1990
A graduate of Glastonbury High School, Ken learned by doing at Dick Robinson's Connecticut School of Broadcasting. Prior to Big D Ken worked in Vermont and was program director and afternoon driver at Springfield's WACKY 102. During his lengthy stay at WDRC FM, Ken was host of Jukebox Saturday Night. In fact he was the first personality on the air when DRC switched from Top 40 to oldies.
PRIOR: WFAD & WCVM Middlebury, VT; WAQY Springfield, MA; WTIC FM Hartford, CT
AFTER: WHYN FM Springfield, MA; Angelsea Productions, Hartford CT; WMNW Port Henry, NY; WDEV AM/FM Waterbury, VT; WLVB Morrisville, VT
TODAY: Today Ken is the afternoon drive personality at WVTK FM in Middlebury, VT (1-18-20) (e-mail).
MIKE GRADY
prior to May 26, 1979 - September, 1981
Mike served as a program assistant to Charlie Parker and did various fill-in shifts on both stations, including the overnight shift.
TODAY: Stop 'N Shop, Southington, CT.
BARRY GRANT
July 3, 1972 - August 24, 1975 and mid 1977 - 1978
This Manchester native and UCONN graduate was the overnight man on WDRC AM/FM during his first stay. "Grant's Tomb" had a heavy rock feel and Barry was the exact opposite of the hype-oriented deejay. Since Barry had a First Class ticket and could monitor the transmitters overnight he worked from the FM studio, hence requests during his show were phoned in to 278-8310. In 1984 he won a Billboard magazine Personality of the Year Award. He earned the same award at WPLR and WMAD, where he was program director, and both stations also were chosen Billboard Stations of the Year. He also won an award in the Drake-Chenault national Talent Search. Barry's return to WDRC involved shifts exclusively on D103 and a stint as program director. Barry spent 26 years doing play-by-play for the N.Y. Islanders Hockey team and other professional franchises. He later worked in the banking industry.
PRIOR: WEMJ Laconia, NH; WINF Manchester, CT; WKSS Hartford, CT; WAAB FM Worcester, MA
AFTER: WPLR New Haven, CT; WYDD Pittsburgh, PA; WMAD Madison, WI; WRHD AM/WRCN FM Riverhead, NY; WLNG Long Island
TODAY: Barry passed away unexpectedly at his Manchester home on October 9, 2012.
LAUREN GREY
January, 1994 - after May 15, 1995
Lauren was the morning news anchor for Brad Davis on AM and Jerry Kristafer on FM.
PRIOR: WHYN Springfield, MA
TODAY: WGBB TV Springfield, MA (9-7-96).
KEN GRIFFIN
October, 1966 - April 17, 1970 and July-September, 1977
Born June 29, 1937, Ken was already very well-known in the Hartford/Springfield market so when Dick Robinson was ready to move into a daytime sales position, Charlie Parker pulled off a brilliant strategic move by replacing him with his primary competitor. Ken's regular complement of characters moved with him from WPOP: Phats Phontoon (the one and only, lovable weather balloon) and her boyfriend, Rocky Hill. Most listeners never realized their lengthy dialogues were all voiced, live, by Ken. He was a hit with teenagers - Ken once accompanied a contest winner to Hollywood to meet the Monkees. Most of his time at Big D was from 8PM-1AM on AM and FM, but for a time in early 1968 he did afternoon drive on AM. Eager to take a stab at show business on the west coast, Ken quietly slipped over to the midday slot on WDRC FM for his last few weeks in Hartford.
In Los Angeles, he spent several years working for Buckley's KGIL. Ken returned to Hartford seven years later and found the market--and radio--quite different. He did numerous fill-in shifts and helped Charlie Parker in programming before leaving Big D again. One of WDRC's most popular personalities, Ken always had a special rapport with the kids who listened to him every night.
PRIOR: WBRY, WATR & WWCO all Waterbury, CT; WBUR FM & WBOS Boston, MA; WBRY Waterbury, CT; WHYN Springfield, MA; WPOP Hartford, CT
AFTER: KGOE Thousand Oaks, CA; KGIL San Fernando, CA; KIIS Los Angeles, CA; WRCQ Farmington, CT; WMAS Springfield, MA; WIOF Waterbury, CT; WRCQ Farmington, CT; WWYZ/WATR Waterbury, CT; publishing business in Boston; WCCF Punta Gorda, WENG Englewood & WAMR Venice, FL; WKII Punta Gorda, FL
TODAY: Ken suffered a massive heart attack and died at his home in Punta Gorda on Tuesday, September 28, 2010; he was 73.
The 'Younger' Women Screw Up Even As They Succeed & That's Exactly What Writer Ashley Skidmore Hoped To Portray
By Julia Emmanuele
The hit comedy Younger centers around two women working their way up the ladder of the publishing industry: Liza (Sutton Foster), a 40-something divorced mother who lies about her age in order to get a job, and Kelsey (Hilary Duff), her 20-something BFF whose career keeps rising while her love life falls apart. It's fitting, then, that Younger writer-producer Ashley Skidmore lands somewhere between the two in terms of her own work-life balance, though as she tells Bustle, she's a bit more of a Kelsey than a Liza.
"I so identify with her and the struggles of rising too fast, or having all of these responsibilities, but then in my romantic life, still being such a f*ck up," the 28-year-old Skidmore says, speaking over the phone. In fact, it's how Younger portrays the dichotomy between women's private and public selves — and the idea of being on top of one aspect of your life while another is a total mess — that's what makes the show so relatable. According to Skidmore, who has written for the series since Season 1, that realistic complexity of women's lives is what Younger is always striving to depict.
"I think that's why Younger has been such a fun job for me," Skidmore says. "I feel right now, this time period is the beginning of a radical feminist theme in television. I think that the next step is really female-centric content."
Through Younger, whose Season 6 will premiere this summer on the Paramount Network, Skidmore is creating the kind of roles for women that she's always wanted to see. "I went to school for acting, and then the first day out of school, I became a casting director," she explains. "Just because the roles out there at the time ... were really disempowering for women our age."
It's true; at the time, female-led films and TV shows — from 2014's stereotypical The Other Woman to 2013's fat joke-filled sitcom Super Fun Night — weren't too great when it came to depicting realistic women. These projects may have provided opportunities for female actors and creators, but that doesn't necessarily mean that they were good roles for women.
So, as a result of their desire to see more complex, relatable women onscreen, Skidmore and a friend created their own web series, Hotmessmoves. The show helped Skidmore earn a job on Younger, after the duo were invited to punch up dialogue on the pilot.
And now, several years into the show, Skidmore has the opportunity to finally show the messier sides of life, specifically in regards to young, working women. "That's the funniest thing about Kelsey. I'm always pitching these insane stories where she's like, vomiting on the subway, because your girl has lived that," Skidmore says with a laugh, "But we always have to [remember] now she's the head of the company, so it's like, curbing her f*ck ups with her being a boss bitch."
Skidmore credits Younger creator Darren Star for fostering an environment where she has the ability to be imaginative and take risks. "When I was like, a little baby writer in Season 2, and Darren Star pitched us that Thad was going to be killed by a beam, I was like, 'What the f*ck?'" the writer recalls, laughing. "I couldn't believe the freedom we had to just literally do whatever."
Seeing the success of those wilder storylines has also given the writers on Younger more freedom to tell the stories they want to see on the small screen, like those about the LGBTQ community, says Skidmore. An episode that she wrote last year, for example, featured a gender queer character as assistant to Lauren, the pansexual PR agent played by Molly Bernard. Says the writer, "I felt like I got to sneak in all of these messages that America wasn't really talking about necessarily, and disguising it in comedy was a real treat. It's a benefit of this show that we get to disguise all of these cutting edge messages within these characters' dialogue."
Though the episode highlighted Skidmore's determination to represent the LGBTQ community, Younger earned some backlash after it aired for having Liza compare her struggle against ageism to a character's identity as gender queer. Overall, however, most of the series' more complex storylines have been well received by audiences. "We have so much more freedom to be like, 'We can complicate this relationship and Kelsey's not gonna be disliked by America if she makes all of these mistakes,'" Skidmore explains.
In fact, she adds, writing for Kelsey specifically — a character who often says no and stands her ground — has encouraged her to let go of her own fear of being disliked. "As a young woman, it's really easy to be like, 'I need to be a people pleaser and play along with the game and make sure that everybody knows I'm easy to work with and I'm a yes woman,'" Skidmore says. But "I think I've made a pledge of [reminding myself], 'No, just really stay true to Ashley Skidmore,' and what I actually feel even if it's going to make me unlikable."
After all, Kelsey doesn't care about being unlikable, and she's managed to take charge of a major publishing house. Clearly, we can all learn a thing or two from Younger's "boss bitches," both onscreen and in the writer's room.
It turns out I was going to make a magic knight, aka a close-range wizard. There are wizard skills that are good for close range. What I did not know is that the Spectral Blade animation is awkward: the character keeps its weapon out when you use this primary skill.
Yes. There are a lot of close quarters builds for Wizards. Tal Rasha Meteors and Firebird's Meteors are two, but the only one that I know of that uses Spectral Blade and can be useful is the DMO Explosive Blast build.
Coincidence, I have just put a SB build together a few days ago and just for fun ("speeds" GR60+, with enough PL and decent gear).
\section{Introduction}
\label{intro}
Multi-vehicle coordination methods are commonly utilized in multiple scenarios. Compared with single-vehicle automated control that considers objective of only the controlled vehicle, multi-vehicle coordination gathers information from vehicles and performs global optimization. Typical scenarios of multi-vehicle control include single-lane platooning~\cite{naus2010string,zheng2016distributed,wang2020controllability}, coordinated lane changing~\cite{wu2020emergency,luo2016dynamic,li2018consensus}, conflict resolution at ramps and bottlenecks~\cite{kato2002vehicle,ntousakis2016optimal,xu2020bi}, and scheduling at intersections~\cite{malikopoulos2018decentralized,bian2019cooperation,yu2019corridor}, etc. Existing research reveals that multi-vehicle coordination has great potential to guarantee driving safety, improve traffic efficiency, and reduce energy consumption~\cite{cai2021formationb,cai2021formationc,chen2021mixed}, compared with single-vehicle control. However, there is a lack of multi-vehicle coordination methods that consider both longitudinal and lateral behavior of vehicles in multi-lane scenarios.
Formation control is a classical problem in multi-agent systems, \textit{e}.\textit{g}.\,robots and unmanned aerial vehicles (UAVs). Agents in a formation share information and cooperatively plan their motion to perform multiple maneuvers, \textit{e}.\textit{g}.\,formation structure switching. Application scenarios of multi-agent formation control include coordinated flying of UAVs~\cite{dong2014time}, searching and exploring tasks of grounded robots~\cite{cheah2009region,macdonald2011multi}, and coordinated operation of underwater vehicles~\cite{li2016receding}, etc. Due to the similar nature of multi-agent systems and multi-vehicle network, it is natural for multi-vehicle control to learn from multi-agent formation control methods.
Formation control for CAVs is a specific case of multi-vehicle coordination in multi-lane scenarios. As an extension of single-lane platoon in multi-lane scenarios, formation control considers vehicles that are driving close to each other as a group, which is also called as a formation or a convoy, and cooperatively plan both of their longitudinal and lateral motion according to the scenarios and demands of vehicles. Existing research regarding on-road formation control for CAVs includes planning for geometric structure switching, formation maintaining control, etc. A vehicle-infrastructure cooperation method is proposed in~\cite{marinescu2012ramp} to organize vehicles to occupy moving slots, so as to perform ramp merging and leaving maneuvers. Communication topology and stable feedback controller are designed in~\cite{marjovi2015distributed} to enable vehicular formations to steadily maintain a desired formation structure and avoid collision. Coordinated lane changing behavior of vehicles are considered in~\cite{navarro2016distributed}, where vehicles are allowed to change lane in a formation. The idea of vehicle-to-target assignment and on-road formation control are combined in~\cite{cai2019multi,xu2021coordinated}, where vehicles are able to fully utilize lane capacity and improve traffic efficiency. Relative motion planning and conflict resolution methods in vehicular formations are proposed in~\cite{cai2021formationa,cai2021formationb} to avoid collision between vehicles during formation structure switching process. Vehicles' preference on lanes is considered in~\cite{cai2021formationc}, and a multi-vehicle relative motion planning method is accordingly proposed to control vehicles to change to their preferred lanes while avoiding collision. The idea of multi-lane formation control has also been extended to multi-lane intersections to cooperatively organize vehicles' longitudinal and lateral behavior~\cite{xu2021coordinated,cai2021multi}. 
Other research regarding on-road formation control include~\cite{zheng2021distance, cao2021platoon, firoozi2021formation}.
Among the previous research about formation control, simulations are often conducted to validate performance of those methods. Simulations on multi-lane road segments indicate that formation control of CAVs in multi-lane scenarios is able to improve traffic efficiency and reduce fuel consumption under high traffic volume~\cite{cai2019multi,cai2021formationa}. Application results in multi-lane ramps and intersections show the ability of the formation control method to be applied to multiple scenarios~\cite{cai2021formationb,cai2021formationc,cai2021multi}. Although existing simulations have already revealed plenty of advantages of formation control methods, there is a lack of experimental validation, which is crucial to harvest the aforementioned theoretical benefits of multi-vehicle formation control.
The main contribution of this paper is that it carries out simulations and experiments to validate the multi-lane formation control method, including functional validation and performance analysis. The results indicate that the formation control method is applicable to multiple traffic scenarios and able to improve formation-structure-switching efficiency compared with the benchmark method.
The rest of this paper is organized as follows. Section~\ref{coord} introduces the coordinated formation control framework. Section~\ref{method} provides the detailed formation control methods in different scenarios. Section~\ref{simexp} carries out simulations and experiments to validate the formation control method. Section~\ref{conc} presents the conclusion of this paper.
\section{Formation Control Framework}
\label{coord}
\begin{figure}
\begin{center}
\includegraphics[width=0.95\linewidth]{figure/bilevel.jpg}
\caption{The relative coordinate system of vehicular formations and projection relationship between relative state and real-world state. }
\label{bilevel}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.95\linewidth]{figure/process.png}
\caption{The coordinated formation control framework. }
\label{process}
\end{center}
\end{figure}
Collision-avoidance of multiple vehicles is hard because vehicles have different driving expectations, and arbitrarily designed traffic rules may result in congestion and limit traffic efficiency. This study proposes a bi-level formation control framework to solve this problem. Some preliminary results of this study have been presented in earlier publications~\cite{cai2021formationa, cai2021formationb, cai2021formationc}.
In the upper level, motion of vehicles is planned in a relative coordinate system (RCS), which describes the relative relationship between vehicles and moves together with the formation in a grid map, as shown in Fig.~\ref{bilevel}. In RCS, position is discretized in both the lateral and longitudinal direction. The discretization distance $d_\text{F}$ is safe enough for both single-lane following and cut-in movement from the adjacent lane. Vehicles move from one relative point to another in RCS, and coordinated path planning methods on grid maps are utilized to plan paths for vehicles and avoid collision. The time that vehicles take to move from one relative point to another is set as the same time cycle $T_\text{F}$. During one time cycle, a vehicle is allowed to move to one of its four neighboring points, which correspond to longitudinal position adjustments and lateral lane changes respectively, or stay at its current position. The movement of vehicles in RCS is synchronous, so as to clearly organize cooperative behavior and avoid collision.
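The grid RCS and its projection to the road can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the grid spacing, lane width, cycle duration, and formation speed are all assumed values.

```python
# Illustrative sketch of the RCS grid and its projection to road (GCS)
# coordinates. All numeric parameters below are assumptions for the example.

D_F = 10.0         # longitudinal grid spacing d_F in metres (assumed)
LANE_WIDTH = 3.5   # lateral lane width in metres (assumed)
T_F = 4.0          # duration of one synchronous move cycle in seconds (assumed)
V_FORMATION = 5.0  # formation reference speed in m/s (assumed)

def rcs_to_gcs(row, lane, t, x0=0.0):
    """Project a relative point (row, lane) at time t to road coordinates.

    The RCS origin travels with the formation at V_FORMATION, so the
    longitudinal road position is the moving origin plus the grid offset.
    """
    x = x0 + V_FORMATION * t + row * D_F  # longitudinal road position
    y = lane * LANE_WIDTH                 # lateral offset (lane centreline)
    return x, y

# During one cycle a vehicle may stay or move to one of four neighbours:
MOVES = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # stay, fwd, back, left, right
```

A vehicle one grid row ahead on the third lane, for example, projects to a point $d_\text{F}$ ahead of the moving origin and two lane widths to the side.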
In the lower level, the series of relative points that vehicles will pass through are used to plan real-world trajectories for vehicles in the geodetic coordinate system (GCS). Vehicles plan their real-world trajectories to travel through these projected points and perform trajectory tracking with spatiotemporal constraints, including geometric constraints on curves and tracking time constraints. An example of a vehicle (marked in green) changing lane to form five-vehicle formation with the other four vehicles is shown in Fig.~\ref{bilevel}.
The working process of the formation control framework is shown in Fig.~\ref{process}. The process starts with the changing of driving scenarios, including changing of lane number, lane topology, vehicles' driving goals, etc. Then, four steps are taken to switch the structure of formations to adapt to the changing scenarios:
\begin{enumerate}
\item \textbf{Target generating and vehicle assignment}. Target points are generated according to the number of lanes and the demands of vehicles. If vehicles have preference on lanes, targets are generated accordingly on the specific lanes, and if not, targets are generated evenly on all the lanes. Vehicles are then assigned to different targets to minimize the assigning cost, \textit{e}.\textit{g}.\, the maximum traveling time or the total covered distance.
\item \textbf{Path planning and conflict resolution}. Paths are planned for vehicles to travel to their assigned targets in the grid RCS. The paths of vehicles may cross or overlap, which may result in collision. Conflict-resolution methods are utilized to adjust or replan paths of vehicles to clear those collision-leading conflicts.
\item \textbf{Trajectory planning with path constraints}. Vehicles should pass through those road points that are projected from the path points in RCS. Smooth trajectories are planned to connect those road points, \textit{e}.\textit{g}.\, B$\acute{\text{e}}$zier curves.
\item \textbf{Multi-stage trajectory tracking control}. Vehicles should arrive at the projected road points under time constraints, in order to follow the motion steps in RCS and avoid collision. Since the trajectory of a vehicle consists of multiple curve segments, multi-stage tracking methods are utilized and control inputs are recalculated once the tracking of one segment is finished, in order to resolve accumulated tracking error.
\end{enumerate}
The above four steps change the state of vehicles to adapt to the changing scenarios, and if the driving scenario changes again, the process restarts. The upper-level planning can be conducted by a centralized planner, such as roadside units or a cloud computation platform, which gathers information from vehicles and the road. The lower-level planning can be conducted in either a centralized or a decentralized way, since the planning of each vehicle is independent.
\begin{remark}
The proposed formation control framework is applicable not only to a fully-CAV environment, but also to a mixture of both CAVs and human driven vehicles (HDVs) and to a fully-HDV environment. HDVs are treated differently according to their connectivity. If an HDV is able to receive control instructions from the centralized planner, the trajectory following process can be conducted by the driver, since the instructions are easy to understand and conduct, like ``changing to the right lane while keeping the current speed'' and ``keeping a given distance from the preceding vehicle''. If an HDV is not able to receive instructions, it will be treated as an obstacle that blocks one lane, and the other vehicles will change the structure of the formation to drive on the remaining lanes.
\end{remark}
\section{Methodologies}
\label{method}
In this section, the methods that are used for multi-vehicle formation control are introduced in detail, and some examples are provided to show the demonstration of formation control on multi-lane roads.
\subsection{Target Generating and Vehicle Assignment}
\label{targetgenerating}
Similar to single-lane platooning where vehicles drive together in a desired geometric structure (following distance), vehicles drive as formations with specific geometric topology on multi-lane roads. Typical formation structures on multi-lane roads include interlaced structure, parallel structure, etc. Existing research reveals that although the parallel structure has higher vehicle density, the interlaced structure is more suitable for multi-lane vehicle coordination considering lane-changing efficiency and driving safety~\cite{marjovi2015distributed, cai2019multi}. Thus, the interlaced structure is chosen as the standard driving structure of vehicular formation on multi-lane road segments. The occupation of vehicles on relative points in these two structures are shown in Fig.~\ref{structure}.
The total number of targets is equal to the number of vehicles in a formation. Besides, vehicles may have preference on lanes, \textit{e}.\textit{g}.\, when vehicles are approaching an intersection and have to change to specific lanes according to their routes in the intersection. Thus, the number of targets on each lane should be equal to the number of vehicles with the according lane preference.
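Generating the interlaced targets of Fig.~\ref{structure}(b) can be sketched as below. The staggering rule (adjacent lanes offset by one grid row) is an assumption meant to match the figure, not code from the paper.

```python
# Illustrative generator of interlaced target points in RCS. Assumed layout:
# laterally adjacent lanes are offset by one grid row, as in the interlaced
# structure of Fig. 3(b).

def interlaced_targets(demand):
    """demand: list giving the number of vehicles requesting each lane,
    e.g. [2, 2, 1] for a three-lane road. Returns (row, lane) targets
    staggered so that laterally adjacent targets never share a row."""
    targets = []
    for lane, n in enumerate(demand):
        for k in range(n):
            # even lanes use rows 0, 2, 4, ...; odd lanes use rows 1, 3, 5, ...
            row = 2 * k + (lane % 2)
            targets.append((row, lane))
    return targets
```

If no lane preference exists, `demand` is simply filled evenly across all lanes.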
After generating the targets in RCS, vehicles should be assigned to specific targets to form a one-to-one matching relationship. An optimal assignment is built by minimizing total cost for all the vehicles to travel to their targets. The cost can be defined in various ways, \textit{e}.\textit{g}.\, the travelling time, the covered distance, etc. After determining the assignment cost between each vehicle and target, the cost matrix $\mathcal{C}$, whose element on the $i$-th row and the $j$-th column represents the cost to assign vehicle $i$ to target $j$, is defined as:
\begin{eqnarray}
\mathcal{C}=[c_{i,j}]\in \mathbb{R}^{N\times N},\ i,j\in \mathbb{N}^+,
\end{eqnarray}
where $N$ is the total number of vehicles.
\begin{figure}
\begin{center}
\subfigure[Parallel structure]{
\includegraphics[width=0.45\linewidth]{figure/parallel.png}
\label{formation1}}
\subfigure[Interlaced structure]{
\includegraphics[width=0.45\linewidth]{figure/interlaced.png}
\label{formation2}}
\caption{Common formation geometric structures. }
\label{structure}
\end{center}
\end{figure}
The assignment matrix $\mathcal{A}$, whose element on the $i$-th row and the $j$-th column represents whether vehicle $i$ is assigned to target $j$, is defined as:
\begin{eqnarray}
&\mathcal{A}=[a_{i,j}]\in \mathbb{R}^{N\times N},\ i,j\in \mathbb{N}^+, \ \\
&a_{i,j}=
\begin{cases}
1, \ \text{if vehicle $i$ is assigned to target $j$},\notag \\
0, \ \text{otherwise}.
\end{cases}
\end{eqnarray}
Vehicles' preference on lanes is accordingly transformed to the preference on targets. The preference matrix $\mathcal{P}$ is defined to describe the preference of vehicles on different targets. The element on the $i$-th row and the $j$-th column of $\mathcal{P}$ represents whether vehicle $i$ prefers target $j$, and $\mathcal{P}$ is defined as:
\begin{eqnarray}
&\mathcal{P}=[p_{i,j}]\in \mathbb{R}^{N\times N},\ i,j\in \mathbb{N}^+, \ \\
&p_{i,j}=
\begin{cases}
1, \ \text{if vehicle $i$ has preference on target $j$},\notag \\
M, \ \text{otherwise},
\end{cases}
\end{eqnarray}
where $M$ is a positive number that is large enough to prevent a vehicle from being assigned to a target. Then, the assignment problem with target preference can be modelled as:
\begin{alignat}{2}
\min\quad & \sum_{i=1}^N\sum_{j=1}^N (c_{i,j}\times p_{i,j}\times a_{i,j}),\label{eqn - lp2}\\
\mbox{s.t.}\quad
&\sum_{i=1}^N a_{i,j}=\sum_{j=1}^N a_{i,j}=1,\notag \\
&i,j\in \mathbb{N}^+.\notag
\end{alignat}
where $[c_{i,j}]$ and $[p_{i,j}]$ are the given cost and preference matrices, and $[a_{i,j}]$ is the assignment matrix to be determined. The assignment problem can be solved using the Hungarian algorithm~\cite{19kuhn1955hungarian}, the simplex algorithm, etc.
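The preference-weighted assignment in~(\ref{eqn - lp2}) can be illustrated with a small self-contained sketch. For clarity it uses exhaustive search over permutations rather than the Hungarian algorithm, which gives the same optimum for small $N$; the matrix values are illustrative, not from the paper.

```python
from itertools import permutations

M = 1e6  # large penalty preventing assignment to a non-preferred target

# c_{i,j}: travel cost of assigning vehicle i to target j (illustrative values)
C = [[2.0, 5.0, 4.0],
     [3.0, 1.0, 6.0],
     [4.0, 2.0, 3.0]]
# p_{i,j}: 1 if vehicle i prefers target j, M otherwise (illustrative values)
P = [[1.0, M,   1.0],
     [1.0, 1.0, M  ],
     [M,   1.0, 1.0]]

def assign(C, P):
    """Exhaustively minimise sum_i c[i][a(i)] * p[i][a(i)] over one-to-one
    assignments. Fine for small N; the Hungarian algorithm replaces this
    O(N!) search with an O(N^3) one for larger formations."""
    n = len(C)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        cost = sum(C[i][perm[i]] * P[i][perm[i]] for i in range(n))
        if cost < best:
            best, best_perm = cost, perm
    return best_perm, best

perm, cost = assign(C, P)  # perm[i] is the target assigned to vehicle i
```

The penalty $M$ makes any assignment violating a preference dominate the total cost, so the minimiser avoids it whenever a feasible alternative exists.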
\subsection{Path Planning and Conflict Resolution}
\label{pathplanning}
\begin{figure*}
\begin{center}
\subfigure[Steps of formation structure switching]{
\includegraphics[width=0.95\linewidth]{figure/e1process.jpg}
\label{steps1}}
\subfigure[Real-world trajectories of vehicles]{
\includegraphics[width=0.95\linewidth]{figure/e1simu.jpg}
\label{traj1}}
\caption{Example of multi-vehicle formation control without lane preference.}
\label{example1}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\subfigure[Steps of formation structure switching]{
\includegraphics[width=0.95\linewidth]{figure/e2process.jpg}
\label{steps2}}
\subfigure[Real-world trajectories of vehicles]{
\includegraphics[width=0.95\linewidth]{figure/e2simu.jpg}
\label{traj2}}
\caption{Example of multi-vehicle formation control with lane preference.}
\label{example2}
\end{center}
\end{figure*}
After building the assignment between vehicles and targets, the paths for vehicles to travel to their targets are then planned. Paths are defined differently from trajectories in this paper: paths connect relative points in RCS and describe vehicles' motion in a coarse-grained manner, while trajectories present the movement of vehicles in GCS. Traditional path planning methods on grid maps, such as A*, can be utilized for single-vehicle path planning~\cite{hart1968formal}. Possible conflicts may exist when vehicles arrive at the same point at the same time, or when their paths cross or overlap during the same time cycle $T_\text{F}$.
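The two conflict types on synchronous grid paths can be checked with a short sketch. This is an illustrative helper, assuming each path is sampled once per cycle $T_\text{F}$; the function name is hypothetical.

```python
# Illustrative detector for the two conflict types on synchronous grid paths.
# A path is a list of (row, lane) cells, one entry per cycle T_F.

def first_conflict(path_a, path_b):
    """Return the first vertex conflict (same cell at the same step) or
    edge conflict (two vehicles swapping cells within one cycle), else None."""
    steps = min(len(path_a), len(path_b))
    for t in range(steps):
        if path_a[t] == path_b[t]:
            return ("vertex", t, path_a[t])
        if t + 1 < steps and path_a[t] == path_b[t + 1] \
                and path_a[t + 1] == path_b[t]:
            return ("edge", t, (path_a[t], path_b[t]))
    return None
```

Conflict-based search expands a tree of constraints from exactly such detected conflicts, re-planning single-vehicle paths until none remain.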
The way to resolve conflicts between vehicles depends on whether vehicles have preference on targets. If vehicles do not have preference on targets, these conflicts can be resolved by letting one vehicle wait at its current relative point so that the other can go first, or by switching the targets of the two vehicles. If vehicles have preference on targets, simply switching the targets of two vehicles may not be a feasible way to resolve conflicts. There are path planning methods in the field of multi-agent pathfinding (MAPF) that can be utilized to solve this problem, \textit{e}.\textit{g}.\, conflict-based searching (CBS)~\cite{sharon2015conflict}, multi-agent A*~\cite{wagner2011m}, etc. The CBS method searches for the global optimal solution in a conflict tree, and its properties guarantee that it returns at least one solution if one exists. The A* family plans paths for vehicles based on a defined priority, and sometimes returns no solution even if one exists. Thus, CBS is chosen as the multi-vehicle path planning method when vehicles have preference on lanes in this study. The time complexity and acceleration methods of CBS are the focus of recent research~\cite{felner2018adding, li2019improved}. Given that CBS searches for solutions in an expanding tree and iteratively replaces the current solution with a better one, one way to bound its running time is to set a time limit. If the global optimal solution is found within the limit, it is returned; if the time limit is reached first, the current local optimal solution is returned instead. If the time limit is reached and no feasible solution has been found, the algorithm returns no solution. More details about the conflict resolution and proofs of the properties can be found in~\cite{cai2021formationb} and \cite{cai2021formationc}.
\subsection{Trajectory Planning and Tracking Control}
\label{trajectoryplanning}
The output of path planning is a series of relative points in RCS. Those points are projected to GCS, and the projected real-world road points are used as inputs for vehicular trajectory planning. There are many types of curves that can be chosen to generate trajectories for vehicles, and the B$\acute{\text{e}}$zier curve is one of the most commonly used~\cite{gonzalez2015review}. Since there are possibly more than two road points for a vehicle to pass through, the whole trajectory consists of several B$\acute{\text{e}}$zier curves, and vehicles perform multi-stage motion control to track the trajectory. In this paper, the cubic B$\acute{\text{e}}$zier curve with four control points is chosen for single-segment trajectory planning.
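A cubic B$\acute{\text{e}}$zier segment with four control points can be evaluated as below. The control-point values in the example are assumed, illustrating a lane change of one lane width over 30~m; the tangent-alignment choice is a common heuristic, not necessarily the paper's.

```python
# Cubic Bezier segment with four control points, as used for each
# single-segment trajectory. Control-point values are illustrative.

def cubic_bezier(p0, p1, p2, p3, s):
    """Evaluate B(s) = (1-s)^3 p0 + 3(1-s)^2 s p1 + 3(1-s) s^2 p2 + s^3 p3
    for s in [0, 1], with points given as (x, y) tuples."""
    u = 1.0 - s
    b0, b1, b2, b3 = u**3, 3*u**2*s, 3*u*s**2, s**3
    return (b0*p0[0] + b1*p1[0] + b2*p2[0] + b3*p3[0],
            b0*p0[1] + b1*p1[1] + b2*p2[1] + b3*p3[1])

# Example lane change: the start and end tangents are kept longitudinal by
# placing the inner control points ahead of p0 and behind p3 on their lanes.
p0, p3 = (0.0, 0.0), (30.0, 3.5)
p1, p2 = (10.0, 0.0), (20.0, 3.5)
midpoint = cubic_bezier(p0, p1, p2, p3, 0.5)
```

Because the curve starts at $p_0$ with tangent toward $p_1$ and ends at $p_3$ with tangent from $p_2$, placing $p_1$ and $p_2$ on the start and end lanes keeps the heading longitudinal at both segment boundaries, which eases concatenation of segments.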
Similar to most research on trajectory tracking control, this paper decouples the lateral and longitudinal control of vehicles. For lateral control, a PID-based preview controller is designed to calculate the steering angle of the front wheels. For longitudinal control, the coordinated motion of vehicles is divided into stages according to the synchronous moves in RCS, and the next stage begins when all the vehicles have finished the current stage. It is important to notice that the time for vehicles to finish tracking one B$\acute{\text{e}}$zier-curve segment may differ, and early-arriving vehicles keep the desired speed of the formation while waiting for the other vehicles. Thus, there may be some straight lines connecting those B$\acute{\text{e}}$zier-curve segments. Optimal control is a typical way to solve a tracking control problem with fixed time constraints. However, if the time $T_\text{F}$ is not known or restricted, a linear feedback controller is also a qualified candidate for longitudinal control. For more details about formation longitudinal control, please refer to~\cite{cai2021formationb}.
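The linear feedback option for longitudinal control can be sketched as follows. The gains, saturation limit, and function name are assumptions for illustration, not values from the paper.

```python
# Minimal sketch of a linear feedback longitudinal controller: early-arriving
# vehicles hold the formation speed, while the rest regulate position and
# speed error along the current segment. Gains and limits are assumed.

K_P, K_V = 0.8, 1.2  # position and speed feedback gains (assumed)
A_MAX = 2.0          # acceleration limit in m/s^2 (assumed)

def longitudinal_accel(s, v, s_ref, v_ref):
    """Acceleration command driving arc-length position s toward s_ref
    while tracking the reference speed v_ref, saturated at +/- A_MAX."""
    a = K_P * (s_ref - s) + K_V * (v_ref - v)
    return max(-A_MAX, min(A_MAX, a))
```

A vehicle exactly on its reference receives zero command, while large errors saturate at the comfort-oriented acceleration bound.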
\subsection{Examples}
\label{examples}
Examples of multi-vehicle formation control are shown in Fig.~\ref{example1} and Fig.~\ref{example2}. In Fig.~\ref{example1}, five vehicles start from a three-lane interlaced structure and need to switch to a two-lane structure because the third lane becomes undrivable. The steps of formation structure switching are shown in Fig.~\ref{steps1} and the real-world trajectories are presented in Fig.~\ref{traj1}. During the first step, vehicle 2, vehicle 3 and vehicle 5 change to the left lane, and vehicle 4 changes its speed to adjust longitudinal relative position. During the second step, all the vehicles but vehicle 1 adjust their longitudinal relative position and a five-vehicle two-lane interlaced formation structure is formed. In Fig.~\ref{example2}, six vehicles start from a three-lane interlaced structure and need to change to their preferred lanes according to their routes. The routes of each vehicle and steps of formation structure switching are shown in Fig.~\ref{steps2} and the real-world trajectories are presented in Fig.~\ref{traj2}. During the first step, vehicle 1, vehicle 3, vehicle 5 and vehicle 6 change to their preferred lane, and vehicle 4 changes its speed to adjust longitudinal relative position. During the second step, all the vehicles but vehicle 2 adjust their longitudinal relative position and a six-vehicle formation structure where all the vehicles are on their preferred lanes is formed.
\section{Simulations and Experiments}
\label{simexp}
In this section, simulations and experiments are carried out to validate the function of the proposed method and compare its performance with benchmark methods.
\subsection{Performance Analysis Simulations}
\label{simu}
To evaluate the performance of the proposed multi-vehicle coordinated path planning algorithm, this study conducts simulations in a three-lane scenario where vehicles start from a standard interlaced structure and have to change to their desired lanes. The simulations are conducted with different numbers of vehicles with lane preferences. Targets are also generated according to the interlaced structure; an example is presented in Fig.~\ref{targetexample}. The multi-agent priority-based A* algorithm is chosen as the benchmark method. The vehicle that is more forward in the formation is assigned higher priority. Vehicles perform single-vehicle path planning in priority order, and the paths of already-planned vehicles are treated as obstacles for later vehicles. To compare performance fairly, both A* and CBS are utilized within RCS, and the starting state, goal state, and moving rules are all set identically. The width of the grid map is set to 3, allowing vehicles to drive on the three lanes. The length of the grid map is set so that all vehicles can move within the largest range limited by the positions of the targets.
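A minimal sketch of this priority-based benchmark may help. The Python code below is an illustrative simplification, not the exact benchmark implementation: a cell is a (lane, longitudinal cell) pair, each step allows staying, moving straight, or a diagonal lane change (an assumed move set), and swap conflicts between adjacent vehicles are not checked.

```python
from heapq import heappush, heappop

def plan_with_priority(starts, goals, width, length, max_t=50):
    """Plan vehicles one at a time in priority order; cells occupied by
    higher-priority paths become space-time obstacles for later vehicles.
    Returns one path per vehicle, or None if some vehicle cannot be planned
    under this priority ordering."""
    reserved = set()                       # (t, lane, cell) already taken
    moves = [(0, 0), (0, 1), (-1, 1), (1, 1)]  # stay, straight, diag L/R
    paths = []
    for s, g in zip(starts, goals):
        h = lambda p: abs(p[0] - g[0]) + abs(p[1] - g[1])  # Manhattan
        open_ = [(h(s), 0, s, [s])]        # A* frontier: (f, t, cell, trail)
        visited, path = set(), None
        while open_:
            f, t, p, trail = heappop(open_)
            if p == g:
                path = trail
                break
            if t >= max_t or (t, p) in visited:
                continue
            visited.add((t, p))
            for dl, dc in moves:
                q = (p[0] + dl, p[1] + dc)
                if not (0 <= q[0] < width and 0 <= q[1] < length):
                    continue
                if (t + 1, q[0], q[1]) in reserved:
                    continue
                heappush(open_, (t + 1 + h(q), t + 1, q, trail + [q]))
        if path is None:
            return None                    # priority scheme failed
        for t, p in enumerate(path):
            reserved.add((t, p[0], p[1]))
        for t in range(len(path), max_t):  # vehicle then waits at its goal
            reserved.add((t, g[0], g[1]))
        paths.append(path)
    return paths
```

Because earlier vehicles never replan, a single bad priority ordering can make a solvable instance fail, which is exactly the failure mode the CBS comparison probes.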
\begin{figure}
\begin{center}
\includegraphics[width=0.95\linewidth]{figure/targetexample.png}
\caption{An example of target distribution in the simulations. ``L'', ``S'', and ``R'' represent that the targets should be occupied by vehicles that are turning left, going straight, and turning right respectively. The rectangles in green represent the targets that are matched with vehicles, and the hollow ones are not. This figure presents an example of the target distribution of six vehicles.}
\label{targetexample}
\end{center}
\end{figure}
The success rate of the algorithm, and the maximum and total steps that vehicles take to change the formation structure, are chosen as evaluation indexes. Given that the proposed method consumes more time as the number of vehicles grows, the maximum running time of CBS is set to 2 seconds and 10 seconds, respectively. Results are presented in Table~\ref{simutable1}, which lists, for each number of vehicles, the total number of cases, the number of failures, the success rate, the maximum and total steps taken to change the formation structure, and the time consumption of the three methods. The results indicate that the priority-based A* algorithm uses the least time but yields a high number of failed cases. In contrast, the CBS methods significantly improve the success rate and perform better in reducing the steps that vehicles must take. The comparison between the two CBS configurations with different time bounds indicates that CBS can handle the hard cases where A* fails if given more computation time. Note that the number of failures with five vehicles is larger than with six vehicles due to the size of the grid map. According to the simulation results, the number of vehicles is set to five or six in the following experiments.
\begin{table}[htbp]
\centering
\caption{Results of the simulations}
\label{simutable1}
\begin{tabular}{m{1em}m{2em}m{3.6em}m{2em}m{3em}m{2em}m{2em}m{2em}}
\toprule
Veh. No.& Total No. & Methods & Failed No. & Success rate & Max. steps & Total steps & Time \,(s)\\
\midrule
\multirow{4}*{5} & \multirow{4}*{243} &A*& 18 & 92.59\% & 3.86 & 12.34 & 0.01\\
\cmidrule{3-8}
&& CBS\,(2\,s) & 2 & 99.18\% & 3.66 & 10.72 & 0.02\\
\cmidrule{3-8}
&& CBS\,(10\,s) & 2 & 99.18\% & 3.66 & 10.72 & 0.02\\
\midrule
\multirow{4}*{6} & \multirow{4}*{729} &A*& 8 & 98.90\% & 4.08 & 15.36 & 0.01 \\
\cmidrule{3-8}
&& CBS\,(2\,s) & 1 & 99.86\% & 3.78 & 13.16 & 0.04\\
\cmidrule{3-8}
&& CBS\,(10\,s) & 0 & 100.00\% & 3.78 & 13.16 & 0.05\\
\midrule
\multirow{4}*{7} & \multirow{4}*{2187} &A*& 58 & 97.35\% & 4.49 & 18.86 & 0.01 \\
\cmidrule{3-8}
&& CBS\,(2\,s) & 15 & 99.31\% & 4.06 & 16.10 & 0.09\\
\cmidrule{3-8}
&& CBS\,(10\,s) & 1 & 99.95\% & 4.06& 16.10 & 0.09\\
\midrule
\multirow{4}*{8} & \multirow{4}*{6561} &A*& 270 & 95.88\% & 4.95 & 22.61 & 0.01 \\
\cmidrule{3-8}
&& CBS\,(2\,s) & 307 & 95.32\% & 4.42 & 19.28 & 0.16\\
\cmidrule{3-8}
&& CBS\,(10\,s) & 80 & 98.78\% & 4.42 & 19.25 & 0.16\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Experimental Platform}
\label{plat}
The proposed multi-lane formation control method is validated on the connected micro-vehicle experimental platform built by Tsinghua University. The platform consists of multiple traffic scenarios, including multi-lane road segments and intersections, and supports experiments on single-vehicle automated control, multi-vehicle coordinated control, human-machine cooperation, and more. Figures of the platform and experimental vehicles are presented in Fig.~\ref{platformfig}. The boards with different colors on top of the vehicles are used by cameras to locate and identify vehicles driving on the platform. More details of the design and function of the platform are introduced in \cite{yang2021multi}.
\begin{figure}
\begin{center}
\subfigure[Experimental platform]{
\includegraphics[width=0.95\linewidth]{figure/city.jpg}
\label{city}}
\subfigure[Experimental vehicles]{
\includegraphics[width=0.95\linewidth]{figure/experimentvehicle.jpg}
\label{expvehicle}}
\caption{The experimental platform and vehicles. }
\label{platformfig}
\end{center}
\end{figure}
The experiments in this paper are carried out on the central four-lane road segment, where originally two lanes serve each direction of traffic. To validate the proposed multi-lane formation control method, multiple lanes are needed for vehicles to form the desired structures. Thus, the original lane utilization is adjusted to a one-direction four-lane case, as shown in Fig.~\ref{laneutilization}. Under the adjusted lane utilization, vehicles drive from left to right on all four lanes and are allowed to change lanes between any two adjacent lanes, which means that vehicles may sometimes cross the double-yellow solid lines.
A computer with an Intel Core i7-8700 CPU @ 3.2\,GHz and 16\,GB RAM serves as the centralized planner: it gathers information from the platform and vehicles, calculates control inputs, including the desired speed and steering angle, and sends them back to the vehicles. Actuators then control the motors so that the vehicles follow the instructions. The end-to-end time delay of the system is around $100\,\mathrm{ms}$ and is neglected in the control process.
\begin{figure}
\begin{center}
\subfigure[Original lane utilization]{
\includegraphics[width=0.46\linewidth]{figure/laneuse1.jpg}
\label{laneuse1}}
\subfigure[Adjusted lane utilization]{
\includegraphics[width=0.46\linewidth]{figure/laneuse2.jpg}
\label{laneuse2}}
\caption{Lane utilization of the platform. }
\label{laneutilization}
\end{center}
\end{figure}
\subsection{Experimental Validation in Multiple Scenarios}
\label{vali}
\begin{figure}
\begin{center}
\subfigure[Formation switching from one-lane structure to three-lane structure]{
\includegraphics[width=0.99\linewidth]{figure/s13.png}
\label{scenario_a}}
\subfigure[Formation switching from three-lane structure to two-lane structure]{
\includegraphics[width=0.99\linewidth]{figure/s32.jpg}
\label{scenario_b}}
\subfigure[Formation switching from two-lane structure to one-lane structure]{
\includegraphics[width=0.99\linewidth]{figure/s21.png}
\label{scenario_c}}
\subfigure[Formation switching at on-ramp merging area]{
\includegraphics[width=0.99\linewidth]{figure/smerge.jpg}
\label{scenario_d}}
\subfigure[Formation switching at off-ramp leaving area]{
\includegraphics[width=0.99\linewidth]{figure/sleave.jpg}
\label{scenario_e}}
\subfigure[Formation switching in emergency leaving scenario]{
\includegraphics[width=0.99\linewidth]{figure/semergency.png}
\label{scenario_f}}
\subfigure[Formation switching in cooperative lane changing scenario]{
\includegraphics[width=0.99\linewidth]{figure/sswitching.jpg}
\label{scenario_g}}
\caption{Scenarios of functional validation experiments. }
\label{scenarios}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\subfigure[Formation switching from one-lane structure to three-lane structure]{
\includegraphics[width=0.95\linewidth]{figure/1-3.png}
\label{snap1_a}}
\subfigure[Formation switching from three-lane structure to two-lane structure]{
\includegraphics[width=0.62\linewidth]{figure/3-2.png}
\label{snap1_b}}
\subfigure[Formation switching from two-lane structure to one-lane structure]{
\includegraphics[width=0.325\linewidth]{figure/2-1.png}
\label{snap1_c}}
\caption{Snapshots of field experiments in lane number changing scenarios.}
\label{snapshots1}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\subfigure[Formation switching at on-ramp merging area]{
\includegraphics[width=0.48\linewidth]{figure/merge.jpg}
\label{snap2_a}}
\subfigure[Formation switching at off-ramp leaving area]{
\includegraphics[width=0.48\linewidth]{figure/leave.jpg}
\label{snap2_b}}
\subfigure[Formation switching in emergency leaving scenario]{
\includegraphics[width=0.48\linewidth]{figure/emergency.png}
\label{snap2_c}}
\subfigure[Formation switching in cooperative lane changing scenario]{
\includegraphics[width=0.48\linewidth]{figure/switching.jpg}
\label{snap2_d}}
\caption{Snapshots of field experiments in formation structure switching scenarios.}
\label{snapshots2}
\end{center}
\end{figure*}
For the functional validation experiments, seven scenarios are designed, covering lane number changes, ramp merging and leaving, and formation structure switching according to the demands of vehicles, as shown in Fig.~\ref{scenarios}. In scenario (a), the number of lanes changes from one to three, and vehicles widen the formation to occupy all drivable lanes and fully utilize lane capacity. In scenarios (b) and (c), the number of lanes decreases, and vehicles have to narrow their structure to pass the bottleneck. In scenarios (d) and (e), a vehicle attempts to join or leave a formation at the ramp area. In scenarios (f) and (g), some vehicles are in an emergency or prefer specific lanes, so the formation has to switch its structure according to the vehicles' demands.
In scenarios (a), (b), (c), (e), (f), and (g), all vehicles start from a standard interlaced formation structure. In scenario (d), five vehicles start from a three-lane interlaced structure, and another vehicle starts on the rightmost lane. The desired speed of the vehicles in all scenarios is set to the same value, $0.1\,\mathrm{m/s}$. The safe discretized distance $d_\text{F}$ is $0.5\,\mathrm{m}$. The steps that vehicles need to take and snapshots of the experiments are presented in Fig.~\ref{snapshots1} and Fig.~\ref{snapshots2}. Videos of the experiments are available at: {\color{blue}https://github.com/cmc623/Formation-control-experiments}.
The presented snapshots and videos indicate that the proposed multi-lane formation control method is able to control multiple vehicles to form or switch to a desired geometric structure. In multiple scenarios, the centralized planner calculates the desired formation structure and control inputs for the vehicles, and the vehicles switch the formation structure step by step without collision, which validates the applicability of the proposed method in multiple scenarios.
Moreover, although this paper only carries out simulations and experiments in the multi-lane straight-road scenario, it is apparent that the formation control method can be applied to more complex scenarios, \textit{e}.\textit{g}.,\,intersections and roundabouts. The lane preference in these complex scenarios can be transformed into target preference in the formation, thus enabling the application of multi-vehicle formation control.
\section{Conclusions}
\label{conc}
This paper introduces a multi-vehicle formation control method for multi-lane scenarios and carries out simulations and experiments to validate its performance. The formation control framework is provided and the key methodologies are introduced in detail, including target generation, vehicle assignment, relative path planning, conflict resolution, trajectory planning, and tracking control of vehicles. Simulations are conducted with different numbers of vehicles, and the performance of the utilized CBS method is compared with the priority-based A* method. Experiments are carried out in multiple scenarios on a micro-vehicle experimental platform. The results of the simulations and experiments indicate that:
\begin{enumerate}
\item the formation control method, which utilizes CBS for relative motion planning, outperforms the A* method in success rate and in the maximum and total steps taken by vehicles, across different numbers of vehicles.
\item the formation control method is able to organize vehicles to switch formation structure in multiple traffic scenarios, either with or without vehicles' preferences on lanes.
\end{enumerate}
The future directions of this research include extending the proposed formation control framework to more complex scenarios and carrying out more experiments to validate its performance.
\bibliographystyle{IEEEtranTIE}
Is the Emperor Penguin the Largest Bird in Antarctica?
The emperor penguin is the largest penguin species currently waddling on Earth, standing about 3 feet 7 inches (1.1 meters) and weighing around 110 pounds (50 kg). But the fossilized bones of a prehistoric species unearthed in Antarctica in 2014 indicate that a much larger penguin species roamed the Earth some 37 million years ago.
Remains of Palaeeudyptes klekowskii, dubbed the "colossus penguin," indicate that these birds would have weighed in at 250 pounds (115 kg) and stood about 6 feet 7 inches (2 m) tall, measured from toe to beak tip. These giant penguins could hunt fish underwater for long periods of time, with the ability to dive deeper than today's penguins, and to stay submerged for as long as 40 minutes.
More on the colossus penguin:
The fossils were found near Seymour Island in an area of Antarctica with an abundance of penguin bones. Back then, the region was warmer, attracting many penguin species to live there together.
This find is the most complete fossil ever uncovered from the Antarctic, and features the longest known fused ankle-foot bone, in addition to parts of a wing bone.
In 2007, another giant penguin species was found in Peru. Known as Icadyptes salasi, this penguin lived around 36 million years ago and stood about 5 feet (1.5 m) tall.
By: Christopher Michel
37 million years ago, Antarctica was home to a penguin species that stood around 6'7" – far taller than today's emperor penguins (shown here).
Q: How to programmatically rotate a button by 360 degrees on iPhone? How can a button be rotated by 360 degrees over a duration of 30 seconds, and then stop rotating?
A: A 360 rotation animation is only a few lines of code with Core Animation.
CABasicAnimation *rotate =
[CABasicAnimation animationWithKeyPath:@"transform.rotation"];
rotate.byValue = @(M_PI*2); // Change to - angle for counter clockwise rotation
rotate.duration = 30.0;
[yourButton.layer addAnimation:rotate
forKey:@"myRotationAnimation"];
By using the byValue property you are doing a relative rotation of 360 degrees to whatever rotation was there before (compared to explicitly specifying the from and to values). This means that the above code will rotate the button 360 degrees even if it is already rotated. All the answers that explicitly specify an end transform are assuming that the button isn't already rotated.
The above example is as small as possible to do just what you asked for ("be rotated by 360 degree for duration of time 30 sec"). If you want to have more control you can optionally make the animation start and/or stop slowly by specifying a timing function
rotate.timingFunction =
[CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];
If you haven't already added QuartzCore.framework to your project you will need to do so. Also #import <QuartzCore/QuartzCore.h> at the top of your source file.
A: CABasicAnimation* animation = [CABasicAnimation animationWithKeyPath:@"transform.rotation.z"];
animation.fromValue = [NSNumber numberWithFloat:0.0f];
animation.toValue = [NSNumber numberWithFloat:-2.0f * M_PI]; // one full turn
animation.duration = 30;
animation.repeatCount = 1;
animation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];
[yourButton.layer addAnimation:animation forKey:@"MyAnimation"];
There is no need to pre-rotate the layer's model transform: after a full 2π rotation the layer ends where it started. Should work as needed! Don't forget to link QuartzCore.framework!
A: Well, I just used keyframe animations — here self.buttonImageView is the view you need to rotate:
[UIView animateKeyframesWithDuration:0.5 delay:0 options:UIViewKeyframeAnimationOptionCalculationModeLinear animations:^{
[UIView addKeyframeWithRelativeStartTime:0.0 relativeDuration:0.5 animations:^{
self.buttonImageView.transform = CGAffineTransformMakeRotation(M_PI);
}];
[UIView addKeyframeWithRelativeStartTime:0.5 relativeDuration:0.5 animations:^{
self.buttonImageView.transform = CGAffineTransformMakeRotation(2 * M_PI);
}];
} completion:^(BOOL finished) {
[self.map setCenterCoordinate:self.map.userLocation.location.coordinate animated:YES];
self.buttonImageView.transform = CGAffineTransformIdentity;
}];
Two keyframes are needed because a single target of CGAffineTransformMakeRotation(2 * M_PI) is numerically the identity, so UIView would not animate at all.
A: [UIView animateWithDuration:.30f animations:^{
btnGallery.transform = CGAffineTransformRotate(CGAffineTransformIdentity, -M_PI);
}];
\section{Introduction}
\label{se:intro}
To discover or understand data-generating mechanisms, graphical models have been used as a fundamental tool, and the problem of finding their structure from data has received much attention in many fields, including the social sciences \cite{Bol89}, bioinformatics \cite{RS07}, and neuroinformatics \cite{LDBB06}.
Among a variety of models, structural equation models (SEMs) and Bayesian networks (BNs) have been widely used to analyze causal relationships in empirical studies \cite{Bol89,Pea00,SGS01}. However, the full structure of the model, {\em i.e.}, a causal ordering and connection strengths, cannot be identified in most cases without prior structural knowledge when only the covariance structure of the data is used for estimation, as in almost all conventional methods. Recently, it has been reported that the non-Gaussian structure of data overcomes this identifiability problem in the case of linear directed acyclic graphs (DAGs) \cite{SHHK06,SHKW09}. With their algorithms (the LiNGAM algorithms), if the external influences are non-Gaussian, the structure can be uniquely estimated from observed data alone without any prior knowledge (under an assumption of acyclicity).
However, the applicability of the LiNGAM algorithms might be restricted in some real-world applications because of their relatively strong assumption of linear acyclicity over individual variables. For example, when an unobserved confounder exists between exogenous variables or sink variables, a DAG structure is no longer appropriate. Thus, it would be useful to develop a non-Gaussianity-based framework for estimating the structure of a more general class of models, such as chain graphs \cite{Lau96}, so as to handle situations in which the DAG assumptions are not satisfied. Note that there is a non-Gaussianity-based method that takes unobserved confounders into account \cite{HSKP08}. However, since this method must model the unobserved variables explicitly, its computational cost is prohibitively high (indeed, only two or three variables were treated empirically in that paper).
In this paper, we propose a non-Gaussian variant of chain graphs, which includes that of linear acyclic graphs as a special case, and present an algorithm for estimating this model. The algorithm finds an ordering of subsets of variables by iteratively evaluating the independence between a variable subset and the residuals obtained when the remaining variables are regressed on it. In addition to its applicability to chain graphs, it is empirically verified that estimation by the proposed algorithm works reasonably well compared with existing algorithms when applied to DAGs. However, this procedure must evaluate independence exponentially many times in the number of variables. Therefore, we propose an approximate approach whose cost does not depend on the number of variables (although its accuracy may) and which can be applied to large-scale graphs. The performance will be illustrated using artificial and real-world datasets.
The remainder of this paper is organized as follows.
In Sect.~\ref{se:glingam}, we first introduce a linear non-Gaussian acyclic model for sets of variables (the GroupLiNGAM model).
Then in Sect.~\ref{se:est_glingam}, we present an algorithm for (directly) estimating the GroupLiNGAM model. However, this approach would be inefficient for large graphs. Therefore, in Sect.~\ref{se:approx}, we give an approximate approach, based on the algorithm of Sect.~\ref{se:est_glingam}, that can be applied to large graphs. The algorithms are illustrated and their performance examined using artificial data in Sect.~\ref{se:sim} and real-world data in Sect.~\ref{se:real}. Finally, we give conclusions in Sect.~\ref{se:conclusion}.
\section{GroupLiNGAM model}
\label{se:glingam}
In this paper, we consider a non-Gaussian variant of chain graphs, which we call the {\em GroupLiNGAM model}.
Assume that observed data are generated from a process represented graphically by a chain graph on random variables $\boldsymbol{x}$ of dimension $p$. Let us express this chain graph by a $p\times p$ adjacency matrix $B=\{b_{ij}\}$, where every $b_{ij}$ represents the connection strength from a variable $x_j$ to another $x_i$ in the chain graph. Also, let $K(l)$ ($l=1,\ldots,m$, $m\leq p$) be ordered blocks, {\em i.e.}, disjoint subsets of variables, so that no variables in later subsets influence any variable in earlier subsets and $K(1)\cup\cdots\cup K(m)=V$, where $V:=\{1,\ldots,p\}$ is the indices set of the variables.\footnote{This definition is a generalization of the one of a DAG. That is, this is actually the definition of a DAG if all the subsets consist of one element, {\em i.e.}, $m=p$.} The index of the subset, {\em i.e.}, $l$, that $x_i$ belongs to will be referred as $l(i)$. Moreover, assume that the relations between variables in different subsets are linear. Without loss of generality, each observed variable $x_i$ is assumed to have zero mean.
Then, the GroupLiNGAM model is represented as
\begin{equation}
\label{eq:glingam1}
x_i = \sum_{l(j)\leq l(i),i\neq j} b_{ij} x_j + e_i,
\end{equation}
where $e_i$ is an external influence. All external influences $e_i$'s are non-Gaussian random variables with zero means and non-zeros variances, and independent of each other in different blocks. Alternatively, we write the model \eqref{eq:glingam1} in a matrix form:
\begin{equation}
\label{eq:glingam2}
\boldsymbol{x} = B\boldsymbol{x}+\boldsymbol{e},
\end{equation}
where $B$ can be permuted by simultaneous equal row and column permutations to be lower block-triangular due to the acyclicity of disjoint subsets in chain graphs \cite{WL90,CW96}. Moreover, if we represent the model \eqref{eq:glingam2} as
\begin{equation}
\label{eq:glingam3}
\boldsymbol{x} = A\boldsymbol{e},
\end{equation}
the matrix $A~(:=(I-B)^{-1})$ (called a mixing matrix) also becomes lower block-triangular (and with all unities in the diagonal). Note that, in the case of $m=p$, {\em i.e.}, the DAG case, the model \eqref{eq:glingam3} defines the independent component analysis (ICA) model \cite{HKO01} since the components of $\boldsymbol{e}$ are independent and non-Gaussian. Since the ICA model is identifiable, the model \eqref{eq:glingam3} in this case ($m=p$) is also identifiable, which is the key idea of the original LiNGAM algorithm \cite{SHHK06} (we call it ICA-LiNGAM in the later part of this paper).
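In the DAG special case ($m=p$), once the variables are arranged in a causal order, $B$ is strictly lower-triangular and $\boldsymbol{x}=(I-B)^{-1}\boldsymbol{e}$ can be evaluated by forward substitution without forming the inverse explicitly. A small Python sketch (the example matrix in the test is arbitrary, not from the paper):

```python
def solve_causal_order(B, e):
    """Solve x = B x + e, i.e. x = (I - B)^{-1} e, when B is strictly
    lower-triangular (the DAG case with variables in causal order).
    Each x_i depends only on already-computed x_j with j < i."""
    x = []
    for i in range(len(e)):
        x.append(e[i] + sum(B[i][j] * x[j] for j in range(i)))
    return x
```

This mirrors why the mixing matrix $A$ has unit diagonal: each variable receives its own external influence with coefficient one.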
Now, let us consider an illustrative example in which the model is represented by (cf.~Figure~\ref{fig:example}~(a))
\begin{equation}
\label{eq:example}
\begin{split}
x_1 &= e_1,\\
x_2 &= b_{21}x_1 + e_2,\\
x_3 &= b_{32}x_2 + e_3,\\
x_4 &= b_{42}x_2 + b_{43}x_3 + e_4,\\
x_5 &= b_{51}x_1 + b_{54}x_4 + e_5,
\end{split}
\end{equation}
where unobserved confounders $f$ and $g$ exist between $e_1$ and $e_2$ and between $e_4$ and $e_5$, respectively, as
\begin{equation*}
e_1 = c_1 f + d_1
\hspace{2mm}\text{and}\hspace{2mm}
e_2 = c_2 f + d_2,
\end{equation*}
and
\begin{equation*}
e_4 = c_4 g + d_4
\hspace{2mm}\text{and}\hspace{2mm}
e_5 = c_5 g + d_5.
\end{equation*}
$d_1$, $d_2$, $d_4$ and $d_5$ are independent of each other. Note that, in this case, the assumption of the LiNGAM algorithms, {\em i.e.}, that external influences are independent of each other, is not satisfied. In fact, $x_1$ and $x_4$ are dependent on $x_2$ and $x_5$, respectively, because $f$ and $g$ are not observed, and a DAG representation is no longer appropriate. The ordered blocks for the example \eqref{eq:example} are $K(1)=\{1,2\}$, $K(2)=\{3\}$ and $K(3)=\{4,5\}$ (cf.~Figure~\ref{fig:example}~(b)).
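As an illustration, the example model above is easy to simulate. The Python sketch below draws samples including the unobserved confounders $f$ and $g$; all coefficient values are arbitrary illustrative choices (not from the paper), and cubed uniform variates provide the required non-Gaussianity:

```python
import random

def simulate_example(n, seed=0):
    """Draw n samples from the illustrative chain-graph model above,
    including the unobserved confounders f and g. Coefficients are
    arbitrary non-zero illustrative values."""
    rng = random.Random(seed)
    nz = lambda: rng.uniform(-1.0, 1.0) ** 3   # non-Gaussian disturbance
    b21, b32, b42, b43, b51, b54 = 0.8, -0.7, 0.5, 0.9, 0.6, -0.4
    c1, c2, c4, c5 = 1.0, 0.8, 0.7, 1.2
    data = []
    for _ in range(n):
        f, g = nz(), nz()
        d1, d2, d4, d5, e3 = nz(), nz(), nz(), nz(), nz()
        e1, e2 = c1 * f + d1, c2 * f + d2      # confounded by f
        e4, e5 = c4 * g + d4, c5 * g + d5      # confounded by g
        x1 = e1
        x2 = b21 * x1 + e2
        x3 = b32 * x2 + e3
        x4 = b42 * x2 + b43 * x3 + e4
        x5 = b51 * x1 + b54 * x4 + e5
        data.append((x1, x2, x3, x4, x5))
    return data
```

Because $f$ links $e_1$ and $e_2$, samples generated this way violate the independent-external-influence assumption of the plain LiNGAM algorithms, which is exactly the situation the GroupLiNGAM model is meant to cover.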
\begin{figure}[t]
\centering
\includegraphics[keepaspectratio=true,width=.75\linewidth]{example_.eps}
\caption{Illustrative example of a chain graph.}
\label{fig:example}
\end{figure}
\section{Model estimation}
\label{se:est_glingam}
In this section, we address the estimation of the GroupLiNGAM model from data. In the following parts, we will refer the subset of variables corresponding to $S\subseteq V$ as $\boldsymbol{x}_S$. Also, we denote by $V\setminus S$ the complementary set of $V$ with respect to $S$, and by $\boldsymbol{x}_{\bar{S}}$ the subset of variables corresponding to $V\setminus S$.
\subsection{Identifying exogenous variables using non-Gaussianity}
\label{ss:exogenous}
Recently, it has been reported that the non-Gaussianity of external influences enables direct estimation of an ordering of the variables from data \cite{SHKW09} (DirectLiNGAM). The key insight is that, once an exogenous variable is identified, we can remove its component from the other variables without violating the original ordering of the residuals obtained when the remaining variables are regressed on the exogenous variable. Here, we show that an analogous insight holds for sets of variables. To this end, we first need the following assumption:
\begin{definition}[correlation-faithfulness]
The distribution of $\boldsymbol{x}$ is said to be correlation-faithful to the generating graph if correlation and conditional correlation of $x_i$ are entailed by the graph structure, i.e., the zeros/non-zeros status of $b_{ij}$, but not by specific parameter values of $b_{ij}$.
\end{definition}
This concept is motivated by the faithfulness \cite{SGS01}. Also, we give the definition of the exogenous set of variables as follows.
\begin{definition}[exogenous set]
\label{def:exogenous}
Let the partition of the variables $\boldsymbol{x}$ be $\boldsymbol{x}=(\boldsymbol{x}_S,\boldsymbol{x}_{\bar{S}})$ such that $\boldsymbol{x}_S$ and $\boldsymbol{x}_{\bar{S}}$ are not empty. Then, the subset of variables $\boldsymbol{x}_S$ is said to be exogenous against $\boldsymbol{x}_{\bar{S}}$ if the corresponding partition of the matrix $B$ has the following form:
\begin{equation*}
B = \left[\begin{array}{cc}B_{S}&0\\B_{\bar{S},S}&B_{\bar{S}}\end{array}\right].
\end{equation*}
\end{definition}
Note that each variable in the exogenous set is not necessarily an exogenous variable. That is, the variables in an exogenous set may be influenced by each other inside of the set. Also note that the submatrix of the mixing matrix $A$ corresponding to $B_S$ is full-rank because the covariance matrix $\Sigma_S$ of $\boldsymbol{x}_S$ is also full-rank from the correlation-faithfulness assumption.
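Given an adjacency matrix, Definition~\ref{def:exogenous} can be checked mechanically. The Python sketch below encodes the example chain graph of the previous section with arbitrary non-zero strengths (indices 0--4 stand for $x_1,\ldots,x_5$); the coefficient values are assumptions for illustration:

```python
def is_exogenous_set(B, S):
    """Check Definition 2: the index set S is exogenous iff no variable
    outside S influences a variable inside S, i.e. the block
    B[S, complement-of-S] is identically zero."""
    p = len(B)
    s_bar = [j for j in range(p) if j not in S]
    return all(B[i][j] == 0 for i in S for j in s_bar)

# Adjacency matrix of the example model (arbitrary non-zero strengths).
B_example = [[0] * 5 for _ in range(5)]
(B_example[1][0], B_example[2][1], B_example[3][1],
 B_example[3][2], B_example[4][0], B_example[4][3]) = (
    0.8, -0.7, 0.5, 0.9, 0.6, -0.4)
```

Note that $\{x_1,x_2\}$ passes the check even though $x_2$ depends on $x_1$ inside the set, matching the remark that members of an exogenous set may influence each other.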
Now, we give two lemmas and a corollary that form the basis of the algorithm proposed in this paper.
\begin{lemma}
\label{le:exogenous}
Assume that the input data $\boldsymbol{x}$ follow the GroupLiNGAM model \eqref{eq:glingam2}, and that the distribution of $\boldsymbol{x}$ is correlation-faithful to the generating graph. Let $\boldsymbol{r}^{(S)}$ be the residual vector when $\boldsymbol{x}_{\bar{S}}$ is regressed on $\boldsymbol{x}_S$ for $S\subset V$ {\em :} $\boldsymbol{r}^{(S)}=\boldsymbol{x}_{\bar{S}}-\Sigma_{S,\bar{S}}^T\Sigma_S^{-1}\boldsymbol{x}_S$, where
\begin{equation*}
\Sigma = \left[\begin{array}{cc}
\Sigma_S & \Sigma_{S,\bar{S}} \\
\Sigma_{S,\bar{S}}^T & \Sigma_{\bar{S}}
\end{array}\right]
\end{equation*}
is the covariance matrix of $(\boldsymbol{x}_S,\boldsymbol{x}_{\bar{S}})$. Then, a set of variables $\boldsymbol{x}_S$ is exogenous if and only if $\boldsymbol{x}_S$ is independent of its residual $\boldsymbol{r}^{(S)}$.
\end{lemma}
\begin{proof}
First, assume that $\boldsymbol{x}_S$ is exogenous. Then, one can write $\boldsymbol{x}_{\bar{S}}=A_{\bar{S},S}A_S^{-1}\boldsymbol{x}_S + \bar{\boldsymbol{e}}_{\bar{S}}^{(S)}$, where
\begin{equation*}
A = \left[\begin{array}{cc}A_S&0\\A_{\bar{S},S}&A_{\bar{S}}\end{array}\right]
\end{equation*}
is the coefficient matrix in Eq.~\eqref{eq:glingam3}. From the definition of model~\eqref{eq:glingam3}, $\bar{\boldsymbol{e}}_{\bar{S}}^{(S)}=A_{\bar{S}}\boldsymbol{e}_{\bar{S}}$ and $\boldsymbol{x}_S$ are mutually independent. Also, since $\Sigma_{S,\bar{S}}^T=A_{\bar{S},S}A_S^{-1}\Sigma_S$, $A_{\bar{S},S}A_S^{-1}$ is equivalent to the regression coefficients when $\boldsymbol{x}_{\bar{S}}$ is regressed on $\boldsymbol{x}_S$. Therefore, $\boldsymbol{r}^{(S)}$ is equivalent to $\bar{\boldsymbol{e}}_{\bar{S}}^{(S)}$. As a result, $\boldsymbol{x}_S$ and $\boldsymbol{r}^{(S)}$ are mutually independent.
Next, assume that $\boldsymbol{x}_S$ is independent of $\boldsymbol{r}^{(S)}$. Then, since $\boldsymbol{x}_S$ is independent of $\boldsymbol{e}_{\bar{S}}$, all elements of the regression coefficient matrix when $\boldsymbol{x}_S$ is regressed on $\boldsymbol{e}_{\bar{S}}$, {\em i.e.}, $A_{S,\bar{S}}$, are zeros, which means all elements of the upper-right part of $B$, {\em i.e.}, $B_{S,\bar{S}}$, are also zeros. From the correlation-faithfulness assumption and the definition of exogenous sets, $\boldsymbol{x}_S$ is exogenous.
\end{proof}
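For the simplest case $|S|=1$, the residual construction of Lemma~\ref{le:exogenous} fits in a few lines of pure Python. This is an assumed illustrative sketch; a practical implementation would use a linear-algebra library and handle arbitrary $|S|$ via $\Sigma_S^{-1}$:

```python
def residuals_single(data, s):
    """Compute r^{(S)} of Lemma 1 for S = {s}: regress every other
    variable on x_s by least squares and return the residuals.
    `data` is a list of equal-length sample tuples."""
    n, p = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(p)]
    var_s = sum((row[s] - mean[s]) ** 2 for row in data) / n
    beta = [sum((r[s] - mean[s]) * (r[j] - mean[j]) for r in data) / n / var_s
            for j in range(p)]
    return [tuple(row[j] - mean[j] - beta[j] * (row[s] - mean[s])
                  for j in range(p) if j != s)
            for row in data]
```

By construction the residuals are uncorrelated with $x_s$; the lemma's point is the stronger statement that full statistical independence holds exactly when $\{x_s\}$ is exogenous.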
\begin{lemma}
\label{le:residual}
Assume the assumptions of Lemma~\ref{le:exogenous} and that a set of variables $\boldsymbol{x}_S$ is exogenous. Let $\boldsymbol{r}^{(S)}$ be the residual vector when $\boldsymbol{x}_{\bar{S}}$ is regressed on $\boldsymbol{x}_S$ for $S\subset V$. Then, GroupLiNGAM models hold both for $\boldsymbol{x}_S$ and $\boldsymbol{r}^{(S)}$, respectively {\em :} $\boldsymbol{x}_S=B_S\boldsymbol{x}_S+\boldsymbol{e}_S$ and $\boldsymbol{r}^{(S)}=B^{(S)}\boldsymbol{r}^{(S)}+\boldsymbol{e}^{(S)}$, where $B_S$ and $B^{(S)}$ are matrices that can be permuted to be block lower-triangular by simultaneous row and column permutations, and elements of $\boldsymbol{e}_S$ and $\boldsymbol{e}^{(S)}$ are non-Gaussian and mutually independent in different blocks, respectively.
\end{lemma}
\begin{proof}
Without loss of generality, assume that $B$ in the GroupLiNGAM model \eqref{eq:glingam2} is already permuted to be lower block-triangular (which means $A$ is also lower block-triangular with all unities in the diagonal). First, it is straightforward from Def.~\ref{def:exogenous} that $B_S$ is lower block-triangular. Next, since $\boldsymbol{x}_S$ is exogenous, the regression coefficients when $\boldsymbol{x}_{\bar{S}}$ is regressed on $\boldsymbol{x}_S$ become $A_{\bar{S},S}A_S^{-1}$. Therefore, removing the effects of $\boldsymbol{x}_S$ from $\boldsymbol{x}_{\bar{S}}$ by least-squares estimation is equivalent to setting all elements of the first $|S|$ columns of $A$ to zeros. This means that the residuals $\boldsymbol{r}^{(S)}$ are not influenced by $\boldsymbol{x}_S$ because of the correlation-faithfulness assumption. As a result, we again obtain a lower block-triangular mixing matrix with all unities in the diagonal, $A^{(S)}(=A_{\bar{S}})$, for $\boldsymbol{r}^{(S)}$.
\end{proof}
\begin{corollary}
\label{co:order}
Assume the assumptions in Lemma~\ref{le:residual}. Denote by $l_S(i)$ and $l_{\boldsymbol{r}^{(S)}}(i)$ the indices of the ordered subsets encoded by the chain graphs on $\boldsymbol{x}_S$ and $\boldsymbol{r}^{(S)}$, respectively. Recall that $l(i)$ denotes the index of the ordered subsets encoded by the chain graph on $\boldsymbol{x}$. Then, the orderings of the subsets of $\boldsymbol{x}_S$ and $\boldsymbol{r}^{(S)}$ are respectively equivalent to those of the corresponding original subsets of variables, i.e., $l_S(i_1)<l_S(i_2)\Leftrightarrow l(i_1)<l(i_2)$ and $l_{\boldsymbol{r}^{(S)}}(i_1)<l_{\boldsymbol{r}^{(S)}}(i_2)\Leftrightarrow l(i_1)<l(i_2)$.
\end{corollary}
\begin{proof}
As described in the proof of Lemma~\ref{le:residual}, the adjacency matrices (and the mixing matrices) of the GroupLiNGAM models on $\boldsymbol{x}_S$ and $\boldsymbol{r}^{(S)}$ are equal to the corresponding parts of those of the GroupLiNGAM model on $\boldsymbol{x}$. This shows that the orderings of $\boldsymbol{x}_S$ and $\boldsymbol{r}^{(S)}$ are unchanged.
\end{proof}
\begin{algorithm}[t]
\caption{GroupLiNGAM}
\label{alg:glingam}
\begin{algorithmic}[1]
\STATE Given a $p$-dimensional variable vector $\boldsymbol{x}$, the set of its subscripts $V$, and a $p\times n$ data matrix $\mathbf{X}$ of the variables, initialize an ordered list of variable subsets as $K\leftarrow \emptyset$.
\STATE Call $K\leftarrow$ GroupSearch~($V$, $K$, $\mathbf{X}$).
\STATE Construct a lower block-triangular matrix $B$ by following the order in $K$, and estimate the connection strengths $b_{ij}$ (using some conventional covariance-based regression, such as least-squares or maximum-likelihood approaches) on the original variables $\boldsymbol{x}$ and data matrix $\mathbf{X}$.
\end{algorithmic}
\vspace{1mm}
\begin{flushleft}
function~$K \leftarrow$ GroupSearch~($U$, $K$, $\mathbf{X}_U$)
\end{flushleft}
\begin{algorithmic}[1]
\FOR{$S\subset U$}
\STATE Perform least-squares regression of $\boldsymbol{x}_S$ on $\boldsymbol{x}_{U\setminus S}$ (denote the residual vector by $\boldsymbol{r}^{(S)}$ and its residual data matrix by $\mathbf{R}^{(S)}$) and then compute some independence measure $I(S)$ between $\boldsymbol{x}_S$ and $\boldsymbol{r}^{(S)}$, {\em e.g.}, $MI(\boldsymbol{x}_S,\boldsymbol{r}^{(S)})$.
\ENDFOR
\STATE $S_*:=\arg\min I(S)$.
\IF{$I(S_*)\leq \delta$ and $|U|\neq1$}
\STATE Set $\mathbf{X}_{U\setminus S_*}\leftarrow \mathbf{R}^{(S_*)}$.
\STATE Call $K\leftarrow$ GroupSearch~$(S_*,K,\mathbf{X}_{S_*})$
\STATE Call $K\leftarrow$ GroupSearch~$(U\setminus S_*,K,\mathbf{X}_{U\setminus S_*})$
\ELSE
\STATE Append $S_*$ to the end of $K$.
\ENDIF
\end{algorithmic}
\end{algorithm}
Lemma~\ref{le:exogenous} indicates that an exogenous set is identified by evaluating the independence between a set of variables $\boldsymbol{x}_S$ and its residuals $\boldsymbol{r}^{(S)}$. Lemma~\ref{le:residual} implies that the GroupLiNGAM models for the $|S|$-dimensional vector $\boldsymbol{x}_S$ and the $(p-|S|)$-dimensional residual vector $\boldsymbol{r}^{(S)}$ can be handled as new input models, and Lemma~\ref{le:exogenous} can be applied again to each model to derive the next exogenous set of variables. This process can be repeated until no subset of variables can be divided further, and the resulting order of the sets of variable subscripts gives the causal order of the original observed variables according to Corollary~\ref{co:order}.
As the independence measure used in Lemma~\ref{le:exogenous}, the mutual information between the subset of variables and the residuals, {\em i.e.}, $MI(\boldsymbol{x}_S,\boldsymbol{r}^{(S)})$, can be used. There are many options for estimating it from data. In the later experiments, we used an algorithm based on the $k$-nearest-neighbors method \cite{KSG04}.\footnote{We used the MATLAB code available from \texttt{http://www.klab.caltech.edu/$\sim$kraskov/MILCA/} in the experiments.} This method has one tuning parameter, {\em i.e.}, the number of neighbors $kneig$. Although the setup of this parameter is not trivial, the algorithm is known to work well empirically when $kneig$ is set to 3--5\% of the sample size $n$ \cite{KSG04}.
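The KSG $k$-nearest-neighbor estimator itself is involved; as an illustration of what such a measure computes, the following is a much cruder plug-in (histogram) estimate of the mutual information between two scalar variables. This is a sketch for intuition only, not the estimator used in the experiments:

```python
import numpy as np

def mutual_information_hist(x, y, bins=16):
    """Crude plug-in estimate of MI(x; y) in nats from a 2-D histogram
    (for illustration only; the k-NN estimator of Kraskov et al. used
    in the paper is far less biased)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                   # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)     # marginal of x
    py = pxy.sum(axis=0, keepdims=True)     # marginal of y
    nz = pxy > 0                            # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

For independent variables the estimate is close to zero (up to a positive histogram bias), while for strongly dependent variables it is large; in Alg.~1 this role is played by $MI(\boldsymbol{x}_S,\boldsymbol{r}^{(S)})$.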
\subsection{GroupLiNGAM algorithm}
\label{ss:glingam}
Based on the above result, we now present an algorithm to estimate a block causal ordering and the connection strengths in the GroupLiNGAM model under the correlation-faithfulness assumption. The pseudo-code of the algorithm is shown in Alg.~\ref{alg:glingam}.
\begin{figure}[t]
\centering
\includegraphics[keepaspectratio=true,width=.75\linewidth]{alg1_.eps}
\caption{Illustration of Alg.~\ref{alg:glingam} for the example \eqref{eq:example}.}
\label{fig:alg1}
\end{figure}
The algorithm proceeds by recursive calls of the GroupSearch function, which divides a given subset $U$ into two ordered groups. Since an exogenous set is identified by evaluating the independence between a subset of $U$ and its residuals (Lemma~\ref{le:exogenous}), we find such a subset $S_*(\subset U)$ as the one that minimizes some independence measure $I(S)$ (Lines 1--3 in GroupSearch in Alg.~\ref{alg:glingam}). Thus, $U$ is divided into two ordered groups $S_*$ and $U\setminus S_*$. From Lemma~\ref{le:residual}, the GroupLiNGAM models hold for each of $S_*$ and $U\setminus S_*$. Therefore, this procedure is iterated until no further partition can be found, which is judged with a threshold $\delta$ (Lines~8--9 in GroupSearch in Alg.~\ref{alg:glingam}). The finally obtained order of variable subsets is globally consistent, which is guaranteed by Corollary~\ref{co:order}. An illustration of this procedure for the example Eq.~\eqref{eq:example} is shown in Fig.~\ref{fig:alg1}.
Note that Alg.~\ref{alg:glingam} is specialized to the DAG case if we set $\delta=+\infty$. However, the outputs of Alg.~\ref{alg:glingam} and the DirectLiNGAM algorithm are not always the same, because Alg.~\ref{alg:glingam} finds the subset of variables that is exogenous against the remaining variables, while the DirectLiNGAM algorithm identifies a single exogenous variable iteratively (that is, the former uses global information about independence between variables while the latter uses local information). Thus, the accumulation of regression errors in Alg.~\ref{alg:glingam} is expected to be no more than that in the DirectLiNGAM algorithm, which will be illustrated empirically in Sect.~\ref{se:sim}.
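For concreteness, the recursion of GroupSearch can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation: the independence measure is left as a pluggable function `indep`, least-squares regression is done via the normal equations, and the no-split case appends the whole of $U$ as one block:

```python
import itertools
import numpy as np

def residuals(X, S, rest):
    """Least-squares residuals when the rows x_rest are regressed on x_S."""
    Xs, Xr = X[S], X[rest]
    coef = Xr @ Xs.T @ np.linalg.pinv(Xs @ Xs.T)
    return Xr - coef @ Xs

def group_search(U, K, X, indep, delta):
    """Sketch of the GroupSearch function in Alg. 1.
    U: row indices of X still to be ordered; K: ordered list of blocks,
    appended to in place; indep(A, B): any independence measure between
    the rows of A and B (smaller = more independent); delta: threshold."""
    if len(U) == 1:
        K.append(list(U))
        return K
    best = None                          # (I(S), S, U \ S, residual matrix)
    for r in range(1, len(U)):           # every proper nonempty subset of U
        for combo in itertools.combinations(U, r):
            S = list(combo)
            rest = [u for u in U if u not in S]
            R = residuals(X, S, rest)
            I = indep(X[S], R)
            if best is None or I < best[0]:
                best = (I, S, rest, R)
    I_star, S_star, rest, R = best
    if I_star <= delta:
        X = X.copy()
        X[rest] = R                      # x_{U\S*} replaced by its residuals
        group_search(S_star, K, X, indep, delta)
        group_search(rest, K, X, indep, delta)
    else:
        K.append(list(U))                # no further split: U stays one block
    return K
```

With a measure that captures higher-order dependence, the recursion recovers the causal block order on toy LiNGAM data; the exponential subset loop is exactly the cost discussed in the next section.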
\section{Approximate approach for large graphs}
\label{se:approx}
Since Alg.~\ref{alg:glingam} needs to compute the independence between $\boldsymbol{x}_S$ and $\boldsymbol{r}^{(S)}$ exponentially many times ($2^{|U|-1}$, once for every $S\subset U$)\footnote{$U=V$ ($|V|=p$) at the first iteration.} at each iteration (Lines~1--3 in GroupSearch), it can be applied only to medium-sized graphs (consisting of up to around 15 nodes). Here, we propose an approximate approach based on Alg.~\ref{alg:glingam} that is applicable to larger graphs (L-GroupLiNGAM).
The basic idea of the proposed algorithm is as follows. If we observe only a subset of the variables, then some of the unobserved variables may act as confounders for some of the observed variables and, as a result, causal directions between such observed variables become unidentifiable. However, since the definition of our model permits confounders, {\em i.e.}, leaves edges undirected if there exist confounders between the observed variables, we can find the order of blocks that is identifiable from the currently observed variables using Alg.~\ref{alg:glingam}. Therefore, by randomly picking subsets of variables such that the collection of subsets covers all variables and applying Alg.~\ref{alg:glingam} to each subset, we finally obtain a block ordering of all variables in a large graph. The validity of this procedure is guaranteed by the following proposition:
\begin{proposition}
Assume the assumptions in Lemma~\ref{le:residual}. Denote by $\tilde{l}_T(i)$ ($T\subset V$) the ordering of the subsets of variables when only $\boldsymbol{x}_T$ is observed (and the other variables $\boldsymbol{x}_{\bar{T}}$ are not observed). Then, the order $\tilde{l}_T(i)$ is consistent with the one when all variables are observed, i.e., $\tilde{l}_T(i_1)<\tilde{l}_T(i_2)\Rightarrow l(i_1)<l(i_2)$.
\end{proposition}
\begin{proof}
Assume that only $\boldsymbol{x}_T$ is observed for $T\subset V$. If $\tilde{l}_T(i_1)<\tilde{l}_T(i_2)$, then there exists a subset $S\subset T$ exogenous against $T\setminus S$ such that $i_1\in S$ and $i_2\in T\setminus S$. Therefore, one can write $\boldsymbol{x}_S = \tilde{A}_S\tilde{\boldsymbol{e}}_S$ and $\boldsymbol{x}_{T\setminus S}=\tilde{A}_{T\setminus S,S}\tilde{A}_S^{-1}\boldsymbol{x}_S + \tilde{A}_{T\setminus S}\tilde{\boldsymbol{e}}_{T\setminus S}$, where $\boldsymbol{x}_S$ and $\tilde{A}_{T\setminus S}\tilde{\boldsymbol{e}}_{T\setminus S}$ are mutually independent. This means that, if we write $\tilde{\boldsymbol{e}}_S=\sum_{i\in S_1} \boldsymbol{a}_{S,i}e_i$ and $\tilde{\boldsymbol{e}}_{T\setminus S}=\sum_{i\in S_2}\boldsymbol{a}_{T\setminus S,i}e_i$, where $S_1\subseteq S\cup (V\setminus T)$ and $S_2\subseteq (T\setminus S)\cup (V\setminus T)$, then the intersection of $S_1$ and $S_2$ is empty, {\em i.e.}, $S_1\cap S_2 = \emptyset$. This means that all elements of the submatrix $\{a_{ij}\}~(i\in S_1\cup S, j\in V\setminus (S_1\cup S))$ of the mixing matrix $A$ are zeros and, as a result, $l(i_1)<l(i_2)$.
\end{proof}
\begin{algorithm}[t]
\caption{L-GroupLiNGAM}
\label{alg:glingam2}
\begin{algorithmic}[1]
\STATE Given a $p$-dimensional variable vector $\boldsymbol{x}$, the set of its subscripts $V$, a $p\times n$ data matrix $\mathbf{X}$ of the variables, and a cardinality $h$, initialize the list of orders between pairs of variables as $\tilde{k}\leftarrow\emptyset$.
\STATE Compute a random covering $T(i)~(i=1,\ldots,N)$ of the variables with cardinality $h$.
\FOR{$i=1,\ldots,N$}
\STATE Apply Alg.~\ref{alg:glingam}, modified by replacing Line 1 of the GroupSearch function with \eqref{eq:line3}, with $V\leftarrow T(i)$, and add the new orders from its output $K$ to $\tilde{k}$.
\ENDFOR
\STATE Construct a block order $\tilde{K}$ for all variables from $\tilde{k}$.
\STATE Construct a strictly lower block-triangular matrix $B$ by following the order in $\tilde{K}$, and estimate the connection strengths $b_{ij}$ (using some conventional covariance-based regression, such as least-squares or maximum-likelihood approaches) on the original variables $\boldsymbol{x}$ and data matrix $\mathbf{X}$.
\end{algorithmic}
\end{algorithm}
Based on the above result, we now present an algorithm for estimating the GroupLiNGAM model with a large number of variables. The pseudo-code of the algorithm is shown in Alg.~\ref{alg:glingam2}, where $\tilde{k}$ is the list of pairs of variables $(j_1,j_2)$ with orders $l(j_1)<l(j_2)$.
In the algorithm, we first generate a random covering of all variables $T(i)~(i=1,\ldots,N)$ (Line~2 in Alg.~\ref{alg:glingam2}), {\em i.e.}, subsets $T(i)\subset V$ such that $\cup_{i=1,\ldots,N} T(i)=V$, and apply Alg.~\ref{alg:glingam} to each $T(i)$ (Lines~3--5 in Alg.~\ref{alg:glingam2}). Then, in order to reflect the already-known orders $(j_1,j_2)$ $(j_1,j_2\in T(i))$ when choosing $S\subset U$ in Lines~1--3 of the GroupSearch function in Alg.~\ref{alg:glingam}, we replace Line 1 of GroupSearch with the following:
\begin{equation}
\label{eq:line3}
\text{{\bf for} $S\subset U$ s.t.\@ $j_2\in S\rightarrow j_1\notin U\setminus S$ for $(j_1,j_2)\in \tilde{k}$ {\bf do}}
\end{equation}
Also, due to the statistical uncertainty of finite samples, the application of Alg.~\ref{alg:glingam} may generate an output that forms a cycle when combined with previously obtained orders as the iterations of Lines 3--5 in Alg.~\ref{alg:glingam2} continue. In such a case, the inconsistent (old and new) orders need to be removed, {\em i.e.}, we merge the variables ordered by them into a group. Finally, the ordering of variable subsets is constructed from the list of obtained orders. Although this procedure may fail to find some of the block orders, more and more of them are expected to be found, depending on the subsets $T(i)$, as the iterations continue.
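Line 2 of Alg.~2 only requires that the subsets jointly cover $V$; beyond randomness and coverage, the paper does not fix the sampling scheme. One simple realization (an assumption on our part) is rejection sampling:

```python
import numpy as np

def random_covering(p, h, N, seed=0):
    """Draw N random size-h subsets of {0, ..., p-1} whose union covers
    all indices, resampling until coverage holds (one simple reading of
    Line 2 of Alg. 2; not necessarily the authors' exact procedure)."""
    rng = np.random.default_rng(seed)
    while True:
        T = [sorted(rng.choice(p, size=h, replace=False)) for _ in range(N)]
        if set().union(*(set(t) for t in T)) == set(range(p)):
            return T
```

For $N$ appreciably larger than $p/h$, the rejection loop rarely resamples, so the expected cost is close to a single draw.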
\section{Simulations}
\label{se:sim}
In this section, we evaluate the proposed algorithms empirically using artificial datasets. In particular, we focus on (i) the evaluation of the validity of the proposed algorithms for estimating the GroupLiNGAM model (Alg.~\ref{alg:glingam} and Alg.~\ref{alg:glingam2}) and (ii) the comparison of the estimation accuracy of the proposed and existing algorithms (ICA-LiNGAM \cite{SHHK06} and DirectLiNGAM \cite{SHKW09}) in DAG cases.
\begin{figure}[t]
\begin{minipage}{.495\linewidth}
\centering
{\scriptsize $(p=5, n=500)$}\\
\vspace{1mm}
\includegraphics[keepaspectratio=true,width=.55\linewidth]{scatter_5_500_.eps}
\end{minipage}
\begin{minipage}{.495\linewidth}
\centering
{\scriptsize $(p=5, n=1000)$}\\
\vspace{1mm}
\includegraphics[keepaspectratio=true,width=.55\linewidth]{scatter_5_1000_.eps}
\end{minipage}
\begin{minipage}{.495\linewidth}
\centering
\vspace{2mm}
{\scriptsize $(p=10, n=500)$}\\
\vspace{1mm}
\includegraphics[keepaspectratio=true,width=.55\linewidth]{scatter_10_500_.eps}
\end{minipage}
\begin{minipage}{.495\linewidth}
\centering
\vspace{2mm}
{\scriptsize $(p=10, n=1000)$}\\
\vspace{1mm}
\includegraphics[keepaspectratio=true,width=.55\linewidth]{scatter_10_1000_.eps}
\end{minipage}
\caption{Scatter-plots of the estimated $b_{ij}$ by Alg.~\ref{alg:glingam} (vertical axis) versus the generating values (horizontal axis) for combinations of dimensionality $p=(5, 10)$ and the number of samples $n=(500, 1000)$.}
\label{fig:scatter1}
\end{figure}
First, for purpose (i), we created datasets for each combination of the number of variables $p$, the sample size $n$, and the coverage cardinality $h$ (for Alg.~\ref{alg:glingam2}), as follows.\footnote{The way of creating datasets is the same as in \cite{SHKW09}, except that $B$ is block lower-triangular.}
\begin{enumerate}
\item First, a $p\times p$ block lower-triangular matrix $B$ was randomly created so that the standard deviations of the variables owing to their parents (determined from the ordering of the subsets of variables) ranged in the interval $[0.5,1.5]$, where the number of blocks and the maximum number of parents in the network for $B$ were also randomly determined, uniformly from $1$ to $p$. The standard deviations of the external influences $\boldsymbol{e}$ were randomly selected from the interval $[0.5,1.5]$.
\item Next, we generated data with sample size $n$ by independently drawing the external influence variables $\boldsymbol{e}$ from various non-Gaussian distributions with zero mean and unit variance. This was done by generating Gaussian variables $z_i$ with zero mean and unit variance, transforming them as $e_i=\text{sign}(z_i)|z_i|^{q_i}$, where the nonlinear exponents $q_i$ were randomly selected from the interval $[0.5,0.8]\cup[1.2,2.0]$,\footnote{Nonlinear exponents $q_i$ in $[0.5,0.8]$ and $[1.2,2.0]$ give sub-Gaussian and super-Gaussian variables, respectively.} and then standardizing $e_i$ to have zero mean and unit variance.
\item The values of the observed variables $\boldsymbol{x}$ were generated according to the GroupLiNGAM model \eqref{eq:glingam2}. Finally, the order of $\boldsymbol{x}$ was randomly permuted.
\end{enumerate}
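A simplified version of this generation procedure can be written as follows (a sketch: here $B$ is drawn strictly lower-triangular with entries of magnitude in $[0.5,1.5]$, i.e., single-variable blocks with super-Gaussian noise only, rather than the exact variance-targeted block structure of step 1):

```python
import numpy as np

def make_lingam_data(p, n, seed=0):
    """Simplified version of generation steps 1-3: strictly lower-triangular
    B (single-variable blocks), power-transformed non-Gaussian external
    influences, and a random permutation of the variable order."""
    rng = np.random.default_rng(seed)
    # Step 1 (simplified): random strictly lower-triangular B with signs.
    signs = rng.choice([-1, 1], size=(p, p))
    B = np.tril(rng.uniform(0.5, 1.5, size=(p, p)) * signs, k=-1)
    # Step 2: e_i = sign(z_i) |z_i|^{q_i}, then standardize each row.
    q = rng.uniform(1.2, 2.0, size=p)            # super-Gaussian exponents
    z = rng.standard_normal((p, n))
    e = np.sign(z) * np.abs(z) ** q[:, None]
    e = (e - e.mean(axis=1, keepdims=True)) / e.std(axis=1, keepdims=True)
    # Step 3: x = B x + e  <=>  x = (I - B)^{-1} e, then permute the order.
    X = np.linalg.solve(np.eye(p) - B, e)
    perm = rng.permutation(p)
    return X[perm], B[np.ix_(perm, perm)]
```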
The graphs in Fig.~\ref{fig:scatter1} and Fig.~\ref{fig:scatter2} show scatter-plots of the elements of the estimated versus the generating adjacency matrix $B$ (for 10 randomly generated datasets in each case). GroupLiNGAM (Alg.~\ref{alg:glingam}) and L-GroupLiNGAM (Alg.~\ref{alg:glingam2}) were applied to relatively small and large graphs, respectively ($p$$=$$5,10$ (Fig.~\ref{fig:scatter1}) and $p$$=$$50,100$ (Fig.~\ref{fig:scatter2})). The parameters $\delta$ and $kneig$ were set to $1.0\times 10^{-2}$ and $0.05\times n$, respectively. For Alg.~\ref{alg:glingam2}, the number of subsets in a covering, {\em i.e.}, $N$, was set to $50$ in the experiment. Although the estimation occasionally fails depending on the dimensionality $p$, the number of samples $n$, or the coverage cardinality $h$, overall it works reasonably well.
\begin{figure}[t]
\begin{minipage}{.495\linewidth}
\centering
{\scriptsize $(p=50, h=5)$}\\
\vspace{1mm}
\includegraphics[keepaspectratio=true,width=.55\linewidth]{scatter_50_5_.eps}
\end{minipage}
\begin{minipage}{.495\linewidth}
\centering
{\scriptsize $(p=50, h=8)$}\\
\vspace{1mm}
\includegraphics[keepaspectratio=true,width=.55\linewidth]{scatter_50_8_.eps}
\end{minipage}
\begin{minipage}{.495\linewidth}
\centering
\vspace{2mm}
{\scriptsize $(p=100, h=5)$}\\
\vspace{1mm}
\includegraphics[keepaspectratio=true,width=.55\linewidth]{scatter_100_5_.eps}
\end{minipage}
\begin{minipage}{.495\linewidth}
\centering
\vspace{2mm}
{\scriptsize $(p=100, h=8)$}\\
\vspace{1mm}
\includegraphics[keepaspectratio=true,width=.55\linewidth]{scatter_100_8_.eps}
\end{minipage}
\caption{Scatter-plots of the estimated $b_{ij}$ by Alg.~\ref{alg:glingam2} (vertical axis) versus the generating values (horizontal axis) for combinations of dimensionality $p=(50,$ $100)$ and coverage cardinality $h=(5, 8)$ ($n=1000$).}
\label{fig:scatter2}
\end{figure}
Next, for purpose (ii), we created datasets in the same manner as the procedure described in \cite{SHHK06}, which is the same as the above procedure except that each block contains only one element. As described above, GroupLiNGAM is specialized to the DAG case by setting $\delta$ to $+\infty$; thus, in this experiment, $\delta$ was set to $1\times 10^6$ for Alg.~\ref{alg:glingam}. The graphs in Fig.~\ref{fig:errors} show the medians of the numbers of errors, {\em i.e.}, the numbers of elements in the strictly upper-triangular part when the {\em true} connection strength matrix $B$ is permuted according to the orders estimated by the algorithms. The median errors of the algorithms are similar for almost all experimental conditions and, hence, we can say that the estimation by GroupLiNGAM works reasonably well in the DAG case too. Here, we should note again that GroupLiNGAM can be applied not only to DAGs but also to chain graphs, while the existing algorithms (ICA-LiNGAM and DirectLiNGAM) cannot.
\begin{figure}[t]
\begin{minipage}{.495\linewidth}
\centering
\includegraphics[keepaspectratio=true,width=.72\linewidth]{plots_6_.eps}
\end{minipage}
\begin{minipage}{.495\linewidth}
\centering
\includegraphics[keepaspectratio=true,width=.72\linewidth]{plots_10_.eps}
\end{minipage}
\caption{Median numbers of errors in estimated orders by the existing and proposed algorithms when applied to DAG cases (Left: $p=6$ and Right: $p=10$).}
\label{fig:errors}
\end{figure}
\section{Application to real data}
\label{se:real}
To evaluate the applicability of the proposed algorithm (GroupLiNGAM), we analyzed a dataset taken from a sociological data repository on the Internet called General Social Survey.\footnote{\texttt{http://www.norc.org/GSS+Website/}}
The data consisted of six observed variables, $x_1$: father's occupation level, $x_2$: son's income, $x_3$: father's education, $x_4$: son's occupation level, $x_5$: son's education, $x_6$: number of siblings.
The sample size was 1,380. Fig.~\ref{fig:background} shows domain knowledge about their causal relations: $K(1)$$=$$\{1,3,6\}$, $K(2)$$=$$\{5\}$, $K(3)$$=$$\{4\}$ and $K(4)$$=$$\{2\}$.
In this section, we represent such relations by $\{1,3,6\}$$<$$\{5\}$$<$$\{4\}$$<$$\{2\}$ to save space.
Note that if $\{i,j\}$$<$$\{k\}$, $x_i$ and $x_j$ could directly and/or indirectly cause $x_k$, but not vice versa.
In this experiment, Alg.~\ref{alg:glingam} was applied since the number of variables is small.
We tested several numbers of nearest neighbors $kneig=40,50,60,70$ to compute mutual information using the $k$-nearest neighbor approach \cite{KSG04} for GroupLiNGAM.
The estimated networks were not sensitive to the choice of the number of nearest neighbors, and essentially the same results were obtained for all the tested values of $kneig$.
We show the results for $kneig$$=$$50$ in Tab.~\ref{tab:est}, where a smaller threshold $\delta$ for independence gives a network between larger groups of variables.
We first analyzed all the six variables.
The orders estimated by ICA-LiNGAM \cite{SHHK06}, DirectLiNGAM \cite{SHKW09} and GroupLiNGAM are shown in the second block of Tab.~\ref{tab:est}.
Those estimated orders are difficult to interpret since son's income ($x_2$) and/or son's education ($x_5$) could cause father's variables ($x_1$,$x_3$), but not vice versa. The orders are also inconsistent with the temporal ordering of the variables.
\begin{figure}[t]
\centering
\includegraphics[keepaspectratio=true,width=.55\linewidth]{domain_.eps}
\caption{Status attainment model based on domain knowledge, where $\{1,3,6\}$$<$$\{5\}$$<$$\{4\}$$<$$\{2\}$.}
\label{fig:background}
\end{figure}
\begin{table}[!h]
\begin{center}
\begin{tabular}{ll}
Domain knowledge: & $\{1,3,6\}$$<$$\{5\}$$<$$\{4\}$$<$$\{2\}$\\
\hline
\multicolumn{2}{l}{All the six variables analyzed.} \\
ICA-LiNGAM: & $\{5\}$$<$$\{6\}$$<$$\{3\}$$<$$\{1\}$$<$$\{4\}$$<$$\{2\}$\\
DirectLiNGAM: & $\{6\}$$<$$\{2\}$$<$$\{1\}$$<$$\{3\}$$<$$\{4\}$$<$$\{5\}$\\
GroupLiNGAM: & \\
\multicolumn{1}{r}{$\delta$$=$0.500} & $\{6\}$$<$$\{2\}$$<$$\{1\}$$<$$\{4\}$$<$$\{5\}$$<$$\{3\}$\\
\multicolumn{1}{r}{$\delta$$=$0.100} & $\{6\}$$<$$\{2\}$$<$$\{1\}$$<$$\{4,5\}$$<$$\{3\}$\\
\multicolumn{1}{r}{$\delta$$=$0.010} & $\{6\}$$<$$\{2\}$$<$$\{1,3,4,5\}$\\
\multicolumn{1}{r}{$\delta$$=$0.001} & $\{1,2,3,4,5,6\}$\\
\hline
\multicolumn{2}{l}{$x_2$ omitted.} \\
ICA-LiNGAM: & $\{5\}$$<$$\{6\}$$<$$\{3\}$$<$$\{1\}$$<$$\{4\}$\\
DirectLiNGAM: & $\{6\}$$<$$\{1\}$$<$$\{3\}$$<$$\{4\}$$<$$\{5\}$\\
GroupLiNGAM: & \\
\multicolumn{1}{r}{$\delta$$=$0.500} & $\{6\}$$<$$\{1\}$$<$$\{3\}$$<$$\{5\}$$<$$\{4\}$\\
\multicolumn{1}{r}{$\delta$$=$0.100} & $\{6\}$$<$$\{1,3\}$$<$$\{5\}$$<$$\{4\}$\\
\multicolumn{1}{r}{$\delta$$=$0.050} & $\{6\}$$<$$\{1,3\}$$<$$\{4,5\}$\\
\multicolumn{1}{r}{$\delta$$=$0.010} & $\{6\}$$<$$\{1,3,4,5\}$\\
\multicolumn{1}{r}{$\delta$$=$0.001} & $\{1,3,4,5,6\}$\\
\hline
\multicolumn{2}{l}{$x_2$ and $x_6$ omitted.} \\
ICA-LiNGAM: & $\{5\}$$<$$\{3\}$$<$$\{1\}$$<$$\{4\}$\\
DirectLiNGAM: & $\{1\}$$<$$\{3\}$$<$$\{4\}$$<$$\{5\}$\\
GroupLiNGAM: & \\
\multicolumn{1}{r}{$\delta$$=$0.50} & $\{3\}$$<$$\{1\}$$<$$\{5\}$$<$$\{4\}$\\
\multicolumn{1}{r}{$\delta$$=$0.10} & $\{1,3\}$$<$$\{5\}$$<$$\{4\}$\\
\multicolumn{1}{r}{$\delta$$=$0.05} & $\{1,3\}$$<$$\{4,5\}$\\
\multicolumn{1}{r}{$\delta$$=$0.01} & $\{1,3,4,5\}$
\end{tabular}
\end{center}
\caption{Estimated orders of groups.}
\label{tab:est}
\end{table}
Next, we omitted son's income ($x_2$) and analyzed the other five variables. Omitting $x_2$ would not create any unobserved confounder since, according to the domain knowledge, it does not cause any other variable.
The results are shown in the third block of Tab.~\ref{tab:est}.
DirectLiNGAM and GroupLiNGAM found consistent time orderings between father's variables ($x_1$,$x_3$) and son's variables ($x_4$,$x_5$).
Furthermore, GroupLiNGAM found a reasonable ordering between son's variables, {\it i.e.}, son's education ($x_5$) could cause son's occupation level ($x_4$), but not vice versa, whereas DirectLiNGAM failed.
However, the number of siblings ($x_6$) is the top variable in every ordering estimated by DirectLiNGAM and GroupLiNGAM, and thus could cause father's variables ($x_1$,$x_3$), which is not easy to interpret.
We further omitted number of siblings ($x_6$) as well as son's income ($x_2$) and analyzed the other four variables ($x_1$,$x_3$,$x_4$,$x_5$).
Omitting $x_6$ could create an unobserved confounder since it could relate father's variables ($x_1$,$x_3$) and son's variables ($x_4$,$x_5$).
The bottom of Tab.~\ref{tab:est} shows the results.
Every ordering estimated by GroupLiNGAM is consistent with the domain knowledge.
ICA-LiNGAM wrongly estimated that son's education ($x_5$) could cause father's variables ($x_1$,$x_3$), but not vice versa. DirectLiNGAM also gave inconsistent orderings between father's education ($x_3$) and father's occupation ($x_1$) and between son's education ($x_5$) and son's occupation ($x_4$).
In summary, GroupLiNGAM provided orderings more consistent with the domain knowledge than ICA-LiNGAM and DirectLiNGAM did. The reason would be that only GroupLiNGAM allows unobserved confounders.
However, it is not yet very clear why the inclusion of $x_2$ and $x_6$ makes the results difficult to interpret.
One possibility is that $x_2$ and $x_6$ might not fit some assumption of the three discovery methods well, {\em e.g.}, linearity, compared with the other four variables.
\section{Conclusions}
\label{se:conclusion}
In this paper, we proposed the GroupLiNGAM model, a non-Gaussian variant of chain graphs, and presented an algorithm for estimating this model, which is identifiable without any prior knowledge of the structure. Based on the result that an exogenous set is identified by evaluating the independence between a variable subset and the residuals when the remaining variables are regressed on it, the proposed algorithm finds an ordered division of the variables iteratively and identifies an ordering of disjoint subsets of variables. However, since the computational cost grows exponentially with the number of variables, medium-sized graphs are the practical limit of this algorithm. Therefore, in addition, we presented an approximate approach for applying this framework to large graphs. In the experimental part, we evaluated the algorithms empirically and illustrated their applicability using artificial and real datasets.
The algorithm has a tuning parameter $\delta$, which determines when the division into groups should be stopped ($kneig$ is also a tuning parameter in the current implementation; however, this parameter is for the estimation of mutual information by the $k$-nearest-neighbor method \cite{KSG04} and thus is not an essential parameter of our method). For a more exact division into groups, it would be useful to combine our framework with a statistical testing method, such as the bootstrap \cite{ET94}, in the future. Also, in the current implementation, an exponentially large number of subsets needs to be examined when identifying an exogenous set (Lines~1--3 in GroupSearch in Alg.~\ref{alg:glingam}). Therefore, it would be important to develop a more efficient search strategy for this part using some discrete structure.
{\small
Also listed as plaintiffs are 21 federal prisoners. Only one of those, Robert Barroca, is currently in Kentucky. Barroca, 54, is serving a 30-year sentence in the Federal Medical Center in Lexington after pleading guilty in 2004 to multiple drug charges in California. The remaining plaintiffs are scattered in prisons from Georgia to California. None are from this immediate area.
The suit says the prisoners should have access to the environmental impact study documents because they are the ones who will suffer adverse health effects by living in the prison. It calls the prison "pork barrel politics," and says its only purpose is to provide construction and development contracts to constituents of U.S. Rep. Hal Rogers, who has pushed for construction of the prison.
The lawsuit asks the court to declare the Bureau of Prisons' Record of Decision to build the prison a violation of the law and set the decision aside, reopen the comments period and place copies of the environmental impact statement in all federal prisons for inmates to read. It also asks for a permanent injunction to prevent the BOP from moving ahead with construction of the prison until it demonstrates it has complied with the National Environmental Policy Act and the Administrative Procedure Act.
The suit says that the Abolitionist Law Center has "devoted a significant amount of economic resources" fighting for prison reform and environmental justice, and seeks to recover the plaintiffs' "reasonable attorney's fees, costs, expenses, and disbursements associated with this litigation."
The Abolitionist Law Center describes itself as "a public interest law firm organized for the purpose of abolishing class and race based mass incarceration in the United States."
Most of the attorneys on staff and board members are either recent graduates of the University of Pittsburgh School of Law, or professors at the law school. One of its board members, Jihad Abdulmumit, is chairman of the National Jericho Movement, an organization that seeks to free "political prisoners and prisoners of war" incarcerated by the United States. Abdulmumit, of Richmond, Va., classes himself as a political prisoner and prisoner of war because of his membership in the Black Panther Party and the Black Liberation Army in the 1970s. Abdulmumit spent 23 years in prison on charges related to two bank robberies.
It is not clear what the Campaign to Fight Toxic Prisons is. That group describes itself as "a collaboration with the Abolitionist Law Center," however there are no leaders for the group listed on its web page and the web address's owner information is hidden from the public. The Mountain Eagle was unable to find any record of a nonprofit corporation by that name in Pennsylvania, Kentucky, Texas, Louisiana, or on the IRS web site, though the group has a Go Fund Me page that lists it as a charity in Fort Worth, Texas.
The Bureau of Prisons chose a site at Roxana leveled by surface mining as the spot for the prison. Bill Estep bestep@herald-leader.com
Drone video: 10 places to see in The Red River Gorge
Lexington priest tells people to 'get off their tush' and help out-of-work coal miners
Kentucky's first urban Target is here. See what's inside.
By Sydney Momeyer
The urban Target on the University of Kentucky's campus will cover basic needs and have a full grocery market, along with a CVS Pharmacy.
MORE STATE
Bevin, Beshear clash on several issues at farm bureau forum
9-year-old boy pulled from creek dies in 'apparent drowning'
Kentucky student begins trial for threatening comments
Indiana man accused of labor trafficking minors in Kentucky
Q: Efficiently find point of insertion for new point in list of clockwise sorted points (around a central point)

Imagine I have an ordered list of points, arranged around a central point.
I have a new point which I want to include in the list, but maintain the clockwise order around the central point.
The most obvious solution would be to find the angle between the centre and the new point, then loop through the list, calculating the angle between each point and the centre to find the point of insertion. But I believe there is a better way that doesn't require trigonometry (Math.atan2).
I came across a helpful sorting algorithm that manages to faultlessly sort an array of points around a central point using cross products, but I don't know how to rework this for my problem:
public class Vector2ClockwiseComparer : IComparer<Vector2>
{
    public Vector2 center;

    public Vector2ClockwiseComparer(Vector2 center)
    {
        this.center = center;
    }

    public int Compare(Vector2 v0, Vector2 v1)
    {
        if (v0.x - center.x >= 0 && v1.x - center.x < 0)
            return 1;
        if (v0.x - center.x < 0 && v1.x - center.x >= 0)
            return -1;

        if (v0.x - center.x == 0 && v1.x - center.x == 0)
        {
            if (v0.y - center.y >= 0 || v1.y - center.y >= 0)
                return (v0.y > v1.y) ? 1 : -1;
            return (v1.y > v0.y) ? 1 : -1;
        }

        // Compute the cross product of vectors (center -> v0) x (center -> v1).
        var det = (v0.x - center.x) * (v1.y - center.y) -
                  (v1.x - center.x) * (v0.y - center.y);
        if (det < 0)
            return 1;
        if (det > 0)
            return -1;

        // v0 and v1 are on the same ray from the center:
        // the point closer to the center sorts first.
        var d1 = (v0.x - center.x) * (v0.x - center.x) +
                 (v0.y - center.y) * (v0.y - center.y);
        var d2 = (v1.x - center.x) * (v1.x - center.x) +
                 (v1.y - center.y) * (v1.y - center.y);
        return (d1 > d2) ? 1 : -1;
    }
}
Another way of visualizing the problem is to imagine looping through the list as sequential pairs of points, and asking whether the new point lies inside the infinite frustum formed by those two points and the central point (the eye). But is it possible to do that without trigonometry?
A: You can use a CCW (counterclockwise/clockwise direction) test based on the cross product (you already have det) and implement a kind of binary search.
The simplest way I see to avoid the cyclic wrap-around problem is to introduce two fictive points: P[M], the mirror of P[0] through the center, and P[N+1], a copy of the first point at the end of the list. Insert them once and correct the M index when needed.
Compute CCW for the first point and the new point. If it is true, do a binary search in the range 0..M and increment M after insertion. If it is false, do a binary search in the range M..N+1.
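The binary-search idea can be sketched in JavaScript (assumed names throughout; `compareCW` mirrors the C# comparer from the question, and the sketch assumes the list was sorted with that same comparer around the same center, so its starting boundary matches; for other starting points the fictive-point trick above still applies):

```javascript
// Comparator around `center`, mirroring the C# comparer in the question:
// half-plane split first, then cross product, ties broken by distance.
function compareCW(center, a, b) {
  const ax = a.x - center.x, ay = a.y - center.y;
  const bx = b.x - center.x, by = b.y - center.y;
  if (ax >= 0 && bx < 0) return 1;
  if (ax < 0 && bx >= 0) return -1;
  if (ax === 0 && bx === 0) {
    if (ay >= 0 || by >= 0) return ay > by ? 1 : -1;
    return by > ay ? 1 : -1;
  }
  const det = ax * by - bx * ay; // cross product, no trigonometry
  if (det < 0) return 1;
  if (det > 0) return -1;
  // Same ray from the center: the closer point sorts first.
  return ax * ax + ay * ay > bx * bx + by * by ? 1 : -1;
}

// O(log n) insertion: binary search for the first index whose point
// sorts after `p`, then splice `p` in. Assumes `points` is already
// sorted with compareCW around the same center.
function insertClockwise(points, center, p) {
  let lo = 0, hi = points.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (compareCW(center, points[mid], p) < 0) lo = mid + 1;
    else hi = mid;
  }
  points.splice(lo, 0, p);
  return lo; // index where p was inserted
}
```

Because the comparator uses only subtractions, multiplications, and comparisons, the whole insertion stays trigonometry-free, replacing the O(n) angle scan with O(log n) comparisons (plus the O(n) splice).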
Courthouse is a busy place on Government Day as students get in-person civics lessons about county government operations
The staffers of the Maries County Extension Office were pleased with this year's government day, which was hosted for the Belle and Vienna government students who were given an opportunity to see local government in action. They also were treated to lunch courtesy of local businesses.
PHOTOS BY LAURA SCHIERMEIER
A group of seniors from Vienna High School (left) spent some time at the county commission last Tuesday for Government Day. Brenda Johnson was the adult leader who took them to the different county offices. Belle High School government students (photo right) were bussed to Vienna last week for Government Day, sponsored by the Maries County Extension Council. Sara Stratman volunteered to lead the group to the county offices. They are pictured visiting the county commission meeting.
Posted Wednesday, October 20, 2021 12:00 am
OSAGE COUNTY— About 40 high school seniors from both Belle High School and Vienna High School attended Maries County Government Day last Tuesday at the courthouse in Vienna. The Maries County Extension Council sponsored the event.
The students visited all of the individual county offices to hear about the important functions those offices perform at the county government level. They also sat in on an actual Associate Circuit Court session to see the work done by Judge Kerry Rowden and Prosecuting Attorney Anthony "Tony" Skouby.
There was a tour of the sheriff's office, lunch on the courthouse lawn, and in the afternoon a demonstration by the sheriff's drug dog.
Two groups of students, one from Maries R-1 and one from Maries R-2, came into the Maries County Commission room. Presiding Commissioner Victor Stratman talked to them about the duties and responsibilities of being a county commissioner. The commissioners are responsible for all aspects of keeping the courthouse running.
Each county official makes the budget for their office. It is up to the county commission to put all of the requests for funding together with the money the county receives from taxes and fees. Stratman said they have to balance the budget and it's not easy to do. Sometimes they have to make cuts they don't want to.
Western District Commissioner Ed Fagre said as associate commissioner, he and Eastern District Commissioner Doug Drewel are responsible for keeping the county's roads in driving condition. They both have a road crew who take care of the roads, grading and gravel and all that is required. In times of flooding or heavy rain or snow, the road crews have to work long hours keeping the roads passable. There are 440 miles of county roads in Maries County the road districts are responsible for. The county's roads are gravel, except for the Old Highway 63 sections by the river that were abandoned in the 1980s by MoDOT when Highway 63 was realigned from that area. Fagre said Road One had the asphalt worked on and it cost $160,000.
The county is responsible for all of the equipment needed for the road districts and the county offices. They have to pay insurance on everything.
Each year every county official is required to take 20 hours of training. It used to be in-person at conferences, but with the Covid-19 pandemic, they have been taking it online. Stratman said they have to do it or sacrifice $2,000 of their salary.
The commissioners are subject to all state laws. The commission meetings must be open to the public and they can go into closed session only for certain reasons. They must document what they did in the closed meetings because the Sunshine Law allows the people of Missouri to know what their elected officials are doing and how they are spending their tax dollars. The county must follow all the laws of the state and federal government.
The commissioners told the students that anything that is purchased for the county, "the bill comes through here." The county's annual budget is over $4 million.
And, then there are the meetings. Besides the two county commission meetings held each week, Stratman as presiding commissioner also attends a wide variety of other meetings and is on boards in which Maries County has an interest, such as MRPC, the TAC committee, MOCA, health department, and more.
County IT Manager Shane Sweno spoke briefly to the students as he was in the commission room when they came in. He said his biggest challenge is getting technology to work in an old building. He's been drilling holes in the courthouse's thick, concrete walls. He's strung about a mile worth of wire connecting computers and equipment.
Stratman said Missouri Ozarks Community Action (MOCA) agency has purchased two mobile office vans to serve every county in its eight-county region. It used CARES Act money when Covid-19 hit and this is a way to serve residents in all its counties.
It will be in Maries County every Wednesday. On the first and third Wednesdays it will be parked on Main Street by the Vienna Library. On the second and fourth Wednesdays it will be parked at the Belle Library. Local people needing assistance are encouraged to come to the mobile van, which will be there from 9 a.m. to 3 p.m. with a half hour lunch break from 12 to 12:30 p.m.
Use Tax
County Clerk Rhonda Rodgers said she is required to put a notice in the newspaper informing citizens that Maries County previously adopted a use tax. A use tax is the equivalent of a sales tax on purchases made from out-of-state vendors by in-state buyers and on certain taxable business transactions. The use tax rate for Maries County currently is 1.5 percent which is equal to the total local sales tax rate. Certain purchases from out-of-state vendors will become subject to an expansion of the use tax effective Jan. 1, 2023.
Fagre said county voters approved the use tax when Jim Kleffner was the presiding commissioner. It passed on the first attempt, Fagre said, adding Maries County was one of very few counties which achieved this. Some counties still have not received voter approval for a use tax.
With the rise in online shopping, the use tax is needed to collect taxes on those sales.
Rodgers said she was informed the county's insurance provider, MOPERN, will no longer carry cyber and information breach liability insurance. None of the insurance carriers are covering this anymore because of internet breaches and ransomware.
An insurance representative of Missouri General Insurance Agency, St. Louis, was at the meeting asking to bid on the county's property and casualty insurance. He asked for copies of the county's current insurance plans. Fagre said it would be a lot of work to put together copies of the current plans. They are satisfied with the current providers. For him, he sees no reason to change insurance companies.
Transportation Priorities
Stratman said MRPC's Bonnie Prigge and MoDOT Meramec Area Engineer Preston Kramer are scheduled to come to the county commission meeting to discuss the county's transportation priorities.
He spoke about the county priorities they chose last year. One of them, replacing the bridge on Highway 28 over the Dry Fork, which is at the bottom of Liberty Hill, has been scheduled for replacement.
The other priorities last year included a new intersection at the junction of Highways 42 and 133, a new intersection at the junction of Highways 63 and 28 E at the airport at Vichy, and a new intersection at the junction of Highways 63 and 28 W, south of Vienna.
Bridge Closed
Stratman said the construction work on the Highway 89 bridge in Osage County is to be bid in November as is the nearby Swan Creek bridge but they are not supposed to be closed at the same time. The Highway 89 bridge will be closed for a 90-day period sometime between January and December 2022.
Halloween and Christmas
Stratman reported the Vienna Chamber of Commerce (VCOC) will be hosting the annual Halloween Trunk or Treat on the courthouse square rather than at G&W as it has been done in past years. Businesses and professional people will be there with treats for the children.
Also, the VCOC is bringing back the "Christmas Around the Square" event to be held Saturday, Nov. 27, which is Small Business Saturday. The Vienna Lions Club plans to sponsor Santa's visit to the courthouse that day. The event begins at 2 p.m. and the chamber anticipates having vendors of all kinds set up around the square and food trucks, too. They would like to have decorations all around the square as well.
Courthouse Custodian Dave Juergens is interested in putting up lights on the courthouse.
Opioid settlement
The county received information about potentially receiving money through the opioid legal settlement. Fagre said they got out of the settlement before because of the reporting requirements. The county would have needed to hire someone to do it and decided it was not worth it. The number of opioid deaths was required to be documented, among other reporting requirements. Drewel said, "It's a lawyer deal and may take years." He added they have to be careful what they sign. If they did sign, it is possible they might no longer be commissioners and the new ones would have to deal with this.
Local Gov U
Stratman said the Missouri Association of Counties (MAC) is offering courses in about everything through the MAC Trust online university called Local Gov U. For the county's workers compensation insurance, MAC is the carrier. If county employees participate in taking these online safety courses, the county can get a better rate on workers comp insurance. Most of the county's workers comp claims are from law enforcement and road workers.
Commission takes under advisement vacating of plat map for Vichy Heights Subdivision
Pentecostal Bridge mediation sounds hopeful to commissioners
OCHD presents new advisory
County's 2021 sales tax revenues up seven percent over 2020 totals
Q: Make soccer prediction from sample data I have this code on a node.js playground. I'm experimenting with brain.js and I want to predict the probability that a team will win a match.
NB: I saw this post where the user was trying to do something similar; I'm not sure if the solution there looks anything like my code.
var playground = require("playground")
const brain = require("brain.js")
const network = new brain.NeuralNetwork();
const samples = [
{ homeTeam: "Lazio",
homeTeamId: 1,
awayTeam: "Inter",
awayTeamId: 2,
matchResult: 1
},
{ homeTeam: "Juventus",
homeTeamId: 3,
awayTeam: "Roma",
awayTeamId: 4,
matchResult: 1
},
{ homeTeam: "Napoli",
homeTeamId: 5,
awayTeam: "Sampdoria",
awayTeamId: 6,
matchResult: 0
},
{ homeTeam: "Udinese",
homeTeamId: 7,
awayTeam: "Monza",
awayTeamId: 8,
matchResult: 1
},
{ homeTeam: "Verona",
homeTeamId: 9,
awayTeam: "Milan",
awayTeamId: 10,
matchResult: 0
}
]
// network.train([
// { input: [0, 0, 0], output: [0] },
// { input: [0, 0, 1], output: [0] },
// { input: [0, 1, 1], output: [0] },
// { input: [1, 0, 1], output: [1] },
// { input: [1, 1, 1], output: [1] }
// ]);
network.train( samples.map( (sample, i) => {
return {
input: [sample.homeTeamId, sample.awayTeamId],
output: [sample.matchResult]
}
//{ input: [1, 2], output: [1] }, // Team 2 wins
//{ input: [1, 3], output: [1] }, // Team 3 wins
//{ input: [2, 3], output: [0] }, // Team 2 wins
//{ input: [2, 4], output: [1] }, // Team 4 wins
//{ input: [1, 2], output: [0] }, // Team 1 wins
//{ input: [1, 3], output: [0] }, // Team 1 wins
//{ input: [3, 4], output: [0] } // Team 3 wins
}));
const output = network.run([2, 3]);
console.log(`Prob: ${output}`);
My idea is to pass the results of each match played over the last two soccer seasons to train the network.
I want to give each football team a static id, so that I can call network.run([3, 1]) to get a prediction for the next match where the team with id 3 plays against the team with id 1.
At the moment the code doesn't seem to work at all: I always get a prediction of "Prob: 0.9995707273483276" in the console log, even if I pass different team ids, e.g. network.run([2, 4]) or network.run([5, 7]).
How can I correctly make a prediction? Am I using the wrong method or missing something?
Would this also be possible with tensorflow.js eventually?
A: We're in the same boat. Use this to get some ideas: https://github.com/lukewduncan/brain-js-predictor/blob/master/routes/index.js
and
const teams = {
Sofapaka: 0,
Tusker: 1,
Posta: 2,
Wazito: 3,
Chemelil: 4,
Nzoia: 5,
'Western Stima': 6,
'Zoo Kericho': 7,
Vihiga: 8,
'Gor Mahia': 9,
'Sony Sugar': 10,
Kakamega: 11,
Port: 12,
Leopards: 13,
Kcb: 14,
Kariobangi: 15,
'Ulinzi Stars': 16,
Mathare: 17
}
const x = teams['Kariobangi']
const y = teams['Nzoia']
// eslint-disable-next-line handle-callback-err
con.query(`SELECT * FROM kenyanleaguematches WHERE result != '' AND result != "" ORDER BY id ASC`, function (err, result, fields) {
  console.log(result.length)
  const scores = []
  for (let i = 0; i < result.length; i++) {
    const score = result[i].result.split(':')
    const homeScore = score[0]
    const awayScore = score[1]
    const whoWon = awayScore === homeScore ? 'd' : homeScore > awayScore ? 'h' : 'a'
    const homeTeam = result[i].home_team
    const awayTeam = result[i].away_team
    // eslint-disable-next-line standard/object-curly-even-spacing
    const r = { input: [teams[homeTeam], teams[awayTeam]], output: [Number(`${whoWon === 'd' ? 0 : whoWon === 'h' ? 1 : 2}`)] }
    scores.push(r)
  }
  // console.log(scores)
  // const data = [
  //   {input: [1, 3], output: [1]},
  //   {input: [2, 1], output: [2]},
  //   {input: [3, 4], output: [1]},
  //   {input: [3, 2], output: [1]},
  //   {input: [2, 4], output: [1]}
  // ]

  // create configuration for training
  const config = {
    iterations: 2000,
    log: true,
    logPeriod: 500,
    layers: [10]
  }
  net.train(scores, config)
  // console.log(net.run([1, 4]))
  // const output = net.run([1, 4])
  // console.log(JSON.stringify(output))
  console.log([x, y])
  const output = net.run([x, y])
  console.log(JSON.stringify(output))
  console.log(Object.keys(output))
  console.log('Done')
})
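A likely reason every network.run call in the question returns the same probability is input scaling: brain.js's NeuralNetwork works with input values in the 0..1 range, so raw team ids like 5 or 10 saturate the activations and every input collapses to the same output. A common fix is to one-hot encode the teams instead of feeding ids. This is a sketch, not brain.js API: `oneHotPair` is a hypothetical helper, and the 0-based ids and team count are assumptions.

```javascript
// Build a 0/1 input vector for a network: one slot per team for the
// home side, followed by one slot per team for the away side.
// Assumption: team ids are 0-based and smaller than numTeams.
function oneHotPair(homeId, awayId, numTeams) {
  const input = new Array(2 * numTeams).fill(0);
  input[homeId] = 1;             // mark the home team
  input[numTeams + awayId] = 1;  // mark the away team
  return input;
}

// Example: 10 teams, home team id 0 against away team id 1.
const sample = { input: oneHotPair(0, 1, 10), output: [1] };
```

With the question's samples this would become something like `network.train(samples.map(s => ({ input: oneHotPair(s.homeTeamId - 1, s.awayTeamId - 1, 10), output: [s.matchResult] })))`, and predictions are then made with the same encoding, e.g. `network.run(oneHotPair(1, 2, 10))`. Distinct matchups now produce distinct inputs, so the outputs can actually differ.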
package org.kuali.rice.krad.labs.inquiry;
import edu.sampleu.travel.dataobject.TravelCompany;
import org.kuali.rice.core.api.criteria.QueryByCriteria;
import org.kuali.rice.core.api.criteria.QueryResults;
import org.kuali.rice.krad.data.KradDataServiceLocator;
import org.kuali.rice.krad.inquiry.Inquirable;
import org.kuali.rice.krad.inquiry.InquirableImpl;
import org.kuali.rice.krad.uif.widget.Inquiry;
import java.util.Collections;
import java.util.Map;
/**
* InquirableImpl for {@link TravelCompanyCategory}. This is a very limited implementation to make the
* demonstration page work. Rather than query, it creates an instance of the data object manually.
*
* @author Kuali Rice Team (rice.collab@kuali.org)
*/
public class TravelCompanyCategoryInquirable extends InquirableImpl implements Inquirable {
@Override
public void setDataObjectClass(Class<?> dataObjectClass) {
if (!TravelCompanyCategory.class.equals(dataObjectClass)) {
            throw new IllegalArgumentException("This Inquirable is only good for class TravelCompanyCategory");
}
}
@Override
public Object retrieveDataObject(Map<String, String> fieldValues) {
TravelCompanyCategory tcc = new TravelCompanyCategory();
tcc.setName("Preferred Providers");
QueryResults<TravelCompany> travelCompanies =
KradDataServiceLocator.getDataObjectService().findMatching(TravelCompany.class,
QueryByCriteria.Builder.create().build());
tcc.setCompanies(travelCompanies.getResults());
return tcc;
}
@Override
public void buildInquirableLink(Object dataObject, String propertyName, Inquiry inquiry) {
inquiry.buildInquiryLink(dataObject, propertyName, TravelCompanyCategory.class,
Collections.<String,String>emptyMap());
}
}
Brake bleeding.....
Discussion in 'Hangar Flying' started by MadProfessor8138, Dec 30, 2019.

MadProfessor8138 (Dec 30, 2019):
I wasn't quite sure which section to post this question to because of topic and aircraft, so my apologies if it doesn't belong here.

So the topic for today is bleeding brakes and why I can't seem to do it on a 1978 Piper Tomahawk. This brake system is driving me crazy and I can't figure out what I'm doing wrong. The problem started with just the right brake on the pilot's side and is now the entire system, left & right, pilot & passenger side, due to me, as I'll explain. I've checked for leaks and cannot find any.

1st attempt at bleeding right side (left side is good at this point):
1. Hooked a pressure bleeder to right caliper and opened the bleeder valve.
2. Applied pressure to system.
3. Pumped both right toe brakes... no pressure in cylinders.
4. Noticed reservoir on firewall was overflowing.
5. Stopped bleeding.

2nd attempt at bleeding right side (left still good at this point):
1. Ran tubing from right caliper to reservoir on firewall.
2. Cracked bleeder and let it gravity bleed for a few minutes; fluid was flowing and then stopped.
3. Pumped both right pedals to try to get fluid flowing again.
4. Got fluid flowing and pedal felt great, then a big purge of air would come out and pedal would go to mush again.
5. Pumped more and fluid was flowing and it seemed all the air was out, but couldn't get a firm pedal.
6. Stopped pumping right pedals and decided to pump parking brake.
7. Pumped parking brake and couldn't get any pressure.
8. Now left brake is mush when it was rock hard.
9. Quit for the day.

3rd attempt at bleeding, all pedals and parking brake are mush... this is the recommended method in the Piper manual:
1. Ran lines from both calipers to reservoir on firewall.
2. Pumped parking brake 50 times and locked it.
3. Cracked right bleeder.
4. Pumped both right pedals and fluid flowed.
5. While pumping, the pedals would firm up, purge a big shot of air, and then turn to mush again.
6. Shut down right side and did the same procedure on the left side.
7. Shut left bleeder off.
8. Pumped parking brake 25 times.
9. Bled right and left calipers again.
10. Pedals & parking brake are mush.
11. Quit for the night.

4th attempt to bleed, all pedals and parking brake are mush:
1. Tubing on both calipers running to reservoir on firewall.
2. Pumped both right pedals while someone cracked the right bleeder and shut the bleeder for me to pump it up again.
3. Started getting good pedal, then it would purge a shot of air and the pedals would go to mush again.
4. Bled both sides the same way and couldn't get a good pedal.
5. Stopped for the day.

Soooo... I've tried:
1. From caliper up with pressure.
2. Gravity bleeding by pumping the pedals.
3. Gravity bleeding per Piper with the parking brake in use.
4. Gravity bleeding with pedals and someone working the bleeders.

The pedals are mush and will only firm up before spitting out a shot of air!!! What am I doing wrong???

I went and bought a suction bleeder in hopes that my 5th attempt will work out better. The suction bleeder will pull fluid from the reservoir to the caliper.

Kevin

TFF (Dec 30, 2019):
I know Cherokees are hard to bleed because the brake line has a high spot in the wing that can hold a bubble. I don't know if that happens with a Tomahawk. I would bleed up from the calipers. I would be ready to suck out the reservoir. Pump up, close off the bleed screw, suck out, repeat a couple more times. Any time you stop pumping, close the bleed screw. Keep doing it the same way. If it takes ten times, it takes ten times.

MadProfessor8138 (Dec 30, 2019):
I'm not going to lie, I'm confused about the way this system is acting. Why is the pedal getting hard right before it purges a shot of air and then goes to mush with nothing but fluid in the line? Why did it pump 1/4 gallon of fluid through the system up to the reservoir and still couldn't get a firm pedal? Why isn't this thing bleeding properly while following the Piper procedure? Sorry, I'm just agitated about missing two perfect weekends of flying while dealing with this issue.

Kevin

Aerowerx (Dec 30, 2019):
Clogged line????

TerryM76 (Dec 30, 2019):
Hey Kevin. We have a '79 model at our A&P school and it's a challenge at times to get the brake system bled. I have not used the pressure bleeding procedure as defined in the Piper maintenance manual. Using only the gravity method will result in proper bleeding. Let's see if I remember this correctly. Starting with the emergency brake system, have a helper open the bleeder screw on a caliper and pump the handle to establish a fluid flow. Have the helper close the bleed screw when he sees fluid streaming out. Pump the handle to build up pressure and hold the handle toward the locked position while the helper opens the bleed screw, releases air & fluid, and then closes the screw. Keep repeating until a good air-free flow of fluid is established when opening the screw, then repeat the process using the caliper on the opposite main wheel. The emergency brake should feel firm.

I don't recall it being necessary to lock the emergency brake handle while doing the mains, and I only go back and check it after bleeding the cylinders at the pedals. As for the pedals, pick a side and start the process all over again by pumping a pedal to build pressure and hold; have the helper open the screw while keeping the pedal pressed. The helper should close the screw while the pedal stays depressed. Again, just repeat until a firm pedal is established. Make sure your helper checks the reservoir frequently or you're in for a long, frustrating day. Hopefully this helped.

Terry

TerryM76 (Dec 30, 2019):
Actually, Piper refers to it as a "Hand/Parking" brake, not an emergency brake as I incorrectly called it.

mcrae0104 (Dec 30, 2019):
Sorry I can't help with your problem. But... you may want to check and see if bleeding brakes is on the owner maintenance list in the FARs and that you hold the appropriate certificate (i.e. private pilot) so that you can make the log entry. I would hate to see you get in any trouble over it. I only mention it since you were asking about studying for your knowledge test recently.

TerryM76 (Dec 30, 2019):
Good suggestion. It's not listed as a Preventive Maintenance item in Appendix A to Part 43. Get an A&P to supervise and sign off your work.

bmcj (Dec 30, 2019):
If I were having this problem, I would follow this easy two-step process: 1. Look over my shoulder to make sure Alan Funt isn't there, then... 2. Ask the folks here at HBA.

MadProfessor8138 (Dec 30, 2019):
TerryM76... I referred to it as the "emergency" brake in my original post, my mistake. I wasn't paying attention when I wrote it, due to thinking about everything that I had done so far and not wanting to leave anything out while typing. I edited the original post to "parking" brake. Sorry for that oversight.

Aerowerx... nope, not a clogged line; fluid is passing with no problem, I just can't seem to get all of the air out.

mcrae0104... I'm not too worried about the legality of the situation; our A&P oversees all of the work and takes care of the legal stuff. There are certain issues where I step back and watch him work, but this isn't one of those times.

bmcj... you left out step 3, which is kicking and cussing a bit when the two methods Piper published to fix the situation don't. At this point I feel more like I'm on an episode of Punk'd rather than Candid Camera.

Kevin

MadProfessor8138 (Dec 30, 2019):
TerryM76... the parking brake is one of the issues that I'm having. I can't seem to get any pressure by pumping it, and instead of pushing air & fluid away from the caliper when pumped, it will actually pull fluid back into the caliper if I pump it or even lock it in the applied position. This part has me confused because it should be pushing fluid out of the caliper when applied, just like the cylinders on the pedals. Theoretically, you should be able to pump the parking brake to circulate fluid through the whole system and not have to pump the toe brakes to bleed those cylinders individually; the parking brake "should" flush them... in theory. Am I not understanding something pertaining to the parking brake's operation?

Kevin

plncraze (Dec 30, 2019):
The parking brake is a pain. One of the guys I used to work with used to pump it a few times while bleeding to try to get fluid into it. It does take a while.

Dana (Dec 30, 2019):
Might be a bad seal on one of the pedals or the parking brake letting air in.

MadProfessor8138 (Dec 30, 2019):
Talked to an I/A down in South Carolina who has worked on quite a few Tomahawks. He recommended that I crack the line at the parking brake and purge the air that way. He said that fluid will just bypass that section if it's full of air and force it out the reservoir. So, as soon as I can get the plane into the main heated hangar, I'll go that route.

Kevin

TerryM76 (Dec 30, 2019):
It does sound like air is trapped, and the maintenance manual tells you how to bleed air out of the system when components are removed or lines disconnected. Keep us posted.

MadProfessor8138 (Dec 30, 2019):
Admittedly, I messed up when I tried to bleed the system following the procedure in the service manual. I had a solid left pedal but needed to bleed the right side. When I pulled the parking brake, per the Piper manual, I transferred air into the left side. Now both sides need to be bled. When there is room in the main heated hangar and I can get my plane in, I will attempt to bleed the system starting at the parking brake. The thought of being soaked in hydraulic fluid in an unheated hangar isn't very appealing to me right now. So, at this point it's hurry up and wait.

Kevin

Dan Thomas (Dec 31, 2019):
Pipers are a royal pain to bleed. To make it worse, they used way too much rubber hose in the system, so the brakes are spongy even when the air is all out. And the wing dihedral creates a high spot in each wing where air will accumulate in the lines, and the only way to move that out is through pressure bleeding from the caliper up. The fluid needs to be moving quickly to carry the air along and out. Pumping the brakes up just compresses that long bubble, and opening the bleed screw just lets it expand again; it forces fluid out without moving along itself. If it does move some, it just moves back up to its high spot when the brake is released. Read the maintenance manual very closely. Piper knew their system was a pain. They give detailed instructions on bleeding. You'll go through a lot of fluid and you'll get really oily. Then you'll go buy a Cessna, which is a delight to bleed. Takes ten minutes max. No muss.

Pops (Dec 31, 2019):
My Cherokee was a pain in getting all of the air out of the system. It takes a lot of work to get the air out of the hand brake.

MadProfessor8138 (Dec 31, 2019):
Dan Thomas... I've followed both bleeding procedures, gravity & pressure, outlined in the Piper service manual. Neither procedure has removed all of the air from the system; I'm still getting air trapped in the parking brake. The I/A that I spoke with today has worked on Tomahawks for years and says the procedure Piper outlined will not work most of the time. Before I could tell him what my system was doing, he described to me the problem I was experiencing, so I'm going to give him credit by saying that he knows what I should do to remedy the situation, because he knew what problem I had before I even told him. His advice to crack the line at the parking brake makes sense because that seems to be where the air is trapped. I'm pretty sure that will fix the problem. I'm just waiting for the main heated hangar to be available before I get soaked with hydraulic fluid.

Kevin

MadProfessor8138 (Jan 20, 2020):
Just thought I would give everyone an update on the Tomahawk brake situation.

Saturday: The main "heated" hangar was finally available... yay. The bleeder screws on both calipers were getting to the point that they were a little rounded, so I went in search of new parts. Nobody had individual bleeders, so I ended up buying a package with assorted bleeder sizes for $12.50. Bleeders were teflon taped and installed, no problem. I also picked up a vacuum bleeder from Harbor Freight for $32. Using the vacuum bleeder, I attempted to bleed the system from the top down... no joy! Then I came up with the bright idea that if the vacuum bleeder could suck fluid in by creating a vacuum, it should be able to push fluid out by plugging the exhaust and pressurizing the container. So, with the exhaust plugged and the container full of fluid, I pressurized it with air; now I'm bleeding from the bottom up. It still took a while, but with help from the guys pumping the master cylinders and working the bleeders while I worked the container, the brakes are done!!! That was a b**ch to get all of the air out of the system; there was a ton of it, and the routing of the lines doesn't help!!!

Sunday: I wasn't too keen on climbing in the seat and stepping on the brakes, knowing my track record with the failed attempts up to this point, so my brake pumper volunteered for the job. He jumped in, stepped on each brake, and even pulled the parking brake. He reported they were all still solid and felt good.

Air, air and more air!!!

Here's my brake pumper... Ed. All-around awesome guy. And here is the airport owner / A&P / bleeder man... Dennie.
The show was a blast. One audience member brought her teddy bear named Cuddles to the show. Turns out she's a straight Christian who lives her spiritual politics. Of course, the bear brought up plushy references from me in the show, which then brought out a furry to me after the show (who pronounced Cookie "The hottest, funniest show I've ever seen."). That's some diversity!
Sadly, the video of this show didn't record, but I just found this camera photo on Flickr. Here I am baking with Lisa Kron (at Ars Nova in NYC), who was just nominated for a Tony for her play Well. Lisa is a real inspiration for me as a solo performer (especially her work with the Lesbian Brothers), and she told a lot of great stories that night, including one about licking the hamburger meat off her dad's hands as a child when they cooked together. She remembers the taste of his wedding ring.
I just got back from KUMC, where I performed Cookie for a packed and enthusiastic audience. Here I'm dancing near the end of the show with a biochemistry professor who dipped me almost to the ground. This was part of their diversity presentations. There are no out GLBT students at the school, and it is unconstitutional for queer folk to marry in Kansas. They continue, however, to be parents.
Almost everyone in the room raised their hand when I asked them if they knew anyone gay. Things are so, so much better than the media makes you think. I told them I want to be watching a "little channel of regular people," which is, I guess, what this b/vlog and the Net are becoming. But Kansas still does have its issues, like the fact that there's a real debate over teaching evolution in the state's schools. My host there told me that 70% of the Kansas population is in KC, Lawrence, and Wichita, and it's the 30% with more land in rural areas that have the political sway because of districting.
I love baking Cookies for the red states.
The nephropids (Nephropidae) are a family of decapod crustaceans of the infraorder Astacidea. They are marine animals that live buried in mud or in holes in rocks, and they are nocturnal in habit.
Description
Body
Nephropids are invertebrates with a hard protective exoskeleton. Like most arthropods, they must moult to grow, which leaves them vulnerable. During the moulting process, several species change colour. Nephropids have eight walking legs; the front three pairs bear claws, the first of which is larger than the others. The front pincers are also biologically considered legs, so lobsters belong to the order Decapoda ("ten-footed"). Although lobsters, like most other arthropods, are largely bilaterally symmetrical, some genera possess specialised, unequal claws.
The lobster's anatomy includes two main body parts: the cephalothorax and the abdomen. The cephalothorax fuses the head and the thorax, both of which are covered by a chitinous carapace. The lobster's head bears antennae, mandibles, and the first and second maxillae. The head also bears the compound eyes (normally set close to one another). Because lobsters live in murky environments at the bottom of the ocean, they mostly use their antennae as sensors. The lobster eye has a reflective structure over a convex retina; by contrast, most complex eyes use refractive ray concentrators (lenses) and a concave retina. The lobster's thorax is composed of maxillipeds, appendages that function mainly as mouthparts, and pereiopods, appendages used for walking and for gathering food. The abdomen includes pleopods (also known as swimmerets), which are used for swimming, as well as the tail fan, composed of uropods and the telson.
Lobsters, like snails and spiders, have blue blood due to the presence of copper-containing haemocyanin. Vertebrates and many other animals, by contrast, have red blood from iron-rich haemoglobin. Lobsters have a green hepatopancreas, called the tomalley by chefs, which functions as both liver and pancreas.
Lobsters of the family Nephropidae are broadly similar to other related groups. They differ from freshwater crayfish in lacking the joint between the last two segments of the thorax, and they differ from the reef lobsters of the family Enoplometopidae in having full claws on the first three pairs of legs rather than on just one.
Colour
Normally, lobsters are dark coloured, either bluish green or greenish brown, so as to blend in with the ocean floor, but they can be found in a multitude of colours. Lobsters with atypical colouration are extremely rare, accounting for only a few of the millions caught every year, and because of their rarity they are not usually eaten; instead they are released back into the wild or donated to aquariums. In cases of atypical colouration there is often a genetic factor involved, such as albinism or hermaphroditism. Notably, the New England Aquarium keeps a collection of such lobsters, called the Lobster Rainbow, on public display. The special colouration does not appear to have any effect on the lobster's taste once cooked; with the exception of albinos, all lobsters possess astaxanthin, which is responsible for the bright red colour of lobsters after cooking.
Longevity
Lobsters are estimated to live 45 to 50 years in the wild, although determining their age is difficult. In 2012 a report was published describing how growth bands in calcified regions of the eyestalk or gastric mill of shrimps, crabs, and lobsters could be used to measure growth and mortality in decapod crustaceans. Without such a technique, a lobster's age is estimated from its size and other variables; this new knowledge "could help scientists better understand the population and help regulators of the lucrative industry".
Research suggests that lobsters may not slow down, weaken, or lose fertility with age, and that older lobsters may be more fertile than younger ones. This longevity may be due to telomerase, an enzyme that repairs long repetitive sections of DNA at the ends of chromosomes, called telomeres. Telomerase is expressed by most vertebrates during embryonic stages but is generally absent in adult life. Unlike most vertebrates, however, lobsters express telomerase as adults throughout most of their tissues, which has been suggested to be related to their longevity. Telomerase is especially present in lobsters with green spots, markings thought to be produced by the enzyme interacting with the pigmentation of the shell. A lobster's longevity is limited by its size. Moulting requires metabolic energy, and the larger the lobster, the more energy is needed; 10 to 15% of lobsters die of exhaustion during moulting, while in older lobsters moulting ceases and the exoskeleton degrades or collapses entirely, causing death.
Lobsters, like many other decapod crustaceans, grow throughout their lives and are able to add new muscle cells at each moult. This longevity allows lobsters to reach impressive sizes. According to Guinness World Records, the largest lobster ever caught was found in Nova Scotia, Canada, and weighed 20.15 kg.
Ecology
Lobsters live in all oceans, on rocky, sandy, or muddy bottoms from the shoreline to beyond the edge of the continental shelf. They generally live singly in crevices or in burrows under rocks.
Lobsters are omnivores and typically eat live prey such as fish, molluscs, other crustaceans, worms, and some plant life. They are known to resort to cannibalism in captivity if necessary. However, when lobster skin is found in lobster stomachs, this is not necessarily evidence of cannibalism, because lobsters eat their shed skin after moulting. Although cannibalism was thought not to occur among wild lobster populations, it was observed in 2012 by researchers studying wild lobsters in Maine. These first known cases of lobster cannibalism in the wild are theorised to be the result of a local population explosion among lobsters caused by the disappearance of many of the Maine lobsters' natural predators.
In general, lobsters are 25 to 50 cm long and move by walking slowly along the sea floor. However, when they flee, they swim backwards quickly by curling and uncurling their abdomens. A speed of 5 m/s has been recorded. This is known as the caridoid escape reaction.
Symbiotic animals of the genus Symbion live exclusively on the gills and mouthparts of lobsters. Different species of Symbion have been found on the three commercially important lobsters of the North Atlantic Ocean: Nephrops norvegicus, Homarus gammarus, and Homarus americanus.
As food
Lobster is usually served boiled or steamed in the shell. Diners crack the shell with crackers and extract the meat. The meat is often eaten with melted butter and lemon juice. Lobster is also used in soups, bisques, lobster rolls, cappon magro, and dishes such as lobster Newberg and lobster Thermidor.
Cooks boil or steam lobsters. When a lobster is cooked, its shell's colour changes from blue to orange, because the heat of cooking breaks down a protein called crustacyanin, which suppresses the orange hue of the chemical astaxanthin, also found in the shell.
According to the United States Food and Drug Administration (FDA), the mean level of mercury in American lobster between 2005 and 2007 was 0.107 ppm.
History
Lobster has been eaten by humans since prehistory. Large middens of lobster shells near areas populated by fishing communities attest to the crustacean's great popularity during that period. Evidence indicates that lobster was consumed as a regular food product in fishing communities on the coasts of Britain, South Africa, Australia, and Papua New Guinea as far back as 100,000 years ago. During the Stone Age, lobster became a significant source of nutrients among European coastal dwellers. Historians suggest that lobster was an important secondary food source for most European coastal dwellers, and a primary food source for coastal communities in Britain during this time.
During the middle or late Roman period, lobster became a popular mid-range delicacy. The price of lobster could vary widely owing to a number of factors, but evidence indicates that lobster was regularly transported inland over long distances to meet popular demand. A mosaic found in the ruins of Pompeii suggests that lobster was of considerable interest to the Roman population during the early imperial period.
Lobster was a popular food among the Moche of Peru during the period between 50 and 800 AD. Besides its use as food, lobster shells were also used to create a light pink dye, ornaments, and tools. A mass-produced lobster-shaped effigy vessel dated to this period attests to the popularity of lobster at this time, although the purpose of the vessel has not been identified.
The Viking period saw an increase in the consumption of lobster and other shellfish among northern Europeans. This can be attributed to the overall increase in marine activity at the time, driven by the development of better boats and growing cultural investment in shipbuilding and the training of sailors. Consumption of marine life rose overall in this period, and lobster consumption increased in keeping with this general trend.
However, unlike fish, lobster had to be cooked within two days of leaving salt water, limiting its availability to inland dwellers. Thus lobster, more than fish, became a food mainly available to the better-off, at least among those living away from the coast.
Lobster is first mentioned in cookbooks during the medieval period. Le Viandier de Taillevent, a French recipe collection written around 1300, suggests that lobster (also called saltwater crayfish) be cooked "in wine and water, or in the oven; eaten in vinegar". Le Viandier de Taillevent is considered one of the first "haute cuisine" cookbooks, giving advice on preparing meals that would have been quite elaborate for the period, using expensive and hard-to-obtain ingredients. Although the original edition containing the lobster recipe was published before the birth of the French court cook Guillaume Tirel, Tirel later expanded and republished this recipe collection, suggesting that the recipes included in both editions were popular among the highest circles of the French nobility, including King Philip VI. The inclusion of a lobster recipe in this cookbook, especially one that does not call for other, more expensive ingredients, attests to lobster's popularity among the wealthy.
The French guide Le Ménagier de Paris, published in 1393, includes no fewer than five recipes featuring lobster, of varying elaborateness. Le Ménagier de Paris, a guide intended to advise women running upper-class households, is similar to its predecessor in indicating the popularity of lobster as food among the upper classes.
That lobster is first mentioned in cookbooks during the 1300s, and appears in only two during that century, should not be taken to mean that lobster was not widely consumed before or during that time. Recipe collections were practically non-existent before the 1300s, and only a handful survive from the medieval period as a whole.
In the early 1400s, lobster was still a popular dish among the upper classes. During this time, influential households used the variety and range of species served at feasts to display wealth and prestige. Lobster was commonly found among these spreads, indicating that it continued to be held in high esteem among the wealthy. In one notable case, the Bishop of Salisbury offered at least 42 kinds of crustaceans and fish at his feasts over a nine-month period, including several varieties of lobster. Even so, lobster was not a food accessible exclusively to the wealthy. The general population living along the coasts made use of the various food sources the ocean provided, and shellfish in particular became a more popular source of nutrition. Among the general population, lobster was generally eaten boiled by the mid-fifteenth century, but the influence of upper-class cuisine can be seen in that it was now also regularly eaten cold with vinegar. The inland peasantry would still have been unfamiliar with lobster at this time.
Through the late seventeenth century, lobster continued to be eaten as a basic staple among coastal communities. During this time, the influence of the Church and of governments that regulated, and at times banned, the consumption of meat during certain periods continued to encourage the popularity of seafood, and especially shellfish, as an alternative to meat among all classes. Throughout this period, lobster was eaten fresh, pickled, and salted. From the late seventeenth century onwards, developments in fishing, transport, and cooking technology allowed lobster to reach inland areas more easily, and the variety of dishes featuring lobster, and of the cooking techniques used with it, expanded. However, these developments coincided with a decline in lobster populations, and lobster increasingly became a delicacy, valued among the wealthy as a status symbol and less likely to be found in the diet of the general population.
In North America, the American lobster was not originally popular among European colonists. This was partly due to inland Europeans' association of lobster with barely edible salted seafood, and partly to the cultural view of shellfish as a lesser alternative to meat that provided neither the taste nor the nutrients desired. It was also due to the extreme abundance of lobster at the time of the colonists' arrival, which contributed to a general perception of lobster as undesirable peasant food. The American lobster did not achieve popularity until the mid-nineteenth century, when New Yorkers and Bostonians developed a taste for it, and commercial lobster fishing only flourished after the development of the lobster smack, a custom-made boat with wells in the deck to keep lobsters alive during transport.
Before this period, lobster was considered a food of the poor, or food for indentured servants and lower members of society in Maine, Massachusetts, and the Canadian Maritimes. Some servants allegedly specified in employment agreements that they would not eat lobster more than twice per week, although there is limited evidence for this. Lobster was also commonly served in prisons, much to the displeasure of inmates. American lobster was initially deemed worthy only of being used as fertiliser or fish bait, and until well into the twentieth century it was not considered more than a low-priced canned staple food.
As a crustacean, lobster remains a taboo food under the dietary laws of Judaism and in certain streams of Islam.
Taxonomy
The family was created by Dana (1813-1895) in 1852 and is a synonym of Homaridae, created by Huxley in 1879. It is subdivided into three subfamilies and several genera:
Subfamily Neophoberinae Glaessner, 1969
Acanthacaris Bate, 1888 (other scampi)
Subfamily Thymopinae Holthuis, 1974
Nephropsis Wood-Mason, 1872
Nephropides Manning, 1969
Thymops Holthuis, 1974
Thymopsis Holthuis, 1974
Subfamily Nephropinae Dana, 1852
Eunephrops Smith, 1885 (other scampi)
Homarinus Kornfield, Williams & Steneck, 1995 (other lobsters)
Homarus Weber, 1795 (lobsters)
Metanephrops Jenkins, 1972 (other scampi)
Nephrops Leach, 1814 (Norway lobsters)
Thymopides Burukovsky & Averin, 1977 (other scampi)
See also
Eunephrops bairdii
Nephropid
Willst du normal sein oder glücklich? ("Do You Want to Be Normal or Happy?") is the title of a self-help book by the German psychologist Robert Betz. It was published in April 2011 as a paperback by Heyne Verlag.
Contents
The book is divided into three parts. In the first part, Betz confronts the reader with an assessment of the current condition of most people. In the second part, "The Path to the New Man" / "The Path to the New Woman", he shows ways for readers to change their lives themselves. In the third part, he gives concrete suggestions for putting his advice into practice.
Success
The book has remained on the Spiegel bestseller list for more than two years since its publication (as of January 2014).
Book details
Willst du normal sein oder glücklich?, Heyne, 2011, ISBN 978-3-45370-169-4
External links
http://www.wdr5.de/sendungen/tischgespraech/tischgespraech_betz100.html
http://www.imphuls.de/index.php?id=475
Willst du normal sein oder glücklich? on the bestseller list, Buchreport
Non-fiction
German-language literature
21st-century literature
Literary work
Flatoptera depressa is a species of insect first described by Melichar in 1901. Flatoptera depressa belongs to the genus Flatoptera and the family Flatidae. No subspecies are listed in the Catalogue of Life.
Sources
Hemiptera
depressa
these hands in this film are my hands too. and there is always tomorrow. thank-you craig b.
the mind is indeed a terrible thing to taste but the desire to shout loud into the digital darkness is still intact.
\section{Introduction}
In 2011, while studying classical and non-classical convexity properties of the space of positive operator valued measures on the Borel sets of a locally compact Hausdorff space $X$ with values in $\mathcal{B}(\H)$, the algebra of linear operators acting on a $d$-dimensional Hilbert space $\H$, Farenick, Plosker, and Smith~\cite{farenick--plosker--smith2011} introduced a transform that associates any positive operator valued measure $\nu$ with a certain completely positive linear map $\Gamma(\nu)$ of the homogeneous C$^*$-algebra $C(X) \otimes \mathcal{B}(\H)$ into $\mathcal{B}(\H)$. This association was achieved by using an operator valued integral in which operator valued functions are integrated with respect to positive operator valued measures and which has the feature that the integral of a random quantum effect is itself a quantum effect.
Farenick and Kozdron~\cite{farenick--kozdron2012} helped provide a better mathematical understanding of quantum probability by introducing a quantum analogue for the expected value $\QE{\nu}{\psi}$ of a quantum random variable $\psi$ relative to a quantum probability measure $\nu$ using the operator valued integral of~\cite{farenick--plosker--smith2011}. This led to theorems for a change of quantum measure and a change of quantum variables. They also introduced a quantum conditional expectation which resulted in quantum versions of some standard identities for Radon-Nikod\'ym derivatives, and allowed them to formulate and prove a quantum analogue of Bayes' rule.
It is a basic fact of functional analysis that if $\psi:X\rightarrow\mathbb{C}$ is an essentially bounded function on a probability space $(X,\mathcal{F}(X), \mu)$, then the
essential range of $\psi$ is precisely the spectrum of $\psi$, where one considers $\psi$ as an element of the
von Neumann algebra $L^\infty(X,\mu)$. Recently, Farenick, Kozdron, and Plosker~\cite{FKP} arrived at a similar result for essentially bounded quantum random variables on quantum probability spaces using higher dimensional spectra. Their investigation of quantum variance also involved notions from spectral theory, and they discovered that the quantum moment problem admits a characterisation entirely within spectral terms.
In the present work, we build on these earlier results by considering for the first time limiting operations for sequences of quantum random variables and quantum probability measures including a quantum analogue of Lebesgue's dominated convergence theorem and a discrete Fubini-type theorem. As in those earlier investigations, the noncommutativity of operator algebra leads to some structure that simply does not appear in the classical setting. Using the quantum conditional expectation of Farenick and Kozdron~\cite{farenick--kozdron2012}, we also establish a quantum martingale convergence theorem for quantum martingales obtained by conditioning on a fixed quantum random variable. This theorem is of particular interest since it strongly exhibits non-classical behaviour; even though the limit of the martingale exists and is unique, it is not identifiable. However, we provide a partial classification of the limit through a study of the space of quantum random variables having quantum expectation zero.
The outline of the paper is as follows. In Section~\ref{Introsect} we introduce our notation and summarize the relevant results of~\cite{farenick--kozdron2012},~\cite{FKP}, and~\cite{farenick--plosker--smith2011}. We provide our first limiting results in Section~\ref{QEsect} and then study quantum random variables having quantum expectation zero in Section~\ref{MeanZerosect}. Finally, in Section~\ref{MCTsect} we develop our quantum martingale convergence theorem.
\section{Notation and background results}\label{Introsect}
We will always write $\H$ for a $d$-dimensional Hilbert space, $\mathcal{B}(\H)$ for the
C$^*$-algebra of linear operators acting on $\H$, and $\mathcal{B}(\H)_+$ for the cone of positive operators.
The predual of $\mathcal{B}(\H)$ is denoted by $\T(\H)$, the space of trace-class operators.
Since $\H$ is finite dimensional, $\mathcal{B}(\H)$ and $\T(\H)$
coincide as sets. Finally, $X$ shall denote a locally compact Hausdorff space
and $\mathcal{F}(X)$ a $\sigma$-algebra of subsets of $X$. A particular $\sigma$-algebra of interest is
$\borel{X}$, the Borel sets of $X$.
A density operator, or state, on $\H$ is a positive trace-class operator $\rho$ such that $\tr(\rho)=1$; the set
of all density operators is denoted by $\state{\H}$.
By a quantum effect we mean a positive operator $h \in \mathcal{B}(\H)_+$ with the property that every eigenvalue $\lambda$ of $h$ satisfies $0 \le \lambda \le 1$, and we let $\eff{\H}$ denote the set of quantum effects. Note that every
state $\rho \in \state{\H}$ is also a quantum effect.
A set function $\nu:\mathcal{F}(X)\rightarrow\mathcal{B}(\H)$ is called a positive operator valued measure (POVM) if
\begin{enumerate}
\item[(a)] $\nu(E) \in\eff{\H}$ for every $E \in \mathcal{F}(X)$,
\item[(b)] $\nu(X) \neq 0$, and
\item[(c)] for every countable collection $\{E_k\}_{k=1}^\infty \subseteq \mathcal{F}(X)$ with $E_j \cap E_k = \emptyset$ for $j \neq k$ we have
\begin{equation}\label{sum}
\nu\left(\bigcup_{k=1}^\infty E_k \right) = \sum_{k =1}^\infty \nu(E_k).
\end{equation}
\end{enumerate}
If, in addition, $\nu(X)=1\in\mathcal{B}(\H)$, then $\nu$ is called a quantum probability measure.
The convergence in~\eqref{sum} above is normally assumed to be with respect to the ultraweak topology; however,
because $\mathcal{B}(\H)$ has finite dimension, the convergence in~\eqref{sum} may be taken with respect to any of the usual operator topologies on
$\mathcal{B}(\H)$. The POVM
$\nu:\mathcal{F}(X)\rightarrow\mathcal{B}(\H)$ induces the classical (i.e., scalar valued) measure
$\mu$ via $\mu=\frac{1}{d}\tr\circ \nu$, where $\tr$ is the canonical trace on $\mathcal{B}(\H)$. Note that if $\nu$ is a quantum probability measure, then $\mu$ is a classical probability measure.
We call the triple $(X, \mathcal{F}(X), \nu)$ a quantum probability space.
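To illustrate these notions in the simplest possible setting (the matrices below are chosen purely for illustration), let $X=\{1,2\}$, let $\mathcal{F}(X)=2^X$, and let $d=2$, and define
\[
\nu(\{1\}) = \left[\begin{array}{cc} 3/4 & 0 \\ 0 & 1/4 \end{array}\right], \qquad
\nu(\{2\}) = \left[\begin{array}{cc} 1/4 & 0 \\ 0 & 3/4 \end{array}\right].
\]
Each $\nu(E)$ is a quantum effect and $\nu(X)=1$, so $\nu$ is a quantum probability measure; the induced classical measure is the uniform probability measure, since $\mu(\{k\})=\frac{1}{2}\tr \nu(\{k\})=\frac{1}{2}$ for $k=1,2$.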
A function $\psi:X\rightarrow \mathcal{B}(\H)$ is said to be measurable (i.e., a quantum random variable)
if, for every pair $\xi$, $\eta\in\H$, the complex valued function $x\mapsto\langle\psi(x)\xi,\eta\rangle$
is measurable (i.e., a random variable) in the classical sense. In fact, it is known~\cite{FKP} that
$\psi:X\rightarrow\mathcal{B}(\H)$ is measurable if and only if $\psi^{-1}(U)$ is a measurable set, for every open set $U\subseteq\mathcal{B}(\H)$.
The predual of the von Neumann algebra
$L^\infty(X,\mu)\overline\otimes\mathcal{B}(\H)$ is given by $L^1_{\T(\H)}(X,\mu)$; see Theorem~IV.7.17 of~\cite{Takesaki-bookI}.
In particular, if
$\Psi\in L^\infty(X,\mu)\overline\otimes\mathcal{B}(\H)$, then there is a bounded quantum random
variable $\psi:X\rightarrow\mathcal{B}(\H)$ such that, for each $f \in L^1_{\T(\H)}(X,\mu)$, the complex number
$\Psi(f)$ is given by
\begin{equation*}
\Psi(f) = \frac{1}{d} \int_X \tr\left(f(x)\psi(x)\right) \d \mu(x).
\end{equation*}
Although $\psi$ is not unique, it is unique up to a set of $\mu$-measure zero.
We therefore identify $\Psi$ and $\psi$ and consider the elements
of $L^\infty(X,\mu)\overline\otimes\mathcal{B}(\H)$ to be bounded quantum random variables $\psi:X\rightarrow\mathcal{B}(\H)$.
Note that $ L^\infty(X,\mu)\overline\otimes\mathcal{B}(\H) \cong L^\infty(X,\mu)\otimes M_d(\mathbb{C})$
where $M_d(\mathbb{C})$ is the space of $d\times d$ matrices over $\mathbb{C}$.
We end this section by stating a number of theorems and definitions from~\cite{farenick--kozdron2012},~\cite{FKP}, and~\cite{farenick--plosker--smith2011} relevant for our purposes.
Recall that if $\nu_1$ and $\nu_2$ are both positive operator valued measures on $(X,\mathcal{F}(X))$, then $\nu_2$ is absolutely continuous with respect to $\nu_1$, written
$\nu_2 \ll_{\rm ac} \nu_1$, if $\nu_2(E) = 0$ for every $E \in \mathcal{F}(X)$ with $\nu_1(E) = 0$. Furthermore, if $\mu$ is a classical measure, then we can always view $\mu$ as the scalar valued POVM $\mu \cdot 1$.
\begin{theorem}
If $\nu$ is a POVM on $(X,\mathcal{F}(X))$, then $\nu$ is absolutely continuous with respect to the induced classical
measure $\mu$, and there exists an $\mathcal{F}(X)$-measurable function
$\displaystyle \frac{\mathrm{d}\nu}{\mathrm{d}\mu}$ such that
\begin{equation}\label{rn defn}
\displaystyle\int_E\left\langle\displaystyle\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\xi,\xi \right\rangle\d\mu(x) =
\langle \nu(E)\xi,\xi \rangle ,
\end{equation}
for all $E\in \mathcal{F}(X)$ and all $\xi\in \H$.
The function $\displaystyle\frac{\mathrm{d}\nu}{\mathrm{d}\mu}$ is
called the \emph{principal Radon-Nikod\'ym derivative of $\nu$} and is a positive operator for
$\mu$-almost all $x\in X$.
\end{theorem}
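For a concrete example (with the measure chosen purely for illustration), let $X=\{1,2\}$, $d=2$, $\nu(\{1\})=\mathrm{diag}(3/4,\,1/4)$, and $\nu(\{2\})=\mathrm{diag}(1/4,\,3/4)$, so that $\mu(\{1\})=\mu(\{2\})=\frac{1}{2}$. Since $X$ is finite, condition~\eqref{rn defn} reduces to a finite sum and gives
\[
\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(k) = \frac{1}{\mu(\{k\})}\,\nu(\{k\}) = 2\,\nu(\{k\}),
\]
so that $\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(1)=\mathrm{diag}(3/2,\,1/2)$ and $\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(2)=\mathrm{diag}(1/2,\,3/2)$ are positive operators satisfying $\int_X \frac{\mathrm{d}\nu}{\mathrm{d}\mu}\,\mathrm{d}\mu=\nu(X)=1$.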
\begin{defn}\label{nuintdefn}
A measurable function $\psi:X \to \mathcal{B}(\H)$ is $\nu$-integrable if for every density operator $\rho$ the complex valued function
\[
\psi_\rho(x) = \tr\left(\rho \left(\displaystyle\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)^{1/2}\psi(x)\left(\displaystyle\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)^{1/2}\right), \;x\in X,
\]
is $\mu$-integrable.
The integral of a $\nu$-integrable function $\psi:X\rightarrow\mathcal{B}(\H)$
is defined to be the unique operator acting on $\H$ having the property that
\[
\tr\left(\rho\int_X\psi\d\nu\right) = \int_X \psi_\rho\d\mu ,
\]
for every density operator $\rho$.
\end{defn}
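When $X=\{x_1,\dots,x_n\}$ is finite, Definition~\ref{nuintdefn} yields the explicit formula
\[
\int_X \psi\,\mathrm{d}\nu \;=\; \sum_{k=1}^{n} \mu(\{x_k\}) \left(\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x_k)\right)^{1/2} \psi(x_k) \left(\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x_k)\right)^{1/2}.
\]
For instance (with all matrices chosen purely for illustration), if $X=\{1,2\}$, $\nu(\{1\})=\mathrm{diag}(3/4,\,1/4)$, and $\nu(\{2\})=\mathrm{diag}(1/4,\,3/4)$, so that $\mu(\{k\})=\frac{1}{2}$ and $\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(k)=2\nu(\{k\})$, then the quantum random variable given by $\psi(1)=\mathrm{diag}(2,\,0)$ and $\psi(2)=\mathrm{diag}(0,\,2)$ has
\[
\int_X \psi\,\mathrm{d}\nu = \frac{1}{2}\,\mathrm{diag}(3,\,0) + \frac{1}{2}\,\mathrm{diag}(0,\,3) = \frac{3}{2}\,1,
\]
as one verifies directly from the defining property $\tr\left(\rho\int_X\psi\,\mathrm{d}\nu\right)=\int_X\psi_\rho\,\mathrm{d}\mu$ for every density operator $\rho$.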
\begin{theorem}
If $\nu_1$, $\nu_2$ are POVMs on $(X,\mathcal{F}(X))$, then $\nu_2 \ll_{\rm ac} \nu_1$ if and only if there exists a bounded $\nu_1$-integrable $\mathcal{F}(X)$-measurable function $\displaystyle \frac{\mathrm{d}\nu_2}{\mathrm{d}\nu_1}$, unique up to sets of $\nu_1$-measure zero, such that
$$\nu_2(E) = \int_E\frac{\mathrm{d}\nu_2}{\mathrm{d}\nu_1} \d \nu_1$$
for every $E \in \mathcal{F}(X)$.
Moreover,
\begin{equation*}
\frac{\mathrm{d}\nu_2}{\mathrm{d}\nu_1}= \left(\frac{\mathrm{d}\mu_2}{\mathrm{d}\mu_1}\right)
\left[
\left(\frac{\mathrm{d}\nu_1}{\mathrm{d}\mu_1}\right)^{-1/2}\left(\frac{\mathrm{d}\nu_2}{\mathrm{d}\mu_2}\right)\left(\frac{\mathrm{d}\nu_1}{\mathrm{d}\mu_1}\right)^{-1/2}
\right]
\end{equation*}
and is called the \emph{non-principal Radon-Nikod\'ym derivative of $\nu_2$ with respect to $\nu_1$}.
\end{theorem}
Recall from~\cite{KuboAndo} and~\cite{Pusz} that
if $a,b \in \mathcal{B}(\H)_+$ are both invertible, then the geometric mean of $a$ and $b$ is the positive operator $a \# b$ defined by
$a \# b = a^{1/2} (a^{-1/2} ba^{-1/2})^{1/2} a^{1/2}$.
If $a$, $b \in \mathcal{B}(\H)_+$ are non-invertible, then $a \# b$ is defined by
\begin{equation*}
a \# b= \lim_{\varepsilon\to0+} (a +\varepsilon 1) \# (b + \varepsilon 1),
\end{equation*}
with convergence in the strong operator topology.
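As an aside for readers who wish to experiment numerically, the operator geometric mean is straightforward to compute in finite dimensions. The following sketch (our own illustrative code; the function names are not from any standard library) implements $a \# b$ for positive definite matrices using an eigendecomposition-based square root.

```python
import numpy as np

def psd_sqrt(a):
    # Square root of a positive semidefinite Hermitian matrix via eigendecomposition.
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)  # guard against tiny negative round-off eigenvalues
    return (v * np.sqrt(w)) @ v.conj().T

def geometric_mean(a, b):
    # a # b = a^{1/2} (a^{-1/2} b a^{-1/2})^{1/2} a^{1/2}, for invertible positive a, b.
    ra = psd_sqrt(a)
    ra_inv = np.linalg.inv(ra)
    return ra @ psd_sqrt(ra_inv @ b @ ra_inv) @ ra

a = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([[3.0, 0.0], [0.0, 1.0]])
g = geometric_mean(a, b)
```

Standard identities such as $a\#a=a$, $1\#b=b^{1/2}$, and the symmetry $a\#b=b\#a$ provide quick sanity checks; for non-invertible $a$ or $b$ one would evaluate $(a+\varepsilon 1)\#(b+\varepsilon 1)$ for a decreasing sequence of $\varepsilon>0$, mirroring the limit above.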
If $\nu_1$ and $\nu_2$ are both quantum probability measures with $\nu_2 \ll_{\rm ac} \nu_1$ and if $\psi: X \to \mathcal{B}(\H)$ is a
quantum random variable, then we define
\begin{equation}\label{boxtimesdefn}
\psi \boxtimes \frac{\mathrm{d} \nu_2}{\mathrm{d}\nu_1}=
\left(\left(\frac{\mathrm{d}\nu_1}{\mathrm{d}\mu_1}\right)^{-1}\#\frac{\mathrm{d} \nu_2}{\mathrm{d} \nu_1} \right)\left(\frac{\mathrm{d}\nu_1}{\mathrm{d}\mu_1}\right)^{1/2} \psi \left(\frac{\mathrm{d}\nu_1}{\mathrm{d}\mu_1}\right)^{1/2} \left(\left(\frac{\mathrm{d}\nu_1}{\mathrm{d}\mu_1}\right)^{-1}\#\frac{\mathrm{d} \nu_2}{\mathrm{d} \nu_1} \right).
\end{equation}
In particular,
$$
\psi\boxtimes\displaystyle\frac{\mathrm{d}\nu}{\mathrm{d}\mu} =\left(\frac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)^{1/2}\psi\left(\frac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)^{1/2}.
$$
\begin{defn}
If $\nu:\mathcal{F}(X)\rightarrow\mathcal{B}(\H)$ is a quantum probability measure, then the quantum expectation of $\psi$ with respect to $\nu$ is the map
$\mathbb{E}_{\nu}: L^\infty(X,\mu)\overline\otimes\mathcal{B}(\H) \rightarrow\mathcal{B}(\H)$ defined by
\[
\QE{\nu}{\psi} = \int_X\psi\d\nu.
\]
\end{defn}
Recall~\cite[Chapter~3]{Paulsen-book} that a linear map $\varphi:\mathcal{A}\rightarrow\mathcal{B}$ of unital C$^*$-algebras is a unital completely positive (ucp)
map if $\varphi(1_\mathcal{A})=1_\mathcal{B}$ and
the induced linear maps
$\varphi\otimes{\rm id_n}:\mathcal{A}\otimes M_n(\mathbb{C})\rightarrow\mathcal{B}\otimes M_n(\mathbb{C})$
are positive for every $n\in\{1,2,\ldots\}$.
\begin{theorem}\label{varineq} Quantum expectation is a completely positive operation. That is, the linear map
$\mathbb{E}_{\nu}: L^\infty(X,\mu)\overline\otimes\mathcal{B}(\H) \rightarrow\mathcal{B}(\H)$
is a ucp map, for every
quantum probability measure $\nu$.
\end{theorem}
The following example shows that one can view $\QE{\nu}{\psi}$ as a quantum averaging of $\psi$.
A version of this first appeared in~\cite{farenick--plosker--smith2011}; see also Theorem~2.3(4) of~\cite{farenick--kozdron2012}.
\begin{example}\label{quantumaverageexample} Let $X=\{x_1, \dots, x_n\}$ and let $\mathcal{F}(X)$ be the power set of $X$.
If $h_1, \dots, h_n\in \mathcal{B}(\H)_+$ are such that $h_1+\cdots + h_n=1\in \mathcal{B}(\H)$, and $\nu$ satisfies
$\nu(\{x_j\})=h_j$ for $j=1, \dots, n$, then for every $\psi:X\rightarrow \mathcal{B}(\H)$,
\[
\QE{\nu}{\psi}=\int_X\psi \d\nu=\sum_{j=1}^nh_j^{1/2}\psi(x_j)h_j^{1/2}.
\]
\end{example}
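Example~\ref{quantumaverageexample} is easy to verify numerically. The following sketch (illustrative code of ours, with an arbitrarily chosen pair of effects) builds a two-point quantum probability measure and checks unitality, $\QE{\nu}{1}=h_1+h_2=1$, together with self-adjointness of the quantum average of a self-adjoint $\psi$.

```python
import numpy as np

def psd_sqrt(a):
    # Square root of a positive semidefinite Hermitian matrix.
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

# Effects with h1 + h2 = 1 on a 2-dimensional Hilbert space.
h1 = np.array([[0.75, 0.25], [0.25, 0.50]])
h2 = np.eye(2) - h1

# A self-adjoint quantum random variable on X = {x_1, x_2}.
psi = [np.array([[1.0, 0.0], [0.0, -1.0]]),
       np.array([[0.0, 1.0], [1.0,  0.0]])]

# Quantum expectation: sum_j h_j^{1/2} psi(x_j) h_j^{1/2}.
E = sum(psd_sqrt(h) @ p @ psd_sqrt(h) for h, p in zip((h1, h2), psi))

# With psi replaced by the constant function 1, the average is h1 + h2 = 1.
E_one = sum(psd_sqrt(h) @ np.eye(2) @ psd_sqrt(h) for h in (h1, h2))
```

The computation $\QE{\nu}{1}=1$ is the unitality asserted in Theorem~\ref{varineq}.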
\section{Continuity of quantum expectation}\label{QEsect}
In this section we establish a natural quantum analogue of the classical dominated convergence theorem (Theorem~\ref{DCT}, continuity of quantum expectation), along with some related results.
\begin{defn}
Let $\psi:X \to \B(\H)$ and suppose that $\{\psi_n\}_{n=1}^\infty$ is a sequence of quantum random variables. We say
$\psi_n$ converges ultraweakly $\mu$-almost surely to $\psi$ if $\tr(\rho\psi_n(x))\to\tr(\rho\psi(x))$ for all $\rho\in S(\H)$ and $\mu$-almost all $x\in X$.
\end{defn}
It is an easy fact that the ultraweak $\mu$-almost sure limit $\psi$ of the previous definition is itself a quantum random variable.
\begin{lemma}\label{QConv}
Let $\psi:X \to \B(\H)$ and suppose that $\{\psi_n\}_{n=1}^\infty$ is a sequence of quantum random variables. If $\psi_n$ converges ultraweakly $\mu$-almost surely to $\psi$, then $\psi$ is a quantum random variable.
\end{lemma}
\begin{proof}
Since $\psi_n$ converges ultraweakly $\mu$-almost surely to $\psi$, we have $\tr(\rho\psi_n(x))\to\tr(\rho\psi(x))$ for all $\rho\in S(\H)$ and $\mu$-almost all $x\in X$. Since each $x\mapsto\tr(\rho\psi_n(x))$ is a complex valued random variable, its pointwise $\mu$-almost everywhere limit $x\mapsto\tr(\rho\psi(x))$ is again a complex valued random variable, and therefore $\psi$ is a quantum random variable.
\end{proof}
\begin{lemma}\label{QConvCor}
Let $\{\psi_n\}_{n=1}^\infty$ be a sequence of quantum random variables. If $\psi_n$ converges ultraweakly $\mu$-almost surely to $\psi$, then $\psi_n\boxtimes\dfrac{\mathrm{d}\nu}{\mathrm{d}\mu}$ converges ultraweakly $\mu$-almost surely to $\psi\boxtimes\dfrac{\mathrm{d}\nu}{\mathrm{d}\mu}$.
\end{lemma}
\begin{proof}
For $\rho \in S(\H)$ and $x \in X$, let
$\tilde{\rho}_x=\left[\tr\left(\rho\dfrac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)\right]^{-1}\left(\left(\dfrac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)^{1/2}\rho\left(\dfrac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)^{1/2}\right)$,
and notice that $\tilde{\rho}_x\in S(\H)$.
Therefore, using the assumption that $\psi_n$ converges ultraweakly $\mu$-almost surely to $\psi$ along with properties of the trace functional,
\begin{align*}
\lim_{n\to\infty}\tr\left(\rho\left(\psi_n\boxtimes\dfrac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)(x)\right)
&=\lim_{n\to\infty}\tr\left(\rho\dfrac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)\tr(\tilde{\rho}_x\psi_n(x))
=\tr\left(\rho\dfrac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)\tr\left(\tilde{\rho}_x\lim_{n\to\infty}\psi_n(x)\right)\\
&=\tr\left(\rho\dfrac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)\tr\left(\tilde{\rho}_x\psi(x)\right)
=\tr\left(\rho\left(\psi\boxtimes\dfrac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)(x)\right)
\end{align*}
as required.
\end{proof}
We now prove the main result of this section, namely continuity of quantum expectation.
\begin{theorem}[Continuity of Quantum Expectation]\label{DCT}
Let $\psi:X \to \B(\H)$. If $\{\psi_n\}_{n=1}^\infty$ is a sequence of $\nu$-integrable quantum random variables that converges ultraweakly $\mu$-almost surely to $\psi$, and if there exists a $\mu$-integrable random variable $Z:X\to[0,\infty)$ such that
$$\displaystyle \left|\tr\left(\rho\left(\psi_n\boxtimes\dfrac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)\right)
\right|\leq Z$$
$\mu$-almost surely for all $n$ and all $\rho\in S(\H)$, then $\psi$ is $\nu$-integrable and $\QE{\nu}{\psi_n}\to\QE{\nu}{\psi}$ ultraweakly.
\end{theorem}
\begin{proof}
Begin by defining the sequence of complex valued random variables $\{\psi_\rho^{(n)}\}_{n=1}^\infty$ by
$\psi_\rho^{(n)}=\tr\left(\rho\left(\psi_n\boxtimes\dfrac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)\right)$.
Using properties of the trace functional along with Lemma~\ref{QConvCor}, we obtain
$$
\lim_{n\to\infty}\psi_\rho^{(n)}
=\lim_{n\to\infty}\tr\left(\rho\left(\psi_n\boxtimes\dfrac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)\right)
=\tr\left(\rho\lim_{n\to\infty}\left(\psi_n\boxtimes\dfrac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)\right)
=\tr\left(\rho\left(\psi\boxtimes\dfrac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)\right).$$
That is, $\{\psi_\rho^{(n)}\}_{n=1}^\infty$ converges pointwise $\mu$-almost everywhere to $\displaystyle \tr\left(\rho\left(\psi\boxtimes\dfrac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)\right)$. By assumption, the sequence $\{\psi_\rho^{(n)}\}_{n=1}^\infty$ is dominated by the $\mu$-integrable random variable $Z$, so by Lebesgue's dominated convergence theorem,
$\displaystyle \tr\left(\rho\left(\psi\boxtimes\dfrac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)\right)$
is a $\mu$-integrable random variable, and for every $\rho\in S(\H)$,
$$\int_X\tr\left(\rho\left(\psi_n\boxtimes\dfrac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)\right)\d\mu \to \int_X\tr\left(\rho\left(\psi\boxtimes\dfrac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)\right)\d\mu.$$
Therefore $\psi$ is $\nu$-integrable and $\tr(\rho\QE{\nu}{\psi_n})\to\tr(\rho\QE{\nu}{\psi})$ for every $\rho\in S(\H)$, which implies that $\QE{\nu}{\psi_n}\to\QE{\nu}{\psi}$ ultraweakly.
\end{proof}
As a first application of the continuity of quantum expectation, we prove that, under certain conditions, quantum expectation is linear over infinite sums; this may be viewed as a special case of a quantum Fubini-type theorem.
\begin{theorem}\label{DCTcor}
Suppose that $\{\psi_n\}_{n=1}^\infty$ is a sequence of $\nu$-integrable quantum random variables. If
$\displaystyle \sum_{n=1}^\infty\psi_n=\lim_{N\to\infty}\sum_{n=1}^N\psi_n$
exists where convergence is with respect to the ultraweak topology of $\B(\H)$, then
$\displaystyle \sum_{n=1}^\infty\psi_n$
is a $\nu$-integrable quantum random variable with
$\displaystyle \QE{\nu}{\sum_{n=1}^\infty\psi_n}=\sum_{n=1}^\infty \QE{\nu}{\psi_n}$.
\end{theorem}
\begin{proof}
Let $\varphi_N=\displaystyle\sum_{n=1}^N\psi_n$, so that $\varphi_N$ converges ultraweakly $\mu$-almost surely to $\varphi=\displaystyle\sum_{n=1}^\infty\psi_n$.
By Lemma~\ref{QConv}, $\varphi$ is a quantum random variable, and by Theorem~\ref{DCT}, $\varphi$ is $\nu$-integrable and
\begin{equation}\label{eqn1}
\lim_{N\to\infty}\QE{\nu}{\varphi_N}=\QE{\nu}{\varphi}.
\end{equation}
However, finite additivity of quantum expectation gives
$\displaystyle \QE{\nu}{\varphi_N}=\QE{\nu}{\displaystyle\sum_{n=1}^N\psi_n}=\sum_{n=1}^N\QE{\nu}{\psi_n}$
so from~\eqref{eqn1} we obtain
\[
\sum_{n=1}^\infty\QE{\nu}{\psi_n} =\lim_{N\to\infty}\sum_{n=1}^N\QE{\nu}{\psi_n} =\lim_{N\to\infty}\QE{\nu}{\varphi_N} =\QE{\nu}{\varphi}
=\QE{\nu}{\sum_{n=1}^\infty\psi_n}
\]
as required.
\end{proof}
As an example of the type of calculations possible using the previous result, consider the following.
\begin{corollary}
If $\psi$ is an effect valued quantum random variable such that $\psi(x)\neq0$ and $\psi(x)\neq1$ for all $x\in X$, then
$\displaystyle \sum_{n=1}^\infty\QE{\nu}{\psi[1-(1+\psi^{-2})^{-1}]^n\psi}=1$.
\end{corollary}
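The identity in the corollary can be seen as follows (a sketch; note that the expression $\psi^{-2}$ implicitly requires each $\psi(x)$ to be invertible). Since all operators below are functions of $\psi(x)$ and hence commute,

```latex
\begin{align*}
1-(1+\psi^{-2})^{-1}
  &= 1-\psi^{2}(1+\psi^{2})^{-1}
   = (1+\psi^{2})^{-1}, \\
\sum_{n=1}^{\infty}\psi\left[1-(1+\psi^{-2})^{-1}\right]^{n}\psi
  &= \psi\left(\sum_{n=1}^{\infty}(1+\psi^{2})^{-n}\right)\psi
   = \psi\,\psi^{-2}\,\psi = 1,
\end{align*}
```

where the geometric series converges because $0<(1+\psi^{2})^{-1}<1$ when $\psi$ is invertible. Applying Theorem~\ref{DCTcor} together with $\QE{\nu}{1}=\nu(X)=1$ then gives the stated sum.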
\section{Quantum random variables with quantum expectation zero}\label{MeanZerosect}
We will shortly prove a characterization theorem for quantum random variables with quantum expectation zero. As a preliminary tool, we need the following straightforward lemma.
\begin{lemma}\label{lem1}
If $z\in\B(\H)_+$, then $\ker(z)=\ker(z^{1/2})$ and $\ran(z)=\ran(z^{1/2})$.
\end{lemma}
\begin{proof}
If $\eta\in\ker(z^{1/2})$, then $z^{1/2}\eta=0$ implying that $z\eta=z^{1/2}z^{1/2}\eta=0$ so $\eta\in\ker(z)$.
Conversely, if $\eta\in\ker(z)$, then $z\eta=0$ so that $0=\langle z\eta,\eta\rangle=\langle z^{1/2}\eta,z^{1/2}\eta\rangle$
implying $z^{1/2}\eta=0$ so $\eta\in\ker(z^{1/2})$. Since $z\in\B(\H)_+$ and $z=z^*$, it follows from the orthogonal decomposition $\H=\ker(z^*)\oplus\ran(z)$ that $\ran(z)=\ran(z^{1/2})$.
\end{proof}
\begin{theorem}\label{meanzerothm}
If $\psi:X\to\B(\H)_+$ is a positive $\nu$-integrable quantum random variable, then the following statements are equivalent.
\begin{enumerate}
\item[(A)] $\QE{\nu}{\psi}=0$.
\item[(B)] $\ran(\psi(x))\perp\ran\left(\displaystyle\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)$ for $\mu$-almost all $x\in X$.
\item[(C)] $\psi(x)^*\displaystyle\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)=0$ for $\mu$-almost all $x\in X$.
\item[(D)] $\left(\psi\boxtimes\displaystyle\frac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)(x)=0$ for $\mu$-almost all $x\in X$.
\item[(E)] $\psi(x)^{1/2}\left(\displaystyle\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)^{1/2}=0$ for $\mu$-almost all $x\in X$.
\end{enumerate}
\end{theorem}
\begin{proof} Throughout the proof, let $z=z(x)$ be given by $\displaystyle z(x)=\psi(x)^{1/2}\left(\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)^{1/2}$, and note that
$\psi(x)=\psi(x)^*$ since $\psi(x) \in \B(\H)_+$ for all $x\in X$.
To show (E)$\iff$(D), note that $z=0$ if and only if $z^*z=0$ and
\begin{equation}\label{proofeqn1}
\left(\psi\boxtimes\displaystyle\frac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)(x)=\left(\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)^{1/2}\psi(x)\left(\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)^{1/2}=z^*z \ge 0.
\end{equation}
To show (A)$\implies$(E)$\implies$(C), suppose that $\QE{\nu}{\psi}=0$, which implies
\begin{equation}\label{eqn2}
\int_X\tr\left(\rho z^*z\right)\d\mu=
\int_X\tr\left(\rho\left(\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)^{1/2}\psi(x)\left(\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)^{1/2}\right)\d\mu=\tr(\rho\QE{\nu}{\psi})=0
\end{equation}
for every $\rho\in S(\H)$. Since $z^*z\ge0$, we deduce from~\eqref{eqn2} that
$\tr\left(\rho z^* z\right)=0$
for $\mu$-almost every $x\in X$ and every $\rho\in S(\H)$.
Choosing $\rho=\frac{1}{d}\,1 \in S(\H)$ implies that $\tr(z^*z)=0$, from which it follows that $z=0$; that is, (E) holds. Multiplying (E) on the left by
$\psi(x)^{1/2}$ and on the right by $\left(\displaystyle\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)^{1/2}$ yields (C).
To show (B)$\iff$(C)$\implies$(D), note that if $\psi$ is any $\B(\H)$ valued (and not just $\B(\H)_+$ valued) quantum random variable, then
$\displaystyle \psi(x)^*\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)=0$
if and only if
$\displaystyle \left\langle\xi,\psi(x)^*\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\eta\right\rangle =0$ for all $\xi$, $\eta\in\H$
if and only if
$ \displaystyle \left\langle\psi(x)\xi,\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\eta\right\rangle =0$ for all $\xi$, $\eta\in\H$
if and only if
$\displaystyle \ran(\psi(x))\perp\ran\left(\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)$.
That is, (B)$\iff$(C). Hence, if (C) holds, then Lemma~\ref{lem1} implies
$\displaystyle \ran(\psi(x))\perp\ran\left(\left(\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)^{1/2}\right)$
and so from the already proved (B)$\iff$(C), we conclude
$\displaystyle \psi(x)^*\left(\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)^{1/2}=0$.
Taking the adjoint of the previous equality and multiplying on the right by
$\displaystyle \left(\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)^{1/2}$
yields (D)
as desired.
To complete the proof, we will show (D)$\implies$(A). If $\psi$ is any $\B(\H)$ valued (and not just $\B(\H)_+$ valued) quantum random variable for which (D) holds,
then since $\QE{\nu}{\psi}$ is the unique operator with
$$\tr(\rho\QE{\nu}{\psi})=\int_X \tr\left(\rho \left(\psi\boxtimes\frac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)\right) \d\mu=0$$
for all $\rho\in S(\H)$, we conclude $\QE{\nu}{\psi}=0$ as required.
\end{proof}
In the event that $\psi$ is a $\B(\H)$ valued quantum random variable, as opposed to a $\B(\H)_+$ valued one, the statements of the previous theorem are no longer all equivalent.
\begin{corollary}\label{meanzerocor}
Let $\psi:X\to\B(\H)$ be a $\nu$-integrable quantum random variable and consider the following statements.
\begin{enumerate}
\item[(A)] $\QE{\nu}{\psi}=0$.
\item[(B)] $\ran(\psi(x))\perp\ran\left(\displaystyle\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)\right)$ for $\mu$-almost all $x\in X$.
\item[(C)] $\psi(x)^*\displaystyle\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)=0$ for $\mu$-almost all $x\in X$.
\item[(D)] $\left(\psi\boxtimes\displaystyle\frac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)(x)=0$ for $\mu$-almost all $x\in X$.
\end{enumerate}
The following diagram describes the relationships between these statements.
$$
(B)\iff(C)\implies(D)\implies(A)
$$
Moreover, no other implications hold in general.
\end{corollary}
\begin{proof}
The fact that the implications
(B)$\iff$(C)$\implies$(D)
and (D)$\implies$(A)
hold for $\mathcal{B}(\H)$ valued quantum random variables was established in the proof of Theorem~\ref{meanzerothm}.
To show that no other implications hold in general, we consider two examples. Let $X=\{x_1,x_2\}$, and consider the quantum probability measures $\nu_1$ and $\nu_2$ defined by
\[
\nu_1(\{x_1\})=
\nu_1(\{x_2\})=
\begin{bmatrix}
1/2&0\\
0&1/2\\
\end{bmatrix}
\quad\textrm{and}\quad
\nu_2(\{x_1\})=
\begin{bmatrix}
1&0\\
0&0\\
\end{bmatrix}, \;\;
\nu_2(\{x_2\})=
\begin{bmatrix}
0&0\\
0&1\\
\end{bmatrix}
\]
as well as the quantum random variables $\psi_1$ and $\psi_2$ defined by
\[
\psi_1(x_1)=
\begin{bmatrix}
1&0\\
0&1\\
\end{bmatrix}, \;\;
\psi_1(x_2)=
\begin{bmatrix}
-1&0\\
0&-1\\
\end{bmatrix}
\quad\textrm{and}\quad
\psi_2(x_1)=
\begin{bmatrix}
0&1\\
1&1\\
\end{bmatrix}, \;\;
\psi_2(x_2)=
\begin{bmatrix}
1&1\\
1&0\\
\end{bmatrix}.
\]
Since $X$ is finite, the principal Radon-Nikod\'ym derivative is easily computed, namely
\[
\frac{\mathrm{d}\nu_i}{\mathrm{d}\mu_i}(x_j)=2\,\frac{\nu_i(\{x_j\})}{\tr(\nu_i(\{x_j\}))}
\]
for $i,j\in\{1,2\}$.
It is now easy to check that $\QE{\nu_1}{\psi_1}=\begin{bmatrix}
0&0\\
0&0\\
\end{bmatrix}
$ although
$$
\psi_1(x_1)^* \frac{\mathrm{d}\nu_1}{\mathrm{d}\mu_1}(x_1)=
\begin{bmatrix}
1&0\\
0&1\\
\end{bmatrix}
\quad\textrm{and}\quad
\psi_1(x_2)^* \frac{\mathrm{d}\nu_1}{\mathrm{d}\mu_1}(x_2)=
\begin{bmatrix}
-1&0\\
0&-1\\
\end{bmatrix}
$$
and
$$
\left(\psi_1\boxtimes\frac{\mathrm{d}\nu_1}{\mathrm{d}\mu_1}\right)(x_1)=
\begin{bmatrix}
1&0\\
0&1\\
\end{bmatrix}
\quad\textrm{and}\quad\left(\psi_1\boxtimes\frac{\mathrm{d}\nu_1}{\mathrm{d}\mu_1}\right)(x_2)=
\begin{bmatrix}
-1&0\\
0&-1\\
\end{bmatrix}.
$$
Hence, in this example, (A) holds but neither (C) nor (D) holds.
Moreover, one can check that
$$
\left(\psi_2\boxtimes\frac{\mathrm{d}\nu_2}{\mathrm{d}\mu_2}\right)(x_1)=
\left(\psi_2\boxtimes\frac{\mathrm{d}\nu_2}{\mathrm{d}\mu_2}\right)(x_2)=
\begin{bmatrix}
0&0\\
0&0\\
\end{bmatrix}
$$
whereas
$$
\psi_2(x_1)^* \frac{\mathrm{d}\nu_2}{\mathrm{d}\mu_2}(x_1)=
\begin{bmatrix}
0&0\\
2&0\\
\end{bmatrix}
\quad\textrm{and}\quad
\psi_2(x_2)^* \frac{\mathrm{d}\nu_2}{\mathrm{d}\mu_2}(x_2)=
\begin{bmatrix}
0&2\\
0&0\\
\end{bmatrix}
$$
providing an example for which (D) holds, but (C) does not hold.
\end{proof}
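The $2\times2$ computations in the preceding proof are mechanical and can be double-checked numerically. In the sketch below (our own illustrative code), the principal Radon-Nikod\'ym derivative on the two-point set is computed as $2\,\nu(\{x_j\})/\tr(\nu(\{x_j\}))$, and the expectations and $\boxtimes$-products from the proof are reproduced.

```python
import numpy as np

def psd_sqrt(a):
    # Square root of a positive semidefinite Hermitian matrix.
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def rn(nu, j):
    # Principal Radon-Nikodym derivative at x_j on a 2-point set.
    return 2.0 * nu[j] / np.trace(nu[j])

def boxtimes(psi, nu, j):
    r = psd_sqrt(rn(nu, j))
    return r @ psi[j] @ r

nu1 = [np.eye(2) / 2, np.eye(2) / 2]
nu2 = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
psi1 = [np.eye(2), -np.eye(2)]
psi2 = [np.array([[0.0, 1.0], [1.0, 1.0]]),
        np.array([[1.0, 1.0], [1.0, 0.0]])]

# (A) holds for (psi1, nu1): the quantum expectation vanishes...
E1 = sum(psd_sqrt(nu1[j]) @ psi1[j] @ psd_sqrt(nu1[j]) for j in range(2))
# ...but (D) fails: psi1 boxtimes (d nu1 / d mu1) is the identity (resp. minus
# the identity) at the two points.
box1 = [boxtimes(psi1, nu1, j) for j in range(2)]

# (D) holds for (psi2, nu2)...
box2 = [boxtimes(psi2, nu2, j) for j in range(2)]
# ...but (C) fails: psi2(x_1)^* (d nu2 / d mu2)(x_1) is nonzero.
C2 = psi2[0].conj().T @ rn(nu2, 0)
```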
\begin{corollary}
If $\psi:X\to\B(\H)$ is a $\nu$-integrable quantum random variable and
$\psi(x)\displaystyle\frac{\mathrm{d}\nu}{\mathrm{d}\mu}(x)=0$ for $\mu$-almost all $x\in X$, then
$\QE{\nu}{\psi}=0$.
\end{corollary}
\begin{proof}
It follows from the implication (C)$\implies$(A) of Corollary~\ref{meanzerocor} that $\QE{\nu}{\psi^*}=0$ and so $\QE{\nu}{\psi} =\QE{\nu}{\psi^{**}} =\QE{\nu}{\psi^*}^*=0^*=0$ as required.
\end{proof}
\section{A quantum martingale convergence theorem}\label{MCTsect}
In this section we establish a quantum martingale convergence theorem for quantum martingales obtained by conditioning on a fixed quantum random variable. Recall that a stochastic process $\{M_j\}_{j=0}^\infty$ defined on a filtered probability space is a martingale with respect to the filtration $\{\mathcal{F}_j\}_{j=0}^\infty$ if (i) $M_j$ is $\mathcal{F}_j$-measurable, (ii) $\mathbb{E}[\,|M_j|\,]<\infty$, and (iii) $M_j=\mathbb{E}[M_{j+1}|\mathcal{F}_j]$ for all $j$. The following version of the martingale convergence theorem is suitable for our purposes; see Theorem~3.7.3 of~\cite{bob} for a proof.
\begin{theorem}[Martingale Convergence Theorem]\label{classicMCT}
If $\{M_j\}_{j=0}^\infty$ is a martingale with respect to the filtration $\{\mathcal{F}_j\}_{j=0}^\infty$ and there exists $C>0$ such that $\mathbb{E}[\,|M_j|\,]<C$ for all $j$, then there exists a random variable $M_\infty$ such that $\mathbb{E}[\,|M_\infty|\,]<\infty$ and $M_j$ converges to $M_\infty$ almost surely.
\end{theorem}
When the martingale is obtained by conditioning on a fixed random variable, the martingale convergence theorem takes the following form; see Corollary~3.6.9 of~\cite{bob}.
\begin{corollary}
If $Y$ is a random variable on the filtered probability space $(\Omega,\mathcal{F}, \{\mathcal{F}_j\}_{j=1}^\infty, \Pr)$ that satisfies $\mathbb{E}[\,|Y|\,] <\infty$, then the martingale $M_j=\mathbb{E}[Y|\mathcal{F}_j]$ converges both almost surely and in $L^1(\Omega,\Pr)$ to $M_\infty=\mathbb{E}[Y|\mathcal{F}_\infty]$ where $\mathcal{F}_\infty=\sigma\left(\bigcup_{j=1}^\infty\mathcal{F}_j\right)$. If either (i) $Y$ is $\mathcal{F}_\infty$-measurable, or (ii) $\mathcal{F}_\infty=\mathcal{F}$, then $M_\infty=Y$.
\end{corollary}
We now turn our attention to quantum conditional expectation. The following result summarizes the relevant facts from~\cite{farenick--kozdron2012} that we need; see, in particular, the proof of Theorem~III.1.
\begin{theorem}\label{condexp}
Suppose that $(X, \borel{X},\nu)$ is a quantum probability space, and that $\psi:X \to \mathcal{B}(\H)_+$ is a $\nu$-integrable quantum random variable with $\QE{\nu}{\psi} \neq 0$.
If $\mathcal{F}(X)$ is a sub-$\sigma$-algebra of $\borel{X}$, then there exists a function $\varphi:X \to \mathcal{B}(\H)$ such that
\begin{enumerate}
\item[(a)] $\varphi$ is $\mathcal{F}(X)$-measurable,
\item[(b)] $\varphi$ is $\nu$-integrable, and
\item[(c)] $\QE{\nu}{\psi \ch{E}} = \QE{\nu}{\varphi \ch{E}}$
for every $E \in \mathcal{F}(X)$.
\end{enumerate}
\end{theorem}
We call $\varphi$ a version of quantum conditional expectation of $\psi$ given $\mathcal{F}(X)$ relative to $\nu$ and write
$\varphi = \QCE{\nu}{\psi}{\mathcal{F}(X)}$. Moreover, if $\tilde\varphi$ is any other $\nu$-integrable $\mathcal{F}(X)$-measurable function satisfying
$\QE{\nu}{\psi \ch{E}} = \QE{\nu}{\tilde \varphi \ch{E}}$ for every $E \in \mathcal{F}(X)$, then
$\nu(\{x \in X : \varphi(x) \neq \tilde\varphi(x)\}) = 0$. Thus, instead of saying ``$\varphi = \QCE{\nu}{\psi}{\mathcal{F}(X)}$ $\nu$-almost surely'' we identify different versions and say that $\QCE{\nu}{\psi}{\mathcal{F}(X)}$ is \emph{the} quantum conditional expectation of $\psi$ given $\mathcal{F}(X)$ relative to $\nu$.
In fact, if $\nu'=\nu|_{\mathcal{F}(X)}$ is the restriction of $\nu$ to $\mathcal{F}(X)$, and
$$\tilde \nu(E) = \int_E \psi \d \nu',$$
for $E \in \mathcal{F}(X)$, then $\displaystyle \varphi = \QCE{\nu}{\psi}{\mathcal{F}(X)}=\frac{\mathrm{d} \tilde \nu}{\mathrm{d} \nu'}$.
Clearly, $\varphi(x)\in\B(\H)_+$ for $\nu'$-almost all $x\in X$. Since sets of $\nu'$-measure zero also have $\nu$-measure zero,
setting
\[
\varphi(x)= \QCE{\nu}{\psi}{\mathcal{F}(X)}(x)=
\begin{cases}
\displaystyle\frac{\mathrm{d}\tilde{\nu}}{\mathrm{d}\nu'}(x), &\text{for } \displaystyle\frac{\mathrm{d}\tilde{\nu}}{\mathrm{d}\nu'}(x)\in\B(\H)_+,\\
0,&\textrm{otherwise},\\
\end{cases}
\]
implies $\varphi(x)\in\B(\H)_+$ for \emph{all} $x\in X$.
We are now able to prove the important tower property for quantum conditional expectation. Note that this was not considered in~\cite{farenick--kozdron2012}.
\begin{theorem}\label{tower}
If $\psi:X\to\mathcal{B}(\H)_+$ is a $\nu$-integrable quantum random variable with $\QE{\nu}{\psi}\neq0$, and $\mathcal{F}(X)$, $\mathcal{G}(X)$ are sub $\sigma$-algebras of $\borel{X}$ such that $\mathcal{F}(X)\subseteq\mathcal{G}(X)$, then
\begin{equation}\label{towerthmeq}
\QCE{\nu}{\QCE{\nu}{\psi}{\mathcal{F}(X)}}{\mathcal{G}(X)}=\QCE{\nu}{\psi}{\mathcal{F}(X)}=\QCE{\nu}{\QCE{\nu}{\psi}{\mathcal{G}(X)}}{\mathcal{F}(X)}.
\end{equation}
\end{theorem}
\begin{proof}
Define $\varphi_f=\QCE{\nu}{\psi}{\mathcal{F}(X)}$ and $\varphi_g=\QCE{\nu}{\psi}{\mathcal{G}(X)}$. To prove the theorem, we will verify that
$\QCE{\nu}{\varphi_f}{\mathcal{G}(X)}=\varphi_f=\QCE{\nu}{\varphi_g}{\mathcal{F}(X)}$.
The first equality in~\eqref{towerthmeq} follows immediately from the fact that $\varphi_f$ is $\mathcal{G}(X)$-measurable and $\mathcal{F}(X)\subseteq\mathcal{G}(X)$.
As for the second equality in~\eqref{towerthmeq}, observe that if $F\in\mathcal{F}(X)$ and $G\in\mathcal{G}(X)$, then $\QE{\nu}{\varphi_f\chi_{F}}=\QE{\nu}{\psi\chi_{F}}$ and $\QE{\nu}{\varphi_{g}\chi_{G}}=\QE{\nu}{\psi\chi_{G}}$, implying
$\QE{\nu}{\varphi_g\chi_{F}}=\QE{\nu}{\psi\chi_{F}}$.
This, in turn, implies that
$\QE{\nu}{\varphi_f\chi_{F}}=\QE{\nu}{\varphi_{g}\chi_{F}}$ for every $F\in\mathcal{F}(X)$. Since $\varphi_f$ is $\mathcal{F}(X)$-measurable, the uniqueness of quantum conditional expectation yields $\QCE{\nu}{\varphi_g}{\mathcal{F}(X)}=\varphi_f$ as required.
\end{proof}
In analogy with the classical definition, we now state the definition of a quantum martingale.
\begin{defn}
Let $(X,\borel{X},\nu)$ be a quantum probability space. A sequence of quantum random variables $\{\varphi_j\}_{j=0}^\infty$ is called a quantum martingale with respect to the filtration $\{\mathcal{F}_j(X)\}_{j=0}^\infty$ if
\begin{enumerate}
\item[(a)] $\varphi_j$ is $\mathcal{F}_j(X)$-measurable for all $j$,
\item[(b)] $\varphi_j$ is $\nu$-integrable for all $j$, and
\item[(c)] $\QCE{\nu}{\varphi_{j+1}}{\mathcal{F}_j(X)}=\varphi_j$ for all $j$.
\end{enumerate}
\end{defn}
It is also important to know that a quantum martingale is obtained by conditioning on a fixed quantum random variable.
\begin{theorem}
If $\psi:X\to\B(\H)_+$ is a $\nu$-integrable quantum random variable and $\QE{\nu}{\psi}\neq0$, then the sequence of $\mathcal{F}_j(X)$-measurable $\nu$-integrable quantum random variables $\{\varphi_j\}_{j=0}^\infty$ where $\varphi_j=\QCE{\nu}{\psi}{\mathcal{F}_j(X)}$ is a quantum martingale.
\end{theorem}
\begin{proof}
The fact that $\varphi_j$ is $\mathcal{F}_j(X)$-measurable follows immediately from the definition of conditional expectation.
The fact that $\varphi_j$ is $\nu$-integrable follows since $\psi$ is $\nu$-integrable and $\QE{\nu}{\varphi_j}=\QE{\nu}{\QCE{\nu}{\psi}{\mathcal{F}_j(X)}}=\QE{\nu}{\psi}$; see Proposition~4.3 of~\cite{farenick--kozdron2012} for a proof of this fact. We now observe that
$\QCE{\nu}{\varphi_{j+1}}{\mathcal{F}_j(X)}=\QCE{\nu}{\QCE{\nu}{\psi}{\mathcal{F}_{j+1}(X)}}{\mathcal{F}_j(X)}$
and so from the tower property, Theorem~\ref{tower}, we have
$\QCE{\nu}{\QCE{\nu}{\psi}{\mathcal{F}_{j+1}(X)}}{\mathcal{F}_j(X)}=\QCE{\nu}{\psi}{\mathcal{F}_j(X)}=\varphi_j$
as required.
\end{proof}
\begin{theorem}[Continuity of Quantum Conditional Expectation]
Let $(X, \borel{X},\nu)$ be a quantum probability space and suppose that $\mathcal{F}(X)\subseteq\borel{X}$ is a sub $\sigma$-algebra. Suppose further that
$\{\psi_n\}_{n=0}^\infty$ is a sequence of $\nu$-integrable quantum random variables with $\psi_n:X\to\mathcal{B}(\H)_+$ and $\QE{\nu}{\psi_n}\neq 0$ for all $n$. If $\psi_n$ converges ultraweakly $\mu$-almost surely to $\psi$ and the sequence satisfies the domination hypothesis of Theorem~\ref{DCT},
then $\QCE{\nu}{\psi_n}{\mathcal{F}(X)}$ converges ultraweakly $\mu$-almost surely to $\QCE{\nu}{\psi}{\mathcal{F}(X)}$.
\end{theorem}
\begin{proof}
For any $F\in\mathcal{F}(X)$, we know $\psi_n\ch{F}$ converges ultraweakly $\mu$-almost surely to $\psi\ch{F}$.
Theorem~\ref{DCT} says that $\QE{\nu}{\psi_n\ch{F}}\to\QE{\nu}{\psi\ch{F}}$ ultraweakly, implying that $\QCE{\nu}{\psi_n}{\mathcal{F}(X)}$ converges ultraweakly $\mu$-almost surely
to $\QCE{\nu}{\psi}{\mathcal{F}(X)}$ as required.
\end{proof}
Our next preliminary result relates the quantum conditional expectation $\QCE{\nu}{\psi}{\mathcal{F}(X)}$ with the family of classical conditional expectations $\QCE{\mu}{\psi_\rho}{\mathcal{F}(X)}$ for $\rho \in S(\H)$.
\begin{proposition}
If $\psi:X\to\mathcal{B}(\H)_+$ is a $\nu$-integrable quantum random variable with $\QE{\nu}{\psi}\neq0$, and if $\varphi$ is an $\mathcal{F}(X)$-measurable $\nu$-integrable quantum random variable, then the following statements are equivalent.
\begin{enumerate}
\item[(A)] $\nu(\{x\in X \,|\, \varphi(x)=\QCE{\nu}{\psi}{\mathcal{F}(X)}(x)\})=1$.
\item[(B)] $\mu(\{x\in X \,|\, \varphi_\rho(x)=\QCE{\mu}{\psi_\rho}{\mathcal{F}(X)}(x) \;\forall\rho\in S(\H)\})=1$.
\end{enumerate}
\end{proposition}
\begin{proof}
Statement (A) holds if and only if $\varphi$ is a version of $\QCE{\nu}{\psi}{\mathcal{F}(X)}$, that is, if and only if $\QE{\nu}{\varphi\ch{E}}=\QE{\nu}{\psi\ch{E}}$
for every $E\in\mathcal{F}(X)$. This holds if and only if for all $\rho \in S(\H)$ we have
$\tr(\rho\QE{\nu}{\varphi\ch{E}})=\tr(\rho\QE{\nu}{\psi\ch{E}})$,
which in turn holds if and only if $\QE{\mu}{(\varphi\ch{E})_\rho}=\QE{\mu}{(\psi\ch{E})_\rho}$ for all $\rho\in S(\H)$. Since
$(\varphi\ch{E})_\rho=\varphi_\rho\ch{E}$, this is equivalent to $\QE{\mu}{\varphi_\rho\ch{E}}=\QE{\mu}{\psi_\rho\ch{E}}$ for every $E\in\mathcal{F}(X)$ and every $\rho\in S(\H)$, which is precisely the statement that
$\varphi_\rho=\QCE{\mu}{\psi_\rho}{\mathcal{F}(X)}$ $\mu$-almost surely for all $\rho\in S(\H)$; that is, (B) holds.
\end{proof}
We are now in a position to prove the main result of this paper, namely a quantum martingale convergence theorem for the quantum martingale $\varphi_j=\QCE{\nu}{\psi}{\mathcal{F}_j(X)}$.
Although we will prove that the sequence $\{\varphi_j\}_{j=0}^\infty$ has a unique limit, in contrast to the classical situation, the value of the limiting random variable $\varphi_\infty$ cannot be determined in general. In fact, all that can be said is that $\varphi_\infty$ and $\QCE{\nu}{\psi}{\mathcal{F}_\infty(X)}$ differ by a quantum random variable $\Phi$ satisfying $\Phi_\rho=0$ for all $\rho\in S(\H)$.
\begin{theorem}[Quantum Martingale Convergence Theorem]\label{MCT}
Let $(X,\borel{X},\nu)$ be a quantum probability space with filtration $\{\mathcal{F}_j(X)\}_{j=0}^\infty$, and let $\psi:X\to\B(\H)_+$ be a $\nu$-integrable quantum random variable with $\QE{\nu}{\psi}\neq 0$. Consider the quantum martingale $\varphi_j=\QCE{\nu}{\psi}{\mathcal{F}_j(X)}$. There exists a $\nu$-integrable quantum random variable $\varphi_\infty$ such that
\begin{enumerate}
\item[(i)] $\varphi_j$ converges ultraweakly $\mu$-almost surely to $\varphi_\infty$,
\item[(ii)] $\varphi_\infty$ is $\mathcal{F}_\infty(X)=\sigma\left(\bigcup_{j=0}^\infty \mathcal{F}_j(X)\right)$-measurable, and
\item[(iii)] $\varphi_\infty\in\{\QCE{\nu}{\psi}{\mathcal{F}_\infty(X)}+\Phi\ |\ \Phi_\rho=0\ \forall\rho\in S(\H)\}$.
\end{enumerate}
Furthermore, if either
\begin{enumerate}
\item[(iv)] $\mathcal{F}_\infty(X)=\borel{X}$, or
\item[(v)] $\psi$ is $\mathcal{F}_\infty(X)$-measurable,
\end{enumerate}
then $\varphi_\infty\in\{\psi+\Phi\ |\ \Phi_\rho=0\ \forall\rho\in S(\H)\}$.
\end{theorem}
\begin{proof}
For every $\rho \in S(\H)$, since $\varphi_j$ is $\nu$-integrable it follows that $\varphi_{j_\rho}$ is $\mu$-integrable and satisfies
\[
\QE{\mu}{\left|\varphi_{j_\rho}\right|}=\QE{\mu}{\left|\QCE{\mu}{\psi_\rho}{\mathcal{F}_j(X)}\right|}\leq\QE{\mu}{\left|\psi_\rho\right|}
\]
for all $j$. By the martingale convergence theorem (Theorem~\ref{classicMCT}) and its corollary, for every $\rho \in S(\H)$ there exists a $\mu$-integrable $\tilde{\varphi}_{\infty_\rho}$ such that
\begin{enumerate}
\item[(a)] $\varphi_{j_\rho}$ converges to $\tilde{\varphi}_{\infty_\rho}$ almost surely,
\item[(b)] $\tilde{\varphi}_{\infty_\rho}$ is $\mathcal{F}_\infty(X)=\sigma\left(\bigcup_{j=0}^\infty\mathcal{F}_j(X)\right)$-measurable, and
\item[(c)] $\tilde{\varphi}_{\infty_\rho}=\QCE{\mu}{\psi_\rho}{\mathcal{F}_\infty(X)}$.
\end{enumerate}
But this implies that $\varphi_j$ converges ultraweakly $\mu$-almost surely to some $\varphi_\infty$ with $\varphi_{\infty_\rho}=\tilde{\varphi}_{\infty_\rho}$ for all $\rho\in S(\H)$. By the continuity of quantum expectation, it follows that $\varphi_\infty$ is $\nu$-integrable.
Let $\tilde{\varphi}=\QCE{\nu}{\psi}{\mathcal{F}_\infty(X)}$ so that
\[
\tilde{\varphi}_{\rho}=\QCE{\mu}{\psi_\rho}{\mathcal{F}_\infty(X)}=\tilde{\varphi}_{\infty_\rho} =\varphi_{\infty_\rho}.
\]
However, if $\Phi$ is any $\nu$-integrable quantum random variable with $\Phi_\rho=0$ for all $\rho\in S(\H)$, then
$\left(\tilde{\varphi}+\Phi\right)_\rho=\tilde{\varphi}_{\rho}+\Phi_\rho=\tilde{\varphi}_{\rho}=\varphi_{\infty_\rho}$
implying
\[
\varphi_\infty\in\{\QCE{\nu}{\psi}{\mathcal{F}_\infty(X)}+\Phi\ |\ \Phi_\rho=0\ \forall\rho\in S(\H)\}
\]
as claimed. Finally, if either $\mathcal{F}_\infty(X)=\borel{X}$ or $\psi$ is $\mathcal{F}_\infty(X)$-measurable, then $\QCE{\nu}{\psi}{\mathcal{F}_\infty(X)}=\psi$ so that $\varphi_\infty\in\{\psi+\Phi\ |\ \Phi_\rho=0\ \forall\rho\in S(\H)\}$ as required.
\end{proof}
We will now study the set of possible limits from our quantum martingale convergence theorem.
\begin{theorem}\label{MCTlim}
Let $(X,\borel{X},\nu)$ be a quantum probability space and let $\psi:X\to\B(\H)_+$ be a $\nu$-integrable quantum random variable with $\QE{\nu}{\psi}\neq0$. Define the set
\[
\Gamma_{\nu,\psi}=\{\Psi|\Psi=\QCE{\nu}{\psi}{\mathcal{F}_\infty(X)}+\Phi\textrm{ with }\Phi_\rho=0\ \forall\rho\in S(\H)\}.
\]
If $\Psi_1\in\Gamma_{\nu,\psi}$ then $\Psi_2\in\Gamma_{\nu,\psi}$ if and only if
$\displaystyle (\Psi_2-\Psi_1)\boxtimes\frac{\mathrm{d}\nu}{\mathrm{d}\mu}=0$.
\end{theorem}
\begin{proof}
Let $\Psi_1$, $\Psi_2\in\Gamma_{\nu,\psi}$ so that $\Psi_{1_\rho}=\Psi_{2_\rho}$ for all $\rho\in S(\H)$. Therefore,
\[
0=\tr\left(\rho\left(\Psi_2\boxtimes\frac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)\right)-\tr\left(\rho\left(\Psi_1\boxtimes\frac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)\right) =\tr\left(\rho\left((\Psi_2-\Psi_1)\boxtimes\frac{\mathrm{d}\nu}{\mathrm{d}\mu}\right)\right).
\]
Since this equality holds for all $\rho\in S(\H)$, it follows that $\displaystyle (\Psi_2-\Psi_1)\boxtimes\frac{\mathrm{d}\nu}{\mathrm{d}\mu}=0$
as required. Running the same argument in reverse establishes the converse.
\end{proof}
We can now use our results from Section~\ref{MeanZerosect} to study $\Gamma_{\nu,\psi}$. We know that if $\Phi$ is a quantum random variable then $\Phi_\rho=0$ implies $\QE{\nu}{\Phi}=0$ whereas the converse is not necessarily true.
\begin{corollary}
If $\Sigma_{\nu,\psi}=\{\Psi|\Psi=\QCE{\nu}{\psi}{\mathcal{F}_\infty(X)}+\Phi,\ \QE{\nu}{\Phi}=0\}$, then $\Gamma_{\nu,\psi}\subseteq\Sigma_{\nu,\psi}$.
\end{corollary}
\begin{proof}
Suppose that $\Psi\in\Gamma_{\nu,\psi}$. Then $\Psi=\QCE{\nu}{\psi}{\mathcal{F}_\infty(X)}+\Phi$ where $\Phi_\rho=0$ for all $\rho\in S(\H)$. By the remark above, $\QE{\nu}{\Phi}=0$, so $\Psi\in\Sigma_{\nu,\psi}$. Hence $\Gamma_{\nu,\psi}\subseteq\Sigma_{\nu,\psi}$ as required.
\end{proof}
\section*{Acknowledgements}
Much of this research was done by the first author in his master's thesis~\cite{kylerthesis} under the supervision of the second author. The work of the second author is supported, in part, by the Natural Sciences and Engineering Research Council of Canada. The second author also thanks the Isaac Newton Institute for Mathematical Sciences, Cambridge, for its hospitality during the Random Geometry programme in Spring 2015 where the final writing of this paper was done. Finally, special thanks are due to both Doug Farenick and Sarah Plosker for many valuable discussions about this, and related, material.
Q: Enums are giving me an unexplained error

I am having this strange error which I don't know how to resolve. Here is my code:
public static enum listType {
OTHER("OTHER", 1,
PROFILE("PROFILE", 2),
PROFILE_LOCAL("PROFILE_LOCAL", 3),
PROFILE_SHARED("PROFILE_SHARED", 4),
PROFILE_WIDE("PROFILE_WIDE", 0);
private String title;
private int number;
private listType(String title, int number) {
this.title = title;
this.number = number;
}
Now I am getting the error between ) and ; at ("PROJECTOR", 0); the error does say "insert ) to complete body". However, in other enums I have, it all works fine. I can't simply insert ) there because it won't compile. I have tried cleaning and rebuilding the project, but still nothing; any help will be appreciated.
A: The missing ) is reported on the line starting with PROFILE_WIDE because the statement ends there with a semicolon. That is logically the last position where your list of enum constants could have ended: as a single member OTHER( ...all the other lines... );
Adding a ) at that position did not work because, at that point, the syntax of the enclosing enum is not correct. What you got was an error reported by a preliminary syntax check (matching parentheses inside a single ";"-terminated expression), and "fixing" it the way this check suggested leads to a higher-level syntax error.
As you found, adding the ) on the OTHER(...) line makes everything compile correctly on both syntactic levels.
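For reference, here is a sketch of the corrected enum. The wrapper class Demo, the final modifiers and the getters are illustrative additions not present in the question; the actual fix is just the closing parenthesis on the OTHER line:

```java
// Sketch of the corrected enum from the question. The wrapper class
// "Demo" and the getters are illustrative additions; the real fix is
// the single closing parenthesis on the OTHER constant's line.
public class Demo {
    public static enum listType {
        OTHER("OTHER", 1),               // was: OTHER("OTHER", 1,  <- missing ")"
        PROFILE("PROFILE", 2),
        PROFILE_LOCAL("PROFILE_LOCAL", 3),
        PROFILE_SHARED("PROFILE_SHARED", 4),
        PROFILE_WIDE("PROFILE_WIDE", 0);

        private final String title;
        private final int number;

        private listType(String title, int number) {
            this.title = title;
            this.number = number;
        }

        public String getTitle() { return title; }
        public int getNumber()   { return number; }
    }

    public static void main(String[] args) {
        // Every constant now carries its own (title, number) pair.
        System.out.println(listType.values().length);   // prints 5
        System.out.println(listType.OTHER.getNumber()); // prints 1
    }
}
```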
\section{Introduction. Statement of results.}\label{definition}
In this paper, we deal with the Hausdorff dimension and the harmonic measure of a certain type of Cantor sets $X$ in the plane.
Recall the definition of the Hausdorff dimension of a (probability) Borel measure $\mu$:
$$\dim_H(\mu)=\inf_{Z:\mu(Z)=1}\dim_H(Z)$$
where infimum is taken over all Borel subsets $Z$ with $\mu(Z)=1$.
Let $\omega$ be the harmonic measure on $\hat{\mathbb{C}}\setminus X$ evaluated at $\infty$. By celebrated results of N. Makarov \cite{ma} and of P. Jones and T. Wolff \cite{JV}, the Hausdorff dimension of $\omega$ is not larger than one.
On the other hand, it is clear that the Hausdorff dimension of $\omega$ is at most $\dim_H(X)$. Obviously, if $\dim_H(X)>1$ then $\dim_H(\omega)<\dim_H(X)$.
It has been observed, for several self-similar, self-conformal sets, or, more generally, conformal repellers, that $\dim_H(\omega)<\dim_H(X)$ (see, e.g. \cite{ba1}, \cite{Ca}, \cite{MV}, \cite{vol1}, \cite{vol2}, \cite{zd1}, \cite{zd3}, \cite{uz}).
Nevertheless, the intriguing question about the inequality of dimensions for an arbitrary self-conformal Cantor repeller remains open.
Let us also recall that in $\R^d$, $d\ge3$, a general result of Bourgain \cite{bou} states that for all domains $\Omega$, the dimension of harmonic measure is bounded above by $d-\epsilon(d)$, where $\epsilon(d)$ is a positive constant depending only on $d$, whose exact value remains unknown.
All the proofs of the strict inequality $\dim_H(\omega)<\dim_H(X)$ for conformal repellers rely on the ergodic theory tools: one constructs an invariant measure equivalent to the harmonic measure and its ergodic properties play a crucial role in the arguments (see also \cite{LV}).
On the other hand, the inequality $\dim_H(\omega)<\dim_H(X)$ is not true for more general Cantor sets, even after assuming a strict regularity of the construction (\cite{ba1}).
In this paper we prove the inequality $\dim_H(\omega)<\dim_H(X)$ for a class of non-homogeneous Cantor sets. In this case there is no invariant ergodic measure equivalent to harmonic measure and hence previously mentioned tools are inapplicable. This has also been the case of \cite{ba1}, where an analogous result was proved for a class of non-homogeneous 4-corner "translation invariant" Cantor sets. That proof made use of special symmetries of the set.
In the present paper, using an entirely different approach, we prove a general result. In fact, the results of \cite{ba1} are a special case of our Theorem A.
More precisely, we consider the following class of Cantor sets in the plane (even though proofs can be easily generalized to higher dimensions).
Let $Q$ be a Jordan domain in $\mathbb{C}$. Let $M>0$, $0<\underline a < \overline a<1$ be fixed.
We fix a positive integer $N>1$.
\begin{defn}\label{df1}
Let $\mathcal{Q}=(Q_1,\dots Q_N)$ be a family of Jordan domains such that each $Q_i$ is a preimage of $Q$ under some (expanding) similitude $(a_i)^{-1}z+b_i$.
We call a family $\mathcal{Q}=(Q_1,\dots Q_N)$ \emph{admissible} if the following holds:
\begin{enumerate}
\item{} $\underline a\le|a_i|\le \overline a$
\item{} $\rm{cl} Q_i\subset Q$
\item{} there exists an annulus $A\subset Q$ with $mod(A)>M$ and separating $\partial Q$ from $\bigcup_jQ_j$ (i.e.\ $\partial Q$ and $\bigcup_jQ_j$ are in different components of $\mathbb{C}\setminus A$).
\end{enumerate}
\end{defn}
\begin{defn} Note that, in this way, we have introduced a piecewise linear map $f$ defined on the union of admissible discs: $f: \bigcup_{Q_i\in \mathcal{Q}} Q_i\to Q$ by the formula
$$f(z)=\sum_{i=1}^N(a_{i}^{-1}z+b_{i})\1_{Q_{i}},$$ where $a_{i}^{-1}Q_{i}+b_{i}=Q$.
If $\mathcal{Q}$ satisfies the conditions in Definition~\ref{df1}, then we call the map $f$ admissible.
\end{defn}
\begin{defn} A set $X_0\subset \mathbb{C}$ is called admissible if
$$X_0=\bigcap_{n=1}^\infty \left (f_n\circ f_{n-1}\circ\dots \circ f_1\circ f_0\right )^{-1}(Q)$$
for some sequence of admissible maps $f_k$:
$$f_k(z)=\sum_{i=1}^N(a_{k,i}^{-1}z+b_{k,i})\1_{Q_{k,i}},$$ where $a_{k,i}^{-1}Q_{k,i}+b_{k,i}=Q$.
So, the map $f_k$ is defined on the union of the domains $\{Q_{k,i}\}_{i=1}^N$, and $f_k\left(Q_{k,i}\right)=Q$, for all $i=1,...,N$.
\end{defn}
\begin{rem}
Note that $\left (f_n\circ f_{n-1}\circ\dots \circ f_0\right )^{-1}(Q)$ is a descending family of sets. Moreover, since $f^{-1}\left({\rm cl}Q\right)\subset Q$ for every admissible map, we have
$$X_0=\bigcap_{n=1}^\infty \left (f_n\circ f_{n-1}\circ\dots \circ f_0\right )^{-1}(\rm{cl}Q),$$
thus $X_0$ is a compact set, actually a Cantor set. The latter follows from item $(1)$ in the definition of an admissible family (the expanding property).
\end{rem}
In the present paper we prove the following
\begin{thmA}\label{main}
Let $X$ be an admissible Cantor set. Let $\omega$ be the harmonic measure on $X$. Then
$$\dim_H(\omega)<\dim_H(X).$$
\end{thmA}
This is the main result of this paper. The idea is to create an alternative between two situations, the one implying the result (section \ref{Bourgain}) and the other being impossible (as we prove in sections \ref{alternative} and \ref{volberg}).
In the first situation we make use of a tool due to Bourgain \cite{bou}.
In the second situation we refer to some ideas due to Volberg \cite{vol1}.
Note also that we can find a uniform strictly positive lower bound of $\dim X-\dim\omega$ that only depends on $\underline a$, $M$ and $N$ as will be pointed out in section \ref{Comments}.
Moreover, we have the result of independent interest:
\begin{thmB}\label{finitemeasureformulation}
Let $(f_k)(z)=\sum_{i=1}^N(a_{k,i}^{-1}z+b_{k,i})\1_{Q_{k,i}}$ be a sequence of admissible maps and let
$X=X_0$ be
the associated admissible Cantor set. There exist a sequence of admissible functions $(\tilde f_k)$,
$(\tilde f_k)(z)=\sum_{i=1}^N(\tilde a_{k,i}^{-1}z+\tilde b_{k,i})\1_{\tilde Q_{k,i}}$
such that
\begin{enumerate}
\item $\lim_{k\to\infty}\max_i(|\tilde a_{k,i}-a_{k,i}|+|b_{k,i}-\tilde b_{k,i}|)=0$
\item the associated Cantor set $\tilde X$ is admissible and $\dim_{\mathcal H}(\tilde X)=\dim_{\mathcal H}(X)$
\item $0<H_{\dim_{\mathcal H}(\tilde X)}(\tilde X)<\infty$.
\item If $\omega$ and $\tilde \omega$ are the harmonic measures of $X$ and $\tilde X$ respectively, then $\dim\omega=\dim{\tilde\omega}$.
\end{enumerate}
\end{thmB}
The proofs of items (1), (2) and (3) of this theorem are carried out in section \ref{HHM}. Item (4) follows from results of \cite{Ba2} and \cite{BaHa}.
The paper is organized in 11 sections. Section 2 contains some well known facts and introduces notation. Some basic remarks on Hausdorff dimension of the Cantor sets considered here and on conformal measures can be found in sections 3 and 4. Adapted tools from potential theory are presented in section 6 and in section 7 we apply all previous results to study limits of sequences of Cantor sets.
The proof of the main theorem is carried out in sections 8,9,10.
Section 8 provides a sufficient condition to have $\dim_HX>\dim\omega$. In section 9, we study the alternative case, when the condition of section 8 fails. Using results of section 7 we deduce that if the sufficient condition fails there is a set where harmonic and geometric measure coincide. Then, in section 10 we prove that this last claim cannot hold.
Finally, in section 11, we show that the assumptions of the main theorem are somehow optimal: we construct a Cantor set $X$, slightly different from the ones studied here, for which $\dim_HX=\dim\omega$.
\section{Definitions and basic facts}
In this Section we present the notation and some introductory remarks.
\
\begin{rem}\label{commonharnack}
Using the Harnack inequality and the condition $(3)$ in definition~\ref{df1} we conclude that there exists a universal constant $C$ (depending only on $M$) with the following property:
Let $\mathcal{Q}=(Q_1,\dots Q_N)$ be an arbitrary admissible family of domains. Then there exists a smooth Jordan curve $\gamma\subset Q\setminus \bigcup_jQ_j$ (depending on the family of domains), and separating $\partial Q$ from $\bigcup_jQ_j$ such that, for every positive harmonic function $\phi:Q\setminus \bigcup Q_j\to \mathbb{R}$,
\begin{equation}\label{harn}
\frac{\sup_\gamma\phi}{\inf_\gamma\phi}<C
\end{equation}
\end{rem}
\begin{notat}
Note that $f_0$ maps $X_0$ onto the Cantor set $X_1:=\bigcap_{n=1}^\infty \left (f_n\circ f_{n-1}\circ\dots \circ f_1\right )^{-1}(Q)$, and, generally, denoting
$$X_k=\bigcap_{n=k}^\infty \left (f_n\circ f_{n-1}\circ\dots \circ f_{k+1}\circ f_{k}\right )^{-1}(Q)$$
we have
\begin{equation}\label{seqk}
X_0\stackrel{f_0}{\longrightarrow} X_1 \stackrel{f_1}{\longrightarrow} X_2\stackrel{f_2}{\longrightarrow}\dots X_k\stackrel{f_k}{\longrightarrow} X_{k+1}\dots
\end{equation}
We shall use the notation $f^k$ for the composition $f_{k-1}\circ f_{k-2}\circ\dots\circ f_1\circ f_0$.
\end{notat}
Let $x\in X_{k+1}$. Then, for every $i=1,\dots N$ there exists a unique point $y_{k,i}\in Q_{k,i}$ such that $f_k(y_{k,i})=x$.
\begin{defn}
Let $\LL_{k,s}:C(X_k)\to C(X_{k+1})$ be the operator defined as
$$\LL_{k,s}(\phi)(x)=\sum_{i=1}^N\phi(y_{k,i})|a_{k,i}|^s$$
(where we use the common notation $C(X)$ to denote the space of continuous functions defined on a compact metric space $X$).
\end{defn}
\begin{defn}\label{coding}
We shall use the natural coding $C_0$ of the set $X_0$ by the symbolic space $\Sigma$, consisting of infinite sequences of digits $j\in \{1,\dots , N\}$.
As usual, the $k$'th digit in the code $C_0(x)$ equals $j$ if $f^k(x)=f_{k-1}\circ f_{k-2}\circ\dots\circ f_1\circ f_0(x)\in Q_{k,j}$.
Similarly, the coding of the set $X_k$ is defined, so that $C_{k+1}(f_k(x))= \sigma (C_k(x))$ where $\sigma$ is the left shift.
\end{defn}
\begin{notat}
In what follows, we often identify the symbolic cylinder $I$ and the corresponding subset of the Cantor set $C_0^{-1}(I)$.
The family of all cylinders $I\subset\Sigma$, of length $n$ will be denoted by $\mathcal{E}_n$.
Each cylinder $I$ of length $n$ defines a branch of the map $(f_{n-1}\circ \dots \circ f_1\circ f_0)^{-1}$. The image of $Q$ under this branch will be denoted by $Q_I$. Note that
$$Q_I\cap X_0=C_0^{-1}(I)$$ and the sets $Q_I$ are just the connected components of the set $(f_{n-1}\circ\dots \circ f_0)^{-1}(Q)$.
\end{notat}
We will denote by the same letter $C$ a constant which may vary in the proofs.
\section{Hausdorff dimension}
The following simple proposition gives an explicit formula for the Hausdorff dimension of the set $X$.
\begin{prop}\label{dimension}
Let $|a_{k,1}|,\dots, |a_{k,N}|$ be the sequence of ``scales'' used in the construction of $X_0$.
Then $\rho=\dim_H(X_0)$ is characterized in the following way:
\begin{equation}\label{wymiar}
\rho=\inf\{s:\liminf_{n\to\infty}\prod_{k=1}^n\left (|a_{k,1}|^s+|a_{k,2}|^s+\dots +|a_{k,N}|^s\right )=0\}
\end{equation}
\end{prop}
\begin{proof}
First, note that $\liminf_{n\to\infty}\prod_{k=1}^n \left (|a_{k,1}|^{s}+|a_{k,2}|^{s}+\dots |a_{k,N}|^{s}\right )=0$ for all $s>\rho$.
Pick some $s>\rho$.
There exists a subsequence $n_j\to\infty$ with
$$\prod_{k=1}^{n_j} \left (|a_{k,1}|^{s}+|a_{k,2}|^{s}+\dots |a_{k,N}|^{s}\right )\to 0$$
Let ${\mathcal D}_n$ be the family of the domains $\{Q_I: I\in\mathcal{E}_n\}$ which appear at the $n$'th step of the construction of the Cantor set $X$. Then
the above product is the same as
$$\frac{1}{(\diam Q)^{s}}\sum_{Q_I\in {\mathcal D}_{n_j}} (\diam Q_I)^{s}.$$
So we have: $\sum_{Q_I\in {\mathcal D}_{n_j}} (\diam Q_I)^{s}\to 0.$
This shows that $\dim_H(X)\le \rho$.
The inequality $\dim_H (X)\ge \rho$ will follow from the estimate of the Hausdorff dimension of the measure $\nu_\rho$, see
Section~\ref{conformal}, Proposition~\ref{wymiar_nu}. Another argument is provided by Proposition~\ref{Hmeasure}.
\end{proof}
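\begin{rem}
As a sanity check of formula (\ref{wymiar}), consider the homogeneous self-similar case $|a_{k,i}|=a\in(0,1)$ for all $k$ and $i$. Then
$$\prod_{k=1}^n\left (|a_{k,1}|^s+\dots +|a_{k,N}|^s\right )=\left(Na^s\right)^n,$$
which tends to zero if and only if $Na^s<1$, i.e. if and only if $s>\log N/\log (1/a)$. Formula (\ref{wymiar}) thus recovers the classical similarity dimension $\rho=\frac{\log N}{\log (1/a)}$.
\end{rem}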
The observation in Proposition~\ref{cap} below will be used in Section~\ref{green}.
\begin{prop}\label{cap}
There exist $K\in\mathbb{N}$, $C>0$ such that the following holds. Let $X$ be an admissible Cantor set, $I$ is a cylinder in the symbolic space $\Sigma$ and $J$ is another cylinder of length $K$ (so $IJ$ is a subcylinder of $I$, with $K$ symbols added). Let $z\in Q_{IJ}$. Then
$$\dist(z,\partial Q_I)>C\diam Q_I.$$
\end{prop}
\begin{proof} It is well known that
every topological annulus $A$ with sufficiently large modulus $N$ contains ``essentially'' a round annulus $R$ with a modulus $\tilde N>N-{\rm constant}$. ``Essentially'' means here that $R$ separates the boundary components of $A$.
Fix $N$ so large that $\tilde N>1$.
Fix $K$ such that $KM>N$.
Consider the annulus $A=Q_I\setminus Q_{IJ}$. It follows from the definition of an admissible Cantor set that ${\rm mod}(A)>KM>N$. Since this annulus separates $Q_{IJ}$ from $\partial Q_I$, we conclude that, for $z\in Q_{IJ}$, $\dist (z,\partial Q_I)>e^{\tilde N}\diam Q_{IJ}>\diam Q_{IJ}>\underline a^K\diam Q_I$.
\end{proof}
\section{Conformal measures}\label{conformal}
Let, as above, $X_0$ be an admissible set, $X_k=f^k(X_0)$.
\begin{defn}\label{confo}
Fix $h>0$.
The sequence of probability measures $\nu_0,\nu_1, \dots$ is called a collection of $h$-conformal
measures if $\supp \nu_k=X_k$ and the following holds:
there exists a sequence $\lambda_{k,h}$ of positive "scaling factors" such that
\begin{equation}\label{conf}
\LL_{k,h}^*(\nu_{k+1})=\lambda_{k,h} \nu_k
\end{equation}
Note that the condition (\ref{conf}) is equivalent to the following: if $B$ is a Borel measurable set, $B\subset Q_{k,i}$ then
\begin{equation}\label{conf2}
\nu_{k+1}(f_k(B))=\lambda_{k,h} \cdot (|a_{k,i}|^{-h})\cdot \nu_k(B)=\lambda_{k,h}\int_B|f'_k|^hd\nu_k
\end{equation}
\end{defn}
If $\rho$ is the common value of the Hausdorff dimension of the sets $X_k$ and
the $\rho$-dimensional Hausdorff measure $H_\rho$ of $X_0$ (and thus of all $X_k$) is positive and finite, then the collection of normalized Hausdorff measures can be taken as the $\rho$-conformal measures $\nu_k$ in (\ref{confo}), with $\lambda_{k,\rho}=(|a_{k,1}|^\rho+\dots +|a_{k,N}|^\rho)$ for all $k$.
But, even if $H_\rho(X)$ equals zero or infinity, the collection of $\rho$-conformal measures exists, and, more generally, the collection of $h$-conformal measures exists for every $h\ge 0$. The measure $\nu_0$ is uniquely determined by assigning to every cylinder $I$, of length $m$, the value of the measure $\nu_0(I)$, or, more precisely, of the set $C_0^{-1}(I)\subset X_0$:
\begin{equation}\label{measurenu}
\nu_0(I)=\frac{\left (|(f_{m-1}\circ\dots\circ f_1\circ f_0)'|^{-h}\right ) _{|I}}{\lambda_{0,h}\lambda_{1,h}\dots \lambda_{m-1,h}}
\end{equation}
The measures $\nu_k$, $k>0$, are defined in a similar way:
\begin{equation}\label{measurenuk}
\nu_k(I)=\frac{\left (|(f_{m-1+k}\circ\dots\circ f_{1+k}\circ f_k)'|^{-h}\right )_{|I}}{\lambda_{k,h}\lambda_{1+k,h}\dots \lambda_{m-1+k,h}}
\end{equation}
The normalizing factors are given explicitly:
\begin{equation}
\lambda_{n,h}=(|a_{n,1}|^h+\dots +|a_{n,N}|^h),
\end{equation}
$n=0,1,2, \dots$.
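\begin{rem}
In the homogeneous case $|a_{k,i}|=a$ for all $k,i$, we get $\lambda_{n,h}=Na^h$ and $|(f_{m-1}\circ\dots\circ f_0)'|=a^{-m}$, so (\ref{measurenu}) gives, for every cylinder $I$ of length $m$,
$$\nu_0(I)=\frac{a^{hm}}{(Na^h)^m}=N^{-m}.$$
In this case every $h$-conformal measure thus coincides with the uniform Bernoulli measure, independently of $h$.
\end{rem}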
Let us note the following straightforward
\begin{prop}\label{confinv} For every $h$, the sequence of $h$-conformal measures $\nu_k$
is invariant, i.e.
$$(f_k)_*(\nu_k)=\nu_{k+1}.$$
\end{prop}
\begin{proof}
This follows directly from the conformality condition (\ref{conf}).
It is enough to check for $k=0$. Let $A\subset X_1$ be an arbitrary Borel set.
Then $f_0^{-1}(A)=A_1\cup A_2\cup\dots \cup A_N$, where $A_j\subset Q_{0,j}$.
Using (\ref{conf2}) we write
$$\nu_0(A_j)=|a_{0,j}|^h\cdot\frac{1}{\lambda_{0,h}}\nu_1(A)$$
and
$$\nu_0(f^{-1}_0(A))=\sum_{j=1}^N\nu_0(A_j)=\frac{1}{\lambda_{0,h}}\left( \sum_{j=1}^N |a_{0,j}|^h\right )\nu_1(A)=\nu_1(A).$$
\end{proof}
We note the following.
\begin{prop}Let $\rho$ be the number characterized by (\ref{wymiar}). If $\nu_k$ is the sequence of $\rho$-conformal measures then, for every $k\ge 0$,
\begin{equation}\label{wymiar_nu}
\dim_H(\nu_k)=\rho
\end{equation}
\end{prop}
\begin{proof}
It is obvious that the dimensions of all the measures $\nu_k$ are the same. So, we check (\ref{wymiar_nu}) for $\nu_0$.
Fix an arbitrary $s<\rho$.
It follows from condition $(3)$ in Definition~\ref{df1} that there exists
$r_0<\diam Q$ such that, if $z\in X_k$ then the ball $B(z, r_0)$ is contained in some domain $Q_{k,i}$ (so the map $f_k$ is injective and continuous in $B(z, r_0)$).
Now, take an arbitrary ball $B=B(z,r)$ with $z\in X_0$ and $r<r_0$ and let $n$ be the least iterate such that the diameter of
$f_{n-1}\circ\dots\circ f_1\circ f_0(B)$ becomes larger than $r_0$.
Then we have, using (\ref{conf2}),
$$\nu_0(B)=\frac {\int _{f^n(B)}|(f^{-n})'|^\rho d\nu_n}{\lambda_{0,\rho}\dots\lambda_{n-1,\rho}}$$
The numerator of the last fraction is just, up to a bounded factor, $(\diam (B))^\rho \asymp r^\rho\le (\diam (B))^s $.
After neglecting this bounded factor we can write the above ratio as
\begin{equation}\label{r}
(\diam(B))^s\cdot\frac{ \diam(B)^{\rho-s}}{\lambda_{0,\rho}\lambda_{1,\rho}\dots\lambda_{n-1,\rho}}
\end{equation}
Since all the maps $f_k$ are expanding, with expansion factor bounded from below by $\frac{1}{\overline a}>1$, $n$ is related to $\diam B=2r$: namely, $r\le \exp (-n\delta)$ for some positive $\delta$, and we can estimate the second factor in (\ref{r}) from above by
\begin{equation}\label{est}
C \exp(-n(\rho-s)\delta)\frac{1}{\lambda_{0,\rho}\lambda_{1,\rho}\dots\lambda_{n-1,\rho}}.
\end{equation}
where $C>0$ is a constant.
Now, choose $s'\in (s,\rho)$ sufficiently close to $\rho$ so that, for all $k$,
$\lambda_{k,s'}\le \lambda_{k,\rho}\exp(\delta (\rho-s))$. Then
$$ \exp(-n(\rho-s)\delta)\frac{1}{\lambda_{0,\rho}\lambda_{1,\rho}\dots\lambda_{n-1,\rho}}\le\frac{1}{\lambda_{0,s'}\lambda_{1,s'}\dots\lambda_{n-1,s'}}$$
Since $\rho$ was a ``transition parameter'', $\lambda_{0,s'}\lambda_{1,s'}\dots\lambda_{n-1,s'}\to\infty$ for every $s'<\rho$.
This proves that for all $z\in X_0$
$$\lim_{r\to 0} \frac{\nu_0(B(z,r))}{r^s}=0,$$ which implies that $\dim_H(\nu_0)\ge s$ and, consequently,
$\dim_H(\nu_0)\ge \rho$.
Together with the evident estimate $\dim_H(\nu_0)\le\dim_H(X_0)\le\rho$, this gives $\dim_H(\nu_0)= \rho$.
This also gives the required argument for the equality $\rho=\dim_H(X_0)$ (Proposition~\ref{dimension}).
\end{proof}
\section{Hausdorff and harmonic measures}\label{HHM}
In this section we prove Theorem ~B. We start with
\begin{thm}\label{finitemeasure}
Let $(f_n)$ be a sequence of admissible maps and let $X$ be the associated Cantor set. There exists a sequence of admissible functions $(\tilde f_k)$, $\tilde f_k(z)=\sum_{i=1}^N(\tilde a_{k,i}^{-1}z+\tilde b_{k,i})\1_{\tilde Q_{k,i}}$, such that
\begin{enumerate}
\item $\lim_{k\to\infty}\max_i(|\tilde a_{k,i}-a_{k,i}|+|b_{k,i}-\tilde b_{k,i}|)=0$
\item the associated Cantor set $\tilde X$ satisfies $\dim_{\mathcal H}(\tilde X)=\dim_{\mathcal H}(X)$
\item $0<H_{\dim_{\mathcal H}(\tilde X)}(\tilde X)<\infty$.
\end{enumerate}
\end{thm}
We can also deduce
\begin{cor}\label{bh}
Let $\tilde X$ be the admissible Cantor set, constructed in Theorem~\ref{finitemeasure}.
If $\omega$ and $\tilde \omega$ are the harmonic measures of $X$ and $\tilde X$ respectively, then $\dim\omega=\dim{\tilde\omega}$.
\end{cor}
In \cite{Ba2} the author proves that if all squares of a given generation $k$ are of equal size $a_k$ (i.e. $a_{k,i}=a_{k}$ for all $i=1,...,N$ and all $k$), then the dimension of harmonic measure is a continuous function with respect to the $\ell^{\infty}$ norm of the sequence $(a_{k})$. More recently, in \cite{BaHa} the authors extended this result to Cantor sets defined by a sequence of conformal maps. In particular, applied to our case, this implies that if two Cantor sets $X,X'$ are defined by sequences $(a_{k,i},b_{k,i})$ and $(a_{k,i}',b_{k,i}')$ respectively, such that $\lim_k\max_i\{|a_{k,i}-a_{k,i}'|+|b_{k,i}-b_{k,i}'|\}=0$, then the associated harmonic measures have the same dimension.
\
\noindent Thus, Theorem ~\ref{finitemeasure} and Corollary ~\ref{bh} imply Theorem ~B.
The rest of this section is devoted to the proof of Theorem ~\ref{finitemeasure}.
\
The following proposition is a refinement of Proposition~\ref{dimension}.
\begin{prop}\label{Hmeasure}
Let $a_{k,1},\dots, a_{k,N}$ be the sequence of ``scales'' used in the construction of $X$.
For every $h>0$ there is a constant $C>0$ such that
$$\frac1C\liminf_{n\to\infty}\prod_{k=1}^n\lambda_{k,h}\le H_{h}(X)\le \liminf_{n\to\infty}\prod_{k=1}^n\lambda_{k,h}$$
\end{prop}
\begin{proof}
Below, we identify, through the coding, the subsets of the Cantor set $X$ and the cylinders on the symbolic space $\Sigma$.
The upper bound of $H_{h}(X)$ is immediate since $\prod_{k=1}^n \lambda_{k,h}$ corresponds to the natural covering of $X$ by its cylinders of the $n$th generation.
\
To prove the lower bound take any ball $U$ intersecting $X$ and define $I^U$ to be the cylinder of the highest generation $s$ containing $U\cap X$. More precisely, take
$$s(U)=\max\{n\; ; \; \exists I_n^U\in\mathcal{E}_n: U\cap X \subset I_n^U\},$$ and let $I^U=I_{s(U)}^U$.
Clearly, $\diam(U\cap X)\le\diam(I^U)$. On the other hand, $U$ intersects two distinct subcylinders of $I^U$. By the modulus separation condition (3) in Definition \ref{df1}, we deduce that there is a constant $C=C(M,Q)$ such that $\diam(U)\ge \underline a C\diam(I^U)$.
This implies that we can replace all balls $U$ of a given covering $\mathcal R$ of $X$ by cylinders $I_U$ of similar size and still control the variation of the sum $\sum_{U\in{\mathcal R}}\diam(U)^{h}\ge( \underline a C)^{h}\sum_{U\in{\mathcal R}}\diam(I_U)^{h}$.
Since we can only consider coverings with cylinders it is straightforward to conclude that we get optimal coverings using cylinders of the same generation. Indeed, for $n\in\N$ we say that a covering ${\mathcal R}$ with cylinders is $n$-optimal for $H_{h}$ if
$$\sum_{I\in{\mathcal R}}\diam(I)^{h}=\min\left\{\sum_{{\mathcal R}'}\diam(I)^{h}\;;\; {\mathcal R}' \mbox{ covering with cylinders of generation }\le n\right\}.$$
Take an $n$-optimal covering $\mathcal{R}$ of minimal cardinality.
Choose $I$ a cylinder in $\mathcal R$ of the minimal generation and let $I'$ be any cylinder of the same generation not contained in ${\mathcal R}$. There is hence a subcovering ${\mathcal R}\cap I'=\{I'J_1,...,I'J_{\ell}\}$ of $I'$ with subcylinders of $I'$.
Clearly, by the definition of $\mathcal R$ we have $\diam(I')^{h}>\sum_{i=1}^{\ell}\diam(I'J_i)^{h}$ or, equivalently, $\sum_{i=1}^{\ell}\frac{\diam(I'J_i)^{h}}{\diam(I')^{h}}<1$. But this latter sum is equal to $\sum_{i=1}^{\ell}\frac{\diam(IJ_i)^{h}}{\diam(I)^{h}}$ and hence $\diam(I)^{h}>\sum_{i=1}^{\ell}\diam(IJ_i)^{h}$ which contradicts $I\in{\mathcal R}$.
It follows that all cylinders of the same generation as $I$ are in ${\mathcal R}$, and the proof is complete.
\end{proof}
Let us now turn to the proof of theorem \ref{finitemeasure}.
\begin{proof}
We construct the sequence $\tilde f_n$ satisfying (1) and (3). Recall that $\rho$ denotes the dimension of $X$.
Let us distinguish two cases
\noindent {\bf Case 1: $H_{\rho}(X)=0$.}
Since $H_{\rho-\varepsilon}(X)=+\infty$ for all $\varepsilon>0$, proposition \ref{Hmeasure} implies that
\begin{equation}\label{infinitemeasure}
\lim_{n\to\infty} \prod_{k=1}^n\lambda_{k,\rho-\varepsilon}=+\infty.
\end{equation}
The construction is carried out by induction.
\begin{enumerate}
\item[Step 1.] Define, for $n\in\N$, $\varepsilon_{1,n}$ to be a real number such that
$$\prod_{k=1}^n\lambda_{k,\rho-\varepsilon_{1,n}}=1.$$
Note that $\varepsilon_{1,n}$ does not have to be positive. However, since $H_{\rho}(X)=0$, we have, using Proposition~\ref{Hmeasure}, that $\liminf_n\prod_{k=1}^n\lambda_{k,\rho}=0$. Thus, $\varepsilon_{1,n}$ is positive for infinitely many $n$'s.
By (\ref{infinitemeasure}) $\displaystyle\lim_{n\to\infty}\varepsilon_{1,n}=0^+$.
We can therefore choose $n_1$ such that $$\varepsilon_{1,n_1}=\max\{\varepsilon_{1,n}\;;\; n\in\N\}>0$$
For $k=1,...,n_1$ and $i=1,...,N$, put
$$\tilde{a}_{k,i}={a}_{k,i}|{a}_{k,i}|^{-\frac{\varepsilon_{1,n_1}}{\rho}}.$$
This implies: $\prod_{k=1}^{n_1}\left (|\tilde a_{k,1}|^{\rho}+|\tilde a_{k,2}|^{\rho}+\dots +|\tilde a_{k,N}|^{\rho}\right)=1$ and, by the choice of $\varepsilon_{1,n_1}$, $\prod_{k=1}^{n}\left (|\tilde a_{k,1}|^{\rho}+|\tilde a_{k,2}|^{\rho}+\dots +|\tilde a_{k,N}|^{\rho}\right)\ge 1$ for $n\le n_1$.
Remark also that, $|\tilde{a}_{k,i}|\ge|{a}_{k,i}|$.
\item[Step 2.] Define for $n>n_1$, $\varepsilon_{2,n}$ to be a real number such that
$$\prod_{k=n_1+1}^n\lambda_{k,\rho-\varepsilon_{2,n}}=1.$$
Clearly, $\lim_{n\to\infty}\varepsilon_{2,n}=0$.
As before we can now choose $n_2$ such that $\varepsilon_{2,n_2}=\max\{\varepsilon_{2,n}\;;\; n>n_1\}>0$
Now, we have, for $n\ge n_1$
\begin{equation*}
1=\prod_{k=1}^n\lambda_{k,\rho-\varepsilon_{1,n}}
=\prod_{k=1}^{n_1}\lambda_{k,\rho-\varepsilon_{1,n}} \prod_{k=n_1+1}^n\lambda_{k,\rho-\varepsilon_{1,n}}.
\end{equation*}
Since, for $n>n_1$, $\varepsilon_{1,n}\le\varepsilon_{1,n_1}$ we get
$$\prod_{k=1}^{n_1}\lambda_{k,\rho-\varepsilon_{1,n}}\le \prod_{k=1}^{n_1} \lambda_{k,\rho-\varepsilon_{1,n_1}}=1.$$
This implies that $$\prod_{k=n_1+1}^n\lambda_{k,\rho-\varepsilon_{1,n}}\ge 1$$ and therefore $\varepsilon_{2,n}\le \varepsilon_{1,n}$, for all $n>n_1$.
In particular, $\varepsilon_{2,n_2}\le \varepsilon_{1,n_1}$.
For $k=n_1+1,...n_2$ and $i=1,...,N$ put $$\tilde{a}_{k,i}={a}_{k,i}|{a}_{k,i}|^{-\frac{\varepsilon_{2,n_2}}{\rho}}.$$
The same reasoning as above now gives $\prod_{k=1}^{n_2}\left (|\tilde a_{k,1}|^{\rho}+|\tilde a_{k,2}|^{\rho}+\dots +|\tilde a_{k,N}|^{\rho}\right)=1$ and, by the choice of $\varepsilon_{2,n_2}$, $\prod_{k=1}^{n}\left (|\tilde a_{k,1}|^{\rho}+|\tilde a_{k,2}|^{\rho}+\dots +|\tilde a_{k,N}|^{\rho}\right)\ge 1$ for $n\le n_2$.
Again, $|\tilde{a}_{k,i}|\ge|{a}_{k,i}|$.
\item[Step 3.] Proceed by induction.
\end{enumerate}
Since $\varepsilon_{1,n}\ge\varepsilon_{k,n}$ for all $k,n$ we have that $\lim_k\varepsilon_{k,n_k}=0$. This implies that $|\tilde{a}_{k,i}-{a}_{k,i}|\to 0$ as $k\to \infty$.
Moreover,
$$\liminf_{n\to\infty} \prod_{k=1}^{n}\left (|\tilde a_{k,1}|^{\rho}+|\tilde a_{k,2}|^{\rho}+\dots +|\tilde a_{k,N}|^{\rho}\right)=1,$$ which, by Proposition~\ref{Hmeasure}, proves $0<H_{\rho}(\tilde X)<\infty$.
\noindent {\bf Case 2: $H_{\rho}(X)=+\infty$.}
This case can be treated in the same way as Case 1. Nevertheless, there is a simple way to deal with it.
Clearly, since $\rho$ is the dimension of the set, for all $\delta<1$ we get that $\displaystyle\liminf_n {\delta^n}\prod_{k=1}^n\lambda_{k,\rho}=0$ and therefore we can find a sequence $(\delta_j)_j$ with $\delta_j<1$, $\lim_{j\to\infty}\delta_j= 1$, and a strictly increasing sequence of positive integers $(n_j)$ such that $$\liminf_{K\to\infty}\prod_{j=1}^K\prod_{\ell=n_j+1}^{n_{j+1}}\delta_j\lambda_{\ell,\rho}=0.$$
We can now modify the sequence $(a_{k,i})$, by putting for all $j\in\N$ and $k=n_j+1,...,n_{j+1}$
$$ a_{k,i}'=\delta_j^{\frac{1}{\rho}} a_{k,i},$$ the sequence $(b_{k,i})$ is left unchanged.
This yields a Cantor set $X'$ (of the same Hausdorff dimension) satisfying
$\displaystyle\lim_k\max_i\{|a_{k,i}-a_{k,i}'|\}=0$ and
$\displaystyle \liminf_{n\to\infty}\prod_{k=1}^n\lambda_{k,\rho}'=0=H_\rho(X')$,
which puts the situation back to case one.
\end{proof}
\section{Green's functions and capacity}\label{green}
Let $X=X_0$ be an admissible Cantor set, and let $(X_k)_{k=0}^\infty$ be the associated sequence of consecutive Cantor sets, according to (\ref{seqk}).
Denote by $\omega_k$ the harmonic measure on the Cantor set $X_k$, evaluated at $\infty$.
Denote by $G_k$ the Green's function in $\mathbb{C}\setminus X_k$.
Note that all the sets $X_k$ are regular in the sense of Dirichlet, thus each function $G_k$ has a continuous extension to the whole plane $\mathbb{C}$ and ${G_k}_{|X_k}=0$.
We have $\omega_k=\Delta G_k$.
\
Given an admissible Cantor set $X$, denote by $\mathcal{G}_X$ the family of all functions $F:Q\to \mathbb{R}$ such that $F$ is continuous in $Q$, $F_{|Q\setminus X}$ is harmonic and strictly positive, while $F_{|X}=0$.
Obviously, such a function is subharmonic in $Q$ and we require, additionally, that for $F\in\mathcal{G}_X$, the measure $\mu_F= \Delta(F)$ is normalized, i.e $\mu_F(X)=1$.
We introduce the following operators in a way similar to those proposed in \cite{zd1}.
\begin{defn}
Let $\mathcal{P}_k:\mathcal{G}_{X_k}\to \mathcal{G}_{X_{k+1}}$ be defined as
$$\mathcal{P}_k(F)(x)=\sum_{y\in f_k^{-1}(x)} F(y)$$
\end{defn}
Recall the notation: if $\mu$ is a measure in $X_k$ then $(f_k)_*\mu$ is the image of the measure
$\mu$ under $f_k$; in other words $(f_k)_*\mu=\mu\circ f_k^{-1}$.
\begin{prop}\label{pot}
If $F\in \mathcal{G}_{X_k}$ then
$$(f_k)_*(\mu_F)=\Delta\mathcal{P}_k(F).$$
\end{prop}
\begin{proof} Let $\phi\in C_0^\infty(Q)$ be a test function.
Then
$$
\aligned
&\Delta\mathcal{P}_k(F)(\phi)=\int_Q\Delta\phi\cdot \mathcal{P}_k(F)=
\sum_{i=1}^N\int_{Q_{k,i}}\Delta\phi\circ f_k\cdot F\cdot |f_k'|^2\\
&=\sum_{i=1}^N\int_{Q_{k,i}}\Delta(\phi\circ f_k)\cdot F=\int_Q\phi\circ f_k\,d\mu_F=(f_k)_*(\mu_F)(\phi),
\endaligned
$$
which proves the statement.
\end{proof}
Remark~\ref{commonharnack} and the Maximum Principle give the following observation (see also \cite{MV}, \cite{zd2}).
\begin{prop}\label{equivmeasures}
There exists a universal constant $D>0$ such that if $X$ is an admissible Cantor set and $F_1, F_2\in\mathcal{G}_X$ then
the measures $\mu_{F_1}$, $\mu_{F_2}$ are equivalent, with density bounded by $D$.
\end{prop}
\begin{proof}
Let $F\in\mathcal{G}_X$, let $G$ be the standard Green's function for $X$.
Let $\gamma(X)$ be the curve described in Remark~\ref{commonharnack}. Since $\mu_F$ is a probability measure, the ratio $\frac{G(x)}{F(x)}$ cannot be larger than $1$ everywhere on $\gamma(X)$. Indeed, if $\frac{G(x)}{F(x)}\ge L>1$ on $\gamma(X)$, then the Maximum Principle implies that the inequality $G(x)\ge L F(x)$ holds everywhere in $Q$. This would imply $1=\omega(X)\ge L\mu_F(X)=L>1$, a contradiction. For the same reason, the above ratio cannot be smaller than $1$ everywhere on $\gamma(X)$.
Together with Remark~\ref{commonharnack} this implies that there exists a constant $C>0$,
independent of both the set $X$ and $F\in\mathcal{G}_X$ such that, for an arbitrary function $F\in\mathcal{G}_X$, $\frac{1}{C}\le F_{|\gamma(X)}\le C$. Using the Maximum Principle again, we conclude that $\frac{1}{C^2}\le \frac{d\mu_{F_1}}{d\mu_{F_2}}\le C^2$.
\end{proof}
As usual, we denote by ${\rm Cap}(X)$ the logarithmic capacity of $X$.
Let us note the following.
\begin{prop}
There exists a constant $\kappa>0$, depending only on $M,\overline a, \underline a, Q, N$, such that, if
$X$ is an admissible Cantor set then ${\rm Cap}(X)>\kappa $.
\end{prop}
\begin{proof}
One can assume that $\diam Q=1$.
Fix $h>0$ so small that $P=N\underline a^h>1$. We shall use the measure $\nu_h$ to estimate the capacity from below.
Then, using ~(\ref{measurenu}), we get, for every cylinder $I$ of length $n$,
$$\nu_h(I)\le (\diam Q_I)^h\frac{1}{P^n}< (\diam Q_I)^h.$$
The logarithmic potential of the measure $\nu_h$ can be estimated pointwise. Let $z\in X$; denote by $I_n(z)$ the cylinder of length $n$ containing $z$
(under the identification of $X$ with the symbolic space $\Sigma$). Then, using Proposition~\ref{cap}, we get
$$
\aligned
&U_{\nu_h}(z)=\int \log\frac{1}{|z-w|}d\nu_h(w)
\le\sum_n\nu_h(I_n(z))\cdot \sup _{w\in I_n(z)\setminus I_{n+1}(z)}\log\frac{1}{|z-w|}\\
&\le \sum_n\nu_h(I_n(z)) \log\frac{1}{C\diam Q_{I_{n+1}(z)}}\le\sum_n (\diam Q_{I_n(z)})^h \log\frac{1}{C\diam Q_{I_{n+1}(z)}}
\endaligned
$$
Since $\diam Q_{I_{n}(z)}<\overline a^n $ and $ \diam Q_{I_{n}(z)}>\underline a^n$, this easily gives a bound on $U_{\nu_h}(z)$, uniform in $z$ and in the admissible set $X$.
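For completeness, the bound can be made explicit: since $\nu_h(I_n(z))<(\diam Q_{I_n(z)})^h<\overline a^{nh}$ and $\diam Q_{I_{n+1}(z)}>\underline a^{n+1}$, the last sum is dominated by the convergent series
$$\sum_n \overline a^{nh}\left((n+1)\log\frac{1}{\underline a}+\log\frac{1}{C}\right)<\infty,$$
which depends neither on $z$ nor on the admissible set $X$.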
Consequently, we get a common bound for the energy function:
$$I(\nu_h)=\int U_{\nu_h}(z)d\nu_h(z)\le I_0<\infty$$
and
${\rm Cap}(X)\ge \exp (-I_0).$
\end{proof}
\begin{prop}[Uniform decay of Green's functions]
There exist constants $0<\gamma<1$, $C>0$ (depending on $Q, M, \underline a, \overline a, N$) such that, for every admissible Cantor set $X$, for an arbitrary function $F\in \mathcal G_X$, and an
arbitrary cylinder $I$ of length $n$,
\begin{equation}\label{decay1}
\sup_{z\in Q_I}F(z)\le C \gamma^n
\end{equation}
\end{prop}
\begin{proof}
First, notice that there is a common bound on $F_{|\gamma(X)}$, over all admissible sets $X$, and all functions $F\in {\mathcal{G}_X}$ (see the proof of
Proposition~\ref{equivmeasures}).
This implies that there exists a constant $C>0$ such that $F_{|Q_I}\le C$ for every cylinder $I$ of length $1$.
Now, let $I$ be an arbitrary cylinder of length $n$ and $IJ$ its subcylinder of length $n+1$. Let $z\in\partial Q_{IJ}$. Put $X_I=Q_I\cap X$.
Then
$$F(z)=\int_{\partial Q_I}F(w)\,\omega(z, dw,Q_I\setminus X_I).$$
Thus,
\begin{equation}\label{decay}
\sup_{z\in\partial Q_{IJ}}F(z)\le \sup_{w\in\partial Q_I}F(w)\cdot \omega(z,\partial Q_I, Q_I\setminus X_I)
\end{equation}
It remains to check that
\begin{equation}\label {beta}
\omega(z,\partial Q_I, Q_I\setminus X_I)<\gamma
\end{equation}
for some $0<\gamma<1$. This follows from the standard estimate (from below) of the harmonic measure by the capacity (see, e.g., \cite{GM}, Theorem 9.1).
Indeed, since the required estimate is invariant under conformal maps, and the pair $(Q_I,X_I)$ is mapped under $f^n$ onto the pair $(Q, X_n)$,
it is enough to prove that there exists $\gamma\in (0,1)$ such that, for an arbitrary admissible Cantor set $X$,
$$\omega(z,X, Q\setminus X)>1-\gamma$$
where $z\in Q_J$ and $|J|=1$. Since we have
the estimate of the capacity ${\rm Cap}(X)$ from below by $\kappa$, and since the set $X$ is separated from $\partial Q$ by some annulus with
modulus larger than $M$, the estimate (\ref{beta}) follows.
Thus, (\ref{decay}) implies, by induction, that, if $I$ is a cylinder of length $n$ then
$$\sup_{z\in \partial Q_I}F(z)<C\gamma^n.$$
The required estimate on $\sup_{z\in Q_I}F(z)$ follows now from the Maximum Principle.
\end{proof}
\section { Sequences and convergence of admissible Cantor sets}
Recall that $Q$ is a fixed Jordan domain.
Recall that a non-homogeneous Cantor set is given by a sequence of maps
$f_k(z)=\sum_{i=1}^N(a_{k,i}^{-1}z+b_{k,i})\1_{Q_{k,i}}$, where $a_{k,i}^{-1}Q_{k,i}+b_{k,i}=Q$ and $k=0,1,2\dots$. Obviously, $f_k$ is $N$-to-one and the branches $(f_k)^{-1}_i:Q\to Q_{k,i}$ are given by
$(f_k)^{-1}_i(w)=a_{k,i}(w-b_{k,i})$.
Assume that we are given an infinite sequence of admissible Cantor sets $X^{(0)}, X^{(1)}, \dots, $ $X^{(n)},\dots$
Let us note the following:
\begin{prop}\label{limitcantor1}
Let $X^{(0)}, X^{(1)}, \dots X^{(n)},\dots$ be a sequence of admissible Cantor sets of the same Hausdorff dimension $\rho$. For each $n$ denote by $(^nf_k)_{k=0}^\infty$, the sequence of maps defining the set $X^{(n)}$.
Let $h>0$ be given (not necessarily equal to the Hausdorff dimension of the sets $X^{(n)}$).
For every $n$, let $\{\nu^{(n)}_k\}_{k=0}^\infty$ be the sequence of $h$-conformal measures associated to the set $X^{(n)}$.
Then one can extract a subsequence $n_s$ so that, for all $k\in \mathbb{N}$, and all $i=1,\dots N$ the following holds:
\begin{enumerate}
\item{}
The limit ${\lim_{s\to\infty}} (^{n_s}f_k)^{-1}_i=({}^\infty f_k)^{-1}_i$ exists (which, equivalently, means simply that for all $k$ the coefficients of the piecewise linear map $^{n_s}f_k$ converge to the coefficients of the piecewise linear map ${}^\infty f_k$). The Cantor set $X^{(\infty)}$, built with the maps ${}^\infty f_k$, is admissible.
\item{} For all $k\ge 0$, the following (weak-*) limits exist:
$$\nu_k^{(n_s)}\to \nu_k^{(\infty)}$$
and $\nu_k^{(\infty)}$ is the system of $h$-conformal measures for $X^{(\infty)}$. The corresponding normalizing factors are
$$\lambda_{k,h}^\infty= \lim_{s\to\infty}\lambda_{k,h}^{n_s}.$$
\end{enumerate}
\end{prop}
\begin{proof}
The proof of convergence of the maps uses only the diagonal argument. Note that we do not require (and do not prove) this convergence to be uniform with respect to $k$.
To prove the convergence of the conformal measures, it is enough to recall the explicit formulas (\ref{measurenu}) and (\ref{measurenuk}).
Let us fix an arbitrary cylinder $I$, of length $m$. Then
$$\nu_0^{(n_s)}(I) =\frac{\left (|(^{n_s}f_{m-1}\circ
\dots\circ ^{n_s}f_1\circ ^{n_s}f_0)'|^{-h}\right ) _{|I}}
{\lambda^{n_s}_{0,h}\lambda_{1,h}^{n_s}\dots \lambda_{m-1,h}^{n_s}}$$
and it is clear that the convergence of the coefficients of the maps $^{n_s}f_k$ for $k=0, \dots m-1$ gives the convergence of $\nu_0^{(n_s)}(I)$ to $\nu_0^{(\infty)}(I)$. This easily implies that $\nu_0^{(n_s)}$ converge weakly to $\nu_0^{(\infty)}$, treated as measures in $\Sigma$ and also as measures in $\mathbb{C}$.
The same reasoning applies for the measures $\nu_k^{(n_s)}$.
Here, as usual, we identify, through an appropriate coding, the measures on the Cantor sets $X^{(n_s)}_k$ with the measures on the symbolic space $\Sigma$.
\end{proof}
Now, let $X^{(n)}$ be a sequence of admissible Cantor sets, converging to $X^{(\infty)}$ in the sense of item (1) in Proposition~\ref{limitcantor1}.
\begin{prop}\label{limitcantor2}
Let $X^{(0)}, X^{(1)}, \dots X^{(n)},\dots$ be a sequence of admissible Cantor sets, converging to $X^{(\infty)}$ in the sense of item (1) in Proposition~\ref{limitcantor1}.
Assume that a sequence of subharmonic functions $F^{(n)}:Q\to \mathbb{R}$ is given:
$$F^{(n)}\in\mathcal{G}_{X^{(n)}}.$$
Then one can extract a subsequence $F^{(n_s)}$ of the functions $F^{(n)}$ such that $F^{(n_s)}$ converges uniformly on compact subsets of $Q$ to
$$F^{(\infty)}\in\mathcal G_{X^{(\infty)}}.$$ Moreover, the sequence of measures $\mu_{n_s}=\Delta(F^{(n_s)})$ converges weakly to $\mu^{(\infty)}=\Delta(F^{(\infty)})$.
\end{prop}
\begin{proof}
The proof, again, uses the diagonal argument.
Write $Q\setminus X^{(\infty)}$ as a countable union $\bigcup C_m$ of compact connected subsets of $Q\setminus X^{(\infty)}$, where $C_{m+1}\supset C_m$:
$$C_m=\overline Q'_m\setminus \bigcup_{|J|=m}Q_J $$
where, $Q_J$ correspond to the coding for the limit set $X^{(\infty)}$
and $Q'_m$ is an increasing sequence of topological discs, with $X^{(\infty)}\subset Q'_m\subset \overline Q'_m\subset Q'_{m+1}$ and $\bigcup Q'_m=Q$.
Fix $m$. As $X^{(n)}\to X^{(\infty)}$, the functions $F^{(n)}$ form a uniformly bounded
sequence of harmonic functions in a neighbourhood of $C_m$, starting from some $n=n(m)$.
Thus, one can extract a subsequence converging uniformly in $C_m$ to some function
$F^{(\infty)}$ defined in $C_m$ and harmonic in ${\rm int}(C_m)$.
In the inductive construction, we choose yet another subsequence, converging uniformly in $C_{m+1}$.
The limit must coincide in ${\rm int}(C_m)$ with the previously found limit
$F^{(\infty)}$.
The required subsequence $n_s$ is now chosen according to the Cantor diagonal argument. It is obvious from the construction that $F^{(\infty)}$ is
positive and harmonic in $Q\setminus X^{(\infty)}$. It remains to check that setting $F^{(\infty)}(x)=0$ for $x\in X^{(\infty)}$ gives a
continuous (thus: also subharmonic) extension of $F^{(\infty)}$ to the whole domain $Q$.
Let $I$ be an arbitrary cylinder, denote by $l$ the length of $I$.
Let
$I'$ be the cylinder of length $l-1$ containing $I$, and let $Q_I$ (resp. $Q_{I'}$) be the domain corresponding to $I$ ($I'$), defined by the coding
for $X^{(\infty)}$. Similarly, denote by $Q_I^{(n)}$ (resp. $Q_{I'}^{(n)}$) the domain corresponding to $I$ (resp. $I'$), defined by the coding for $X^{(n)}$.
Then, for large $n_s$, $Q_I\subset Q_{I'}^{(n_s)}$. Let $z\in Q_I$. Using the estimate (\ref{decay1}) we get that
$$F^{(n_s)}(z)\le C\gamma^{l-1}$$
and, therefore,
$$F^{(\infty)}(z)\le C\gamma^{l-1}.$$
Thus $F^{(\infty)}(z)$ tends to $0$ as $z\to X^{(\infty)}$.
The above reasoning shows also that the convergence $F^{(n_s)}\to F^{(\infty)}$ is uniform in each set
$\overline Q'_m$.
Once the convergence $ F^{(n_s)}\rightrightarrows F^{(\infty)}$ has been established, the convergence of the measures $\mu_{n_s}$ is standard: if $\phi\in C^\infty_0(Q)$ then
$$\Delta F^{(n_s)}(\phi)=\int \Delta\phi\, F^{(n_s)}\to \int \Delta\phi\, F^{(\infty)}=\Delta F^{(\infty)}(\phi).$$
\end{proof}
\section{Sufficient condition for the inequality $\dim(X)>\dim(\omega)$}\label{Bourgain}
In this section we show how to adapt the argument proposed by J. Bourgain in \cite{bou} to prove the inequality $\dim(X)>\dim(\omega)$. In this way, we obtain some explicit sufficient condition which guarantees the inequality $\dim(X)>\dim(\omega)$ (see Proposition ~\ref{ineq} below).
Recall that $\omega=\omega_0$ is the standard harmonic measure in $X_0$, evaluated at the point at $\infty$. Similarly, the harmonic measure on the set $X_k$ is denoted by $\omega_k$. We shall use the natural codings $C_0, C_1,\dots$
introduced in Definition~\ref{coding}.
In what follows, we often identify the symbolic cylinder $I$ and the corresponding subset of the Cantor set $Q_I\cap X_0=C_0^{-1}(I)$.
\begin{prop}\label{ineq}
Let $X=X_0$ be an admissible Cantor set. Let, as above, $\omega=\omega_0$ be the harmonic measure on $X_0$, $\rho=\dim_H(X)$, and let $\nu=\nu_0$ be the $\rho$-conformal measure on $X_0$. Assume the following:
\begin{itemize}
\item[*]
There exist $K>0$ and $\gamma>1$ such that for every cylinder $I=(I)_n\subset X$ of length $n$ there exists a subcylinder $IJ=(IJ)_{n+K(I)}$, with $K(I)\le K$, such that
$$\max \left ( \frac{\omega(IJ)}{\omega (I)}:\frac{\nu(IJ)}{\nu(I)}, \frac{\nu(IJ)}{\nu (I)}:\frac{\omega(IJ)}{\omega(I)}\right )>\gamma.$$
\end{itemize}
Then $\dim_H(\omega)<\dim_H(X)-\delta $ where $\delta$ is a constant depending only on $\underline a$, $K$, $N$, $\gamma$.
\end{prop}
\begin{proof}
Given $I=I_n\in\mathcal{E}_n$, denote by $\mathcal{E}_{n+K(I)}(I)$ the family of all cylinders of generation $n+K(I)$, which are contained in $I$.
First, we check that it follows from (*) that there exists $0<\beta<1$ such that, for every $I=I_n\in\mathcal{E}_n$,
\begin{equation}\label{krok}
\sum_{IJ\in\mathcal{E}_{n+K(I)}(I)}(\omega(IJ))^{\frac{1}{2}}(\nu(IJ))^{\frac{1}{2}}\le \beta \omega(I)^{\frac{1}{2}}\nu(I)^{\frac{1}{2}}
\end{equation}
The constant $\beta$ depends on $K$, $\underline a$, $\overline a$ and $\gamma$.
This can be seen as follows:
Notice that, given two sequences of positive numbers $c_1,\dots, c_{\kappa}$ and $d_1,\dots, d_{\kappa}$ such that $\sum c_i=\sum d_i=1$, we have, by the Cauchy--Schwarz inequality, $\sum c_i^ {\frac{1}{2}}d_i^{\frac{1}{2}}\le 1$, and equality holds iff the sequences are equal.
Let $\kappa$ be a positive integer, let $B_0=\{(p_1,...,p_{\kappa},q_1,...,q_{\kappa})\in[0,1]^{2{\kappa}}\;:\; \sum_ip_i=\sum_iq_i=1\}$ and, for $\gamma>1$, take the compact subset $B_{\gamma}$ of $B_0$:
$$B_{\gamma}=\left\{(p_1,...,p_{\kappa},q_1,...,q_{\kappa})\in B_0 \;:\; \exists j\in\{1,..,{\kappa}\} \mbox{ such that } p_j \ge\gamma q_j \right\}.$$
The function $(p_1,...,p_{\kappa},q_1,...,q_{\kappa})\mapsto \sum_i\sqrt{p_iq_i}$ being continuous we get that there exists $\beta=\beta(\gamma, {\kappa})<1$ such that
$$\sup_{B_{\gamma}} \sum_i\sqrt{p_iq_i}\le\beta<1.$$
Finally, to get (\ref{krok}), one can now apply the above with $p_i=\omega(IJ)/\omega(I)$ and $q_i=\nu(IJ)/\nu(I)$, as $IJ$ runs over $\mathcal{E}_{n+K(I)}(I)$.
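As a simple numerical illustration (not needed for the argument), take $\kappa=2$ and $\gamma=2$: the point given by $p=(\frac23,\frac13)$, $q=(\frac13,\frac23)$ lies in $B_2$ (here $p_1=\frac23\ge 2q_1$), and
$$\sum_i\sqrt{p_iq_i}=2\sqrt{\tfrac29}=\frac{2\sqrt 2}{3}\approx 0.943<1,$$
consistently with the bound $\sup_{B_2}\sum_i\sqrt{p_iq_i}\le\beta(2,2)<1$.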
Now, (\ref{krok}) implies easily that for $n>K$,
\begin{equation}\label{Bourg}
\sum_{I\in\mathcal{E}_n}\omega(I)^{\frac{1}{2}}\nu(I)^{\frac{1}{2}}\le\tilde\beta^n
\end{equation}
with some $\beta<\tilde\beta<1$.
Next, fix some $s>\rho$ such that
\begin{equation}\label{s}
\tilde\beta \underline a^{\rho-s}<1
\end{equation}
Since $s>\rho=\dim_H(X)$, we have
$$\liminf_{n\to\infty}\lambda_{1,s}\lambda_{2,s}\dots \lambda_{n,s}=0.$$ Thus, there exists a sequence
$n_i\to\infty$ such that $\lim_{i\to\infty}\lambda_{1,s}\lambda_{2,s}\dots \lambda_{n_i,s}=0$. Fix such a sequence.
Obviously, one can assume that $\diam X=1$. Now, formula (\ref{measurenu}) gives
$$\nu(I_{n_i})= \frac{(\diam I_{n_i})^\rho}{\lambda_{1,\rho}\lambda_{2,\rho}\dots\lambda_{n_i,\rho}}.$$
Since $\lambda_{k,\rho}\le \underline a^{\rho-s}\lambda_{k,s}$, we can write, for every cylinder $I\in\mathcal{E}_{n_i}$,
$$\nu(I_{n_i})\ge {(\diam I_{n_i})^\rho}(\underline a)^{(s-\rho)n_i}\frac{1}{\lambda_{1,s}\lambda_{2,s}\dots\lambda_{n_i,s}}\ge {(\diam I_{n_i})^\rho}(\underline a)^{(s-\rho)n_i}, $$
for $n_i$ large, since the value of the omitted fraction tends to $\infty$.
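(The comparison of normalizing factors used above is elementary; assuming, as for piecewise linear maps, that $\lambda_{k,t}=\sum_{i=1}^N a_{k,i}^{t}$, it follows from
$$a_{k,i}^{\rho}=a_{k,i}^{s}\,a_{k,i}^{\rho-s}\le a_{k,i}^{s}\,\underline a^{\rho-s},$$
valid since $a_{k,i}\ge\underline a$ and $\rho-s<0$; summing over $i$ gives $\lambda_{k,\rho}\le\underline a^{\rho-s}\lambda_{k,s}$.)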
Inserting the lower bound for $\nu(I_{n_i})$ into (\ref{Bourg}) and using (\ref{s}) we get, for small positive $\varepsilon$,
\begin{equation}\label{bourg2}
\aligned
&\sum_{J\in\mathcal{E}_{n_i}}(\omega(J))^{\frac{1}{2}}(\diam(J))^{\frac{\rho-\varepsilon}{2}}\le\\
&\sum_{J\in\mathcal{E}_{n_i}}(\omega(J))^{\frac{1}{2}}(\nu(J))^{\frac{1}{2}}\underline a^{\frac{\rho-s}{2}n_i}\diam(J)^{-\frac{\varepsilon}{2}}\le\\
&\sum_{J\in\mathcal{E}_{n_i}}(\omega(J))^{\frac{1}{2}}(\nu(J))^{\frac{1}{2}}(\underline a)^{(\frac{\rho-s-\varepsilon}{2})n_i}\le\\
&\tilde\beta^{n_i}(\underline a)^{(\frac{\rho-s-\varepsilon}{2})n_i}=\left (\tilde \beta\underline a^{\rho-s}\underline a^{\frac{s-\rho-\varepsilon}{2}}\right )^{n_i}<\hat\beta^{n_i}
\endaligned
\end{equation}
with some $\hat\beta<1$, if $\varepsilon$ is small (since $s$ has been chosen so that $\tilde\beta \underline a^{\rho-s}<1$).
We shall show that (\ref{bourg2}) implies that $\dim\omega<\rho$.
Denote by ${\mathcal F}_{n_i}$ the family of all cylinders $I\in\mathcal{E}_{n_i}$ for which $\omega(I)<\diam(I)^{\rho-\varepsilon}$, and by
${\mathcal H}_{n_i}$ the family of the remaining cylinders in $\mathcal{E}_{n_i}$.
Then
$$\sum_{I\in {\mathcal H}_{n_i}}(\diam I)^{\rho-\varepsilon}
\le\sum_{I\in {\mathcal H}_{n_i}}\omega(I) \le 1$$
and
$$\sum_{I\in {\mathcal F}_{n_i}}\omega(I)=\sum_{I\in {\mathcal F}_{n_i}}\omega(I)^{\frac{1}{2}}\omega(I)^{\frac{1}{2}}\le
\sum_{I\in {\mathcal F}_{n_i}}\omega(I)^{\frac{1}{2}}\diam(I)^{\frac{\rho-\varepsilon}{2}}\le\hat\beta^{n_i}
$$
Thus, by the Borel--Cantelli lemma,
$$\omega\left (\bigcup_{i_0}\bigcap_{i=i_0}^\infty (\bigcup_{I\in{\mathcal H}_{n_i}}I )\right ) =1$$
On the other hand, we see, directly from the definition of Hausdorff measure, that the
$(\rho-\varepsilon)$-dimensional Hausdorff measure of the above set
is $\sigma$-finite.
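Indeed, for each fixed $i_0$ and every $i\ge i_0$, the set $\bigcap_{m=i_0}^\infty\bigcup_{I\in{\mathcal H}_{n_m}}I$ is covered by the cylinders of ${\mathcal H}_{n_i}$, whose diameters tend to $0$ as $i\to\infty$ and which satisfy
$$\sum_{I\in {\mathcal H}_{n_i}}(\diam I)^{\rho-\varepsilon}
\le\sum_{I\in {\mathcal H}_{n_i}}\omega(I) \le 1;$$
hence its $(\rho-\varepsilon)$-dimensional Hausdorff measure is at most $1$, and the set in question is a countable union of sets of finite measure.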
Therefore,
$\dim_H(\omega)\le\rho-\varepsilon$.
\end{proof}
\section{The alternative case}\label{alternative}
We will investigate the case when condition (*) of Proposition \ref{ineq} fails. We keep the notation of the previous sections. In particular, $X=X_0$ is an admissible Cantor set of dimension $\rho$. Let $\nu_k$ be the collection of $\rho$-conformal measures associated to $X$.
Note that (although this fact is not used in our proof) we can assume, using Theorem B,
that the starting measures $\nu_k$ are just the normalized $\rho$-dimensional Hausdorff measures.
\begin{prop}\label{impossiblecase2}
Suppose that for all $0<\gamma<1$ and $K\in \mathbb{N}$ there exists a cylinder $I$ such that for all subcylinders $IJ$, where $J$ is a word of length $\le K$, we have
\begin{equation}\label{hypothesis2}
\gamma<\left|\frac{\omega(IJ)}{\omega(I)}:\frac{\nu_0(IJ)}{\nu_0(I)}\right|<\frac{1}{\gamma}.
\end{equation}
Then we can construct another admissible Cantor set $\tilde X$ (not necessarily of dimension $\rho$), a $\rho$-conformal measure $\tilde\nu$ on $\tilde X$ and a bounded
subharmonic function $F\in\mathcal{G}_{\tilde X}$ such that $\Delta F=\tilde \nu$.
\end{prop}
\begin{proof}
Let $(\gamma_n)$ be a sequence of numbers in $(0,1)$, such that $\lim_{n\to\infty}\gamma_n=1$.
Under the hypothesis we can find a sequence $(I_n)_n$ of cylinders of length ${k_n}$ such that for every word $J$ of length $\le n$
\begin{equation}\label{simplify}
\gamma_{n}<\left|\frac{\omega(I_nJ)}{\omega(I_n)}:\frac{\nu_0(I_nJ)}{\nu_0(I_n)}\right|<\frac{1}{\gamma_n}.
\end{equation}
For any cylinder $I$ of length $k$, denote by $f_I$ the linear map $f_{k-1}\circ\dots\circ f_0$ mapping $Q_I$ onto $Q$.
Consider the functions $G_{k_n}$ defined in $Q$ by
$$G_{k_n}(x)=\frac{1}{\omega(I_n)}G(f_{I_n}^{-1}(x)).$$
Observe that $G_{k_n}\in{\mathcal G}_{X_{k_n}}$. Denote $\mu_{k_n}=\Delta G_{k_n}$. Thus, $\mu_{k_n}$ is a probability measure on $X_{k_n}$.
Let $J$ be a cylinder, identified, through the coding, with the appropriate subset of $X_{k_n}$.
Then
$$\mu_{k_n}(J)=\frac{\omega(I_{n}J)}{\omega(I_{n})}.$$
The formula~(\ref{simplify}) can be now rewritten as follows: for every cylinder $J$ of length $\le n$:
\begin{equation}\label{simplify2}
\gamma_{n}<\left|\mu_{k_n}(J):\nu_{k_n}(J)\right|<\frac{1}{\gamma_n}.
\end{equation}
We can now apply Propositions~\ref{limitcantor1} and~\ref{limitcantor2} to the sequence of admissible Cantor sets $X^{(n)}:=X_{k_n}$,
the associated $\rho$-conformal
measures $\nu_0^{(n)}:=\nu_{k_n}$ (and $\nu_m^{(n)}:=\nu_{k_n+m}$, $m=1,2,\dots$) and the sequence of functions $$F^{(n)}:=G_{k_n}\in\mathcal{G}_{X_{k_n}}=\mathcal{G}_{X^{(n)}}.$$
We obtain an admissible Cantor set $\tilde X$ and a function $\tilde G\in{\mathcal G}_{\tilde X}$ such that $\Delta \tilde G=\tilde \mu$,
$\tilde \mu$ being the limit of (a subsequence of) the measures $\mu_{k_n}$. Moreover, the measures $\nu_{k_n}$ converge weakly to the $\rho$-conformal measure $\tilde\nu$ on $\tilde X$.
On the other hand, the relation (\ref{simplify2}) implies that, for every cylinder $J$,
$$\frac{\mu_{k_n}(J)}{\nu_{k_n}(J)}\to 1$$
(where, again, $J$ is identified with an appropriate subset of $X_{k_n}$).
This implies (cf. Proposition \ref{limitcantor1}) that $\tilde\mu$ is a $\rho$-conformal measure on $\tilde X$, which completes the proof.
\end{proof}
\section{Rigidity argument}\label{volberg}
In this section we prove the following result which implies that the ``alternative case'' considered in the previous section cannot hold.
\begin{prop}\label{vol}
Let $X=X_0$ be an admissible Cantor set, and let $(\nu_k)_{k=0}^\infty$ be the collection of associated $\rho$-conformal measures, where $\rho$ is not necessarily equal to the Hausdorff dimension of the sets $X_k$. Further, let $\tilde G\in\mathcal{G}_X$
and let $\tilde\omega=\Delta \tilde G$.
Then the measures $\tilde\omega$ and $\nu=\nu_0$ do not coincide.
\end{prop}
\begin{proof}
Consider, again, the sets
\begin{equation}
X=X_0\stackrel{f_0}{\longrightarrow}X_1\stackrel{f_1}{\longrightarrow}X_2 \stackrel{f_2} {\longrightarrow}\dots
\end{equation}
and the family of functions $\tilde G_j$ defined inductively by setting $\tilde G_0=\tilde G$, $\tilde G_{k+1}=\mathcal{P}_k (\tilde G_k)$, and the corresponding measures $\tilde\omega_0=\tilde\omega=\Delta\tilde G_0$, $\tilde\omega_k=\Delta\tilde G_k$.
The proof of Proposition~\ref{vol} will be divided into two parts.
\subsection{Non-real case}
\begin{lem}\label{ra}
Assume that none of the sets $X_0, X_1, X_2\dots$ is contained in a set of zeros of a harmonic function defined in $Q$. If $\tilde\omega=\nu$ then for every cylinder $I\in\mathcal{E}_k$ there exists a constant $\alpha_I$ such that the equality
\begin{equation}
\tilde G_k\circ f^k=\tilde G_0\cdot \alpha_I
\end{equation}
holds everywhere in $Q_I$.
\end{lem}
\begin{proof}{\em of the lemma}
Since $\tilde\omega_k$ is the image of $\tilde\omega_0$ under the map $f^k$, $\nu_k$ is the image of $\nu_0$ under $f^k$ and also $\tilde \omega_0=\nu_0$, we have: $\tilde \omega_k=\nu_k$.
Consider now two measures in $Q_I$: $(\tilde\omega_0)_{|Q_I}$ and $(\tilde \omega_k\circ f^k)_{|Q_I}$.
We have
$$(\tilde\omega_k\circ f^k)_{|Q_I}=(\nu_k\circ f^k)_{|Q_I}=(\alpha_I\cdot \nu_0)_{|Q_I}$$
where $\alpha_I=|(f^k)'|^\rho_{|Q_I}\cdot \lambda_{0,\rho}\cdot\dots \cdot\lambda_{k-1,\rho}$ .
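The value of $\alpha_I$ can be read off from the explicit formulas (\ref{measurenu}) and (\ref{measurenuk}) for the conformal measures: for a Borel set $A\subset Q_I\cap X_0$,
$$\nu_k(f^k(A))=|(f^k)'|^{\rho}_{|Q_I}\,\lambda_{0,\rho}\cdots\lambda_{k-1,\rho}\;\nu_0(A)=\alpha_I\,\nu_0(A),$$
since $(f^k)'$ is constant on $Q_I$.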
But $(\tilde\omega_0)_{|Q_I}=\Delta((\tilde G_0)_{|Q_I})$ and
$(\tilde\omega_k\circ f^k)_{|Q_I}=\Delta((\tilde G_k\circ f^k)_{|Q_I})$.
Since the measures are equal in $Q_I$, we get
\begin{equation}\label{functionH}
(\tilde G_k\circ f^k) _{|Q_I}=(\tilde G_0)_{|Q_I}\cdot \alpha_I+H
\end{equation}
where $H$ is a harmonic function in $Q_I$.
On the other hand, both $ \tilde G_k\circ f^k$ and $\tilde G_0$ are equal to $0$ in $Q_I\cap X=I$ and by assumption the set $X_k$ (thus: also $X\cap Q_I=I$) is not contained in a set of zeros of a harmonic function. We deduce that $H$ must be equal to $0$ and
the lemma follows.
\end{proof}
We continue the proof of Proposition ~\ref{vol}. We keep the assumption of Lemma~\ref{ra}.
Consider two cylinders $I$, $I'$ of the same length $k$. Then $f^k(I)=f^k(I')=X_k$. Denote by $f^{-k}_{I'}$ the
branch of $f^{-k}$ mapping $X_k$ to $I'$ (and $Q$ to $Q_{I'}$).
Let $g=g_{II'}=f^{-k}_{I'}\circ f^k:Q_I\to Q_{I'}$.
Then, by Lemma~\ref{ra}, everywhere in $Q_I$,
\begin{equation}\label{dd}
\frac{\alpha_{I'}}{\alpha_I}\tilde G_0\circ g=\tilde G_0
\end{equation}
Now consider two cases.
\begin{enumerate}
\item{Case 1:} there exists $D>0$ such that for every $k\in\mathbb{N}$ and all $I,I'\in\mathcal{E}_k$,
$$\frac {{\rm diam}\, Q_I}{{\rm diam}\, Q_{I'}}<D;$$
\item{Case 2:} the ratios $\frac {{\rm diam}\, Q_I}{{\rm diam}\, Q_{I'}}$, $I,I'\in\mathcal{E}_k$, are unbounded.
\end{enumerate}
First, we deal with Case 2. In this case, we can choose the cylinders $I, I'$ so that $g$ is a strong contraction; since it is a linear map, it is actually defined everywhere in $\mathbb{C}$ and we have ${\rm cl}\, g(Q)\subset Q$, so
$$\bigcup_k g^{-k}(Q)=\mathbb{C}.$$
Now, two functions: $\frac{\alpha_{I'}}{\alpha_I}\tilde G_0\circ g$ and $\tilde G_0$ are defined and subharmonic in $Q$, harmonic in an open connected dense set $Q\setminus (X\cup g^{-1}(X))$. Since they coincide in an open set $Q_I$ (see (\ref{dd})), they coincide everywhere in $Q$. So, the formula
$$\frac{\alpha_{I'}}{\alpha_I}\tilde G_0\circ g$$ gives an extension of $\tilde G_0$ to a subharmonic function defined in $g^{-1}(Q)$ and, in the same way, to a subharmonic function defined everywhere in $\mathbb{C}$.
Now, choosing another pair of cylinders, we can produce another relation of the type (\ref{dd}) and another extension of $\tilde G_0$, say
$$\frac{\alpha_{J'}}{\alpha_J}\tilde G_0\circ h=\tilde G_0.$$
By the same argument as above, these two extensions must coincide.
We use the same letter $\tilde G_0$ for the extension just described.
In the reasoning below we use the following argument from A. Volberg's paper \cite{vol1}.
Denote
$$Z=\{z\in \mathbb{C}:\tilde G_0(z)=0\},$$
in particular,
\begin{equation}\label{zero}
Z\cap Q=X
\end{equation}
The set $Z$ is invariant under the action of both contractions $h$ and $g$, and, consequently, the action of the group generated by them. It is easy to see that this group contains arbitrarily small translations. Thus, there exists such a small translation $T$ that $T(X)\subset Q$. This would imply $T(X)\subset X$, a contradiction.
So, we are left with Case 1.
Given $k\in \mathbb{N}$, we consider all cylinders of length $k$. There are $N^k$ of them, and, by the assumption,
\begin{equation}\label{sim}
\frac {{\rm diam}\, Q_I}{{\rm diam}\, Q_{I'}}<D
\end{equation}
for $I, I'\in \mathcal{E}_k$.
For $I, I'\in\mathcal{E}_k$ let, as above $g_{II'}=f^{-k}_{I'}\circ f^k:Q_I\to Q_{I'}$.
Using (\ref{sim}) and the fact that ${\rm card}(\mathcal{E}_k)=N^k$ it is easy to see the following.
{\bf Claim.} Let $\delta={\rm dist}(X,\partial Q)$.
There exist $b_0$ with $0<|b_0|<\delta$ and a sequence $k_n\to\infty$ such that for every $k_n$ one can find two cylinders $I, I'\in \mathcal{E}_{k_n}$ such that, putting
$$g_{II'}(z)=\gamma_nz+b_n$$
we have
\begin{equation}\label{lim}
\gamma_n\to 1, ~~~~~~~~~~~~~~~~~~b_n\to b_0.
\end{equation}
The functions $\tilde G_0$ and $\tilde G_0\circ g_{II'}$ are continuous in $R:=Q\cap g_{II'}^{-1}(Q)$ and harmonic in the open connected dense set $R\setminus (X\cup g_{II'}^{-1}(X))$. Since they coincide in an open set $Q_I$, they coincide everywhere in $R$.
For $n$ sufficiently large we have $X\subset R$ and $g^{-1}_{II'}(X)\subset R$. Since both sets can be defined as the sets of zeros of $\tilde G_0$ and of $\tilde G_0\circ g_{II'}$, respectively, they must coincide.
Passing to a limit in (\ref{lim}), we see that $X$ would be invariant under a (small) translation; again a contradiction. This ends the proof of Proposition~\ref{vol} in the first case.
\subsection{Real case}
This case can be reduced to the previous one. We briefly describe the procedure: the previous proof goes through unchanged, until the formula (\ref{functionH}). Now, we cannot conclude that $H=0$. However,
(\ref{functionH}) implies that some $X_k$ is contained in a set of zeros of a harmonic function $H$.
Replacing $X_0$ by $X_k$, we can assume that $k=0$.
\begin{prop}
Let $X=X_0$ be an admissible Cantor set. Assume that there exists a harmonic function $H$ in $Q$ such that $X\subset\{z:H(z)=0\}$. Then there exists $k\ge 0$ such that $X_k$ is contained in a straight line.
\end{prop}
\begin{proof} Denote $l= \{z\in Q:H(z)=0\}$. Note that, after diminishing slightly the set $Q$ so that it still contains the whole set $X$,
we can assume that $l$ is a union of finitely many real analytic arcs $l=l_1\cup\dots\cup l_r$, and that the set of intersections $l_i\cap l_j$, $i\neq j$, is finite.
One can also assume that each such arc has infinitely many intersections with the set $X$.
Let $x\in X$ be an intersection point of some arcs, say $x\in l_1\cap l_2\cap X$.
Let $I$ be a cylinder containing $x$, let $I'$ be another cylinder of the same length and
let $x'=g_{II'}(x)$.
We claim that $x$ is an isolated point in either $l_1\cap X$ or $l_2\cap X$. Indeed,
otherwise take $x'= g_{II'}(x)$ and observe that the set $X$ in a neighborhood of $x'$ (more precisely: the set $X\cap Q_{I'}$) would be contained in a union of two intersecting arcs, and not contained in one arc. Since the total number of intersections of the arcs $l_1,\dots l_r$ is finite, and the number of possible choices of $x'$ is infinite, we get a contradiction.
\
Therefore, one can assume that $X$ is contained in a union of a finite number of analytic arcs $l_1, \dots l_r$, which do not intersect.
Pick a point $x\in X$ and a cylinder $I$ containing $x$, of sufficiently high generation $k$ so that the neighborhood $Q_I$ of $x$ intersects only one curve $l_j$. Then $f^k(Q_I)=Q$, $f^k(l_j\cap Q_I)$ is an analytic arc $L\subset Q$, and $X_k\subset L$.
\
The conclusion is that, replacing $X=X_0$ by some $X_k$, one can assume that $X$ is contained in one analytic arc $L$. We claim that $L$ is, actually,
a straight line. To check it, first notice that $g_{II'}(L\cap I)=L\cap I'$, thus
\begin{equation}\label{L}
g_{II'}(L\cap Q_I)=L\cap Q_{I'}
\end{equation}
Assume first that there are arbitrarily strong contractions among the maps $g_{II'}$. Then, for such a strong contraction, (\ref{L}) implies that
$g_{II'}(L)\subset L$.
If $L$ is not a straight line then there are three points in $L$ which are non-collinear.
Applying the maps (contracting similitudes) $g_{II'}$ and using the fact that $g_{II'}(L)\subset L$, we conclude that the curve $L$ would not be differentiable, a contradiction.
If there are no strong contractions among the maps $g_{II'}$ (case one in the proof of part 1) then, as before, one can produce arbitrarily small translations $\tau$ such that $\tau(L)\cap Q\subset L$. Thus, $L$ is a straight line.
\end{proof}
Composing the maps $f_k$ with rotations, we can assume that all the sets $X_k$ are contained in the real line $\mathbb{R}$. Thus, since all
the functions $H$ in the formulas (\ref{functionH}) must be equal to $0$ in $\mathbb{R}$, $H(\overline z)=-H(z)$ and we can symmetrize all the
formulas (\ref{functionH}) by taking $\hat G_k(z)=\tilde G_k(z)+\tilde G_k(\overline z)$. Then we get, instead of (\ref{functionH}),
$$(\hat G_k\circ f^k) _{|Q_I}=(\hat G_0)_{|Q_I}\cdot \alpha_I$$
and the proof of the previous case applies.
\end{proof}
\noindent{\bf Final conclusion: proof of Theorem A. }
\begin{proof}{\em of Theorem A.} The proof is now clear.
Indeed, either the harmonic measure and the $\rho$-conformal measure of $X$ satisfy relation $(*)$, and hence $\dim\omega<\dim_H X$ by Proposition \ref{ineq}, or $(*)$ fails and we get a contradiction by combining Propositions \ref{vol} and \ref{impossiblecase2}.
\end{proof}
\section{Further comments and remarks}\label{Comments}
In this paper the number of subdomains associated to an admissible map is fixed (equal to some $N$, cf. Section \ref{definition}). Modulo some small technical modifications, the proofs can be carried out if we consider sequences of admissible functions $(f_n)$ with varying multiplicities $2\le N_n\le N$.
We can also easily modify the proof to get a uniform bound on $\dim X-\dim\omega$. To see this, observe that the difference $\dim X-\dim\omega$ depends only on $\gamma$ and $K$ in Proposition \ref{ineq}. Therefore, we need to show that $\gamma$ and $K$ can be chosen uniformly once $\underline a$, $M$ and $N$ are fixed. If this uniform choice fails, then
for all $\gamma>1$ and $K>0$ there exist a set $X$ and a cylinder $I$ as in Proposition \ref{impossiblecase2}. Using once again the diagonal argument (Proposition \ref{limitcantor2}), we return to the situation of Section \ref{volberg} and deduce a contradiction.
Nevertheless, the hypothesis on the upper bound of the multiplicities (and hence the lower bound $\underline a$ on the contraction ratios) cannot be omitted, as the following proposition shows.
\begin{prop}
There exists an (unbounded) sequence $N_n$ and a sequence of admissible functions $(f_n)$ of multiplicities $N_n$ such that the dimension of the harmonic measure $\omega$ of the Cantor set $X$ associated to $(f_n)$ is equal to the Hausdorff dimension of the set.
\end{prop}
Let us give a sketch of the proof of this statement.
\begin{proof} Consider, for instance, the self-similar triadic linear Cantor set $X_0$ that we identify with the symbolic dyadic tree.
If $\sigma$ is the left shift, $I\in{\mathcal E}_n$ a cylinder of length $n$ and $K$ any set, we will write $IK$ for the set $\sigma^{-n}(K)\cap I$. So, $IK$ is a subset of $I$.
It is well known that the dimension $\tau$ of the harmonic measure $\omega_{X_0}$ of $\R^2\setminus X_0$ is strictly smaller than the Hausdorff dimension of the set $X_0$. Take $K_0\subset X_0$ to be a compact set of dimension $\tau$ and of harmonic measure $\omega_{X_0}(K_0)>\frac12$.
Then, we can find a finite covering ${\mathcal J}_1$ of $K_0$ with cylinders $(I^1_j)_j$ with $I^1_j\in {\mathcal J}_1\subset {\mathcal E}_1\cup...\cup{\mathcal E}_{N_1}$ such that $\sum_j\diam (I^1_j)^{\tau+\frac{\tau}{2}}<\frac{1}{2}$.
Choose $K_1\supset K_0$ compact of dimension $\tau$ and such that
$\omega_{X_0}(K_1)>\frac34$. Since $\dim_H(I\cap K_0)\le\tau$ for all cylinders $I$, we can augment $K_1$ with all images $\sigma^{n}({K_0})$, $n=1,..., N_1$. We can therefore assume that $I\cap K_0\subset IK_1$ for all $I\in {\mathcal J}_1$ (but still $\dim_H(K_1)=\tau$).
There is a finite collection ${\mathcal J}_2$ of cylinders
$(I^2_j)_j$ with $I^2_j\in {\mathcal J}_2\subset {\mathcal E}_1\cup...\cup{\mathcal E}_{N_2}$
covering $K_1$ and verifying
$$\sum_j\diam (II^2_j)^{\tau+\frac{\tau}{4}}<\frac{1}{2^2}\diam(I)^{\tau+\frac{\tau}{ 2}},$$
for any cylinder $I\in\mathcal{J}_1$.
We proceed by induction.
Assume we have constructed ${\mathcal J}_n\subset {\mathcal E}_1\cup...\cup{\mathcal E}_{N_{n}}$, a finite collection of cylinders covering a compact set $K_{n-1}$ satisfying
\begin{itemize}
\item $K_0\subset\cdots\subset K_{n-1}$ and $I\cap K_{n-2}\subset IK_{n-1}$ for all $I\in{\mathcal J}_{n-1}$
\item $\dim K_{n-1}=\tau$
\item $\omega_{X_0} (K_{n-1})>(1-\frac{1}{2^{n-1}})$
\item $\sum_{J\in{\mathcal J}_n}\diam (IJ)^{\tau+\frac{\tau}{2^{n}}}<\frac{1}{2^n}\diam (I)^{\tau+\frac{\tau}{2^{n-1}}}$, for all $I\in{\mathcal J}_{n-1}$.
\end{itemize}
Take $K_n\supset K_{n-1}$, a compact set of dimension $\tau$, such that $I\cap K_{n-1}\subset IK_n$, for all $I\in {\mathcal J}_n$ and verifying
$$\omega_{X_0}( K_n)>(1-\frac{1}{2^n}).$$
There is a finite collection ${\mathcal J}_{n+1}$ of cylinders
$(I^{n+1}_j)_j$ with $I^{n+1}_j\in {\mathcal J}_{n+1}\subset {\mathcal E}_1\cup...\cup{\mathcal E}_{N_{n+1}}$ such that the sets $(I^{n+1}_{j})_j$ cover $K_n$ and verify
$$\sum_j\diam (II^{n+1}_{j})^{\tau+\frac{\tau}{2^{n+1}}}<\frac{1}{2^{n+1}}\diam(I)^{\tau+\frac{\tau}{ 2^n}},$$
for every cylinder $I$ from $\mathcal{J}_n$.
Note that by Harnack's principle there exists a constant $C>0$ such that, for all cylinders $I$,
$$\omega_{X_0}(IK_n)>\left(1-C\frac{1}{2^n}\right)\omega_{X_0}(I).$$
Consider the Cantor set
$$X=\bigcap_{n\in\N}\bigcup_{I_1\in{\mathcal J}_1}...\bigcup_{I_n\in{\mathcal J}_n} I_1...I_n.$$
Note that $K_0\subset X\subset X_0$. Moreover, by construction, the Hausdorff dimension of $X$ is less than or equal to $\tau$, and since $K_0\subset X$ it is equal to $\tau$. On the other hand, by the monotonicity of the measure, $\omega_X(A)\ge\omega_{X_0}(A)$, for all $A\subset X$.
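The bound $\dim_H X\le\tau$ can also be made quantitative by iterating the covering estimate of the induction (a sketch): each generation gains a factor $\frac{1}{2^k}$, so
$$\sum_{I_1\in{\mathcal J}_1}\cdots\sum_{I_n\in{\mathcal J}_n}\diam (I_1\cdots I_n)^{\tau+\frac{\tau}{2^{n}}}<\frac{1}{2^{n}}\sum_{I_1\in{\mathcal J}_1}\cdots\sum_{I_{n-1}\in{\mathcal J}_{n-1}}\diam (I_1\cdots I_{n-1})^{\tau+\frac{\tau}{2^{n-1}}}<\cdots<\frac{1}{2^{1+2+\cdots+n}}.$$
For fixed $\varepsilon>0$ and $n$ large we have $\tau+\frac{\tau}{2^n}<\tau+\varepsilon$, and all diameters are smaller than $1$, so these sums also bound $\sum\diam(I_1\cdots I_n)^{\tau+\varepsilon}$; letting $n\to\infty$ gives ${\mathcal H}^{\tau+\varepsilon}(X)=0$ and hence $\dim_H X\le\tau$.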
We only need to show that $\dim\omega_X=\tau$. Suppose that $\dim\omega_X<\tau$. Then, there exists $A\subset X$ such that $\dim A<\tau$ and $\omega_X(A)=1$. We deduce that
$\omega_X(X\setminus A)=0$ and a fortiori, $\omega_{X_0}(X\setminus A)=0$. Therefore,
$\omega_{X_0}(K_0)=\omega_{X_0}(K_0\cap A)$ and $\dim(K_0\cap A)<\tau$ which is absurd.
\end{proof}
\section{Introduction}
An important deficiency in the string theory of black holes is the fact
that the simplest solutions, {\it e.g.,\,}\ Schwarzschild or Kerr, are
rather different from those whose entropy has been statistically
reproduced. The original black hole studied by Strominger and Vafa
\cite{SV} was charged and supersymmetric.
The charges of a black hole serve as tags that help
identify its microscopic constituents in string theory. In addition, when
the solution is supersymmetric, the phase space of the system is
drastically constrained and subject to powerful non-renormalization
theorems and state-counting techniques. Neutral black holes, however,
carry a minimal set of quantum numbers---mass and angular
momentum---and so it seems hard to restrict the phase space to a sector
which is simple enough to count the microstates.
Nevertheless, we will argue that there exist vacuum black holes in M
theory that can be mapped to well-defined bound states of D-branes in
string theory, and which, in certain limits, become asymptotically flat
black holes. We start with dyonic solutions of five-dimensional
Kaluza-Klein (KK) theory \cite{gw}. Ref.~\cite{itz} showed that in a
certain limit, akin to the decoupling limit in AdS/CFT, these solutions
include the five-dimensional neutral rotating black hole of Myers and
Perry \cite{MP}, which is asymptotically flat. Reversing this procedure,
one can view the KK black hole as the Myers-Perry black hole placed at
the tip of a Taub-NUT geometry. A similar connection between four and
five dimensional black holes has been discussed recently for
supersymmetric solutions with additional charges and self-dual angular
momentum \cite{Gaiotto:2005gf}. Here we are considering the simplest
case of a black hole with generic $J_1$ and $J_2$ and no extra charges.
Taking the product with a flat $T^6$, we obtain a solution to M theory,
whose IIA reduction has D0 and D6 charge.
Even in the extremal limit, this black hole is not supersymmetric
\cite{KO}, in accord with the absence of supersymmetric bound states of
D0 and D6 branes. There are, however, non-supersymmetric, quadratically
stable, D0-D6 bound states \cite{wati}, and these will serve as a basis
to our microscopic picture. We will provide a simple string
description that exactly reproduces the entropy and mass of the extremal black
hole.
The entropy of some nonsupersymmetric, extremal black holes has been
reproduced before \cite{nosusy}. However, that required black holes with
four charges (in four dimensions) while we have only two. More
importantly, unlike previous examples our solutions are pure vacuum in
higher dimensions. The entropy of neutral black holes can be understood
in terms of a correspondence principle \cite{Horowitz:1996nw}, but that
does not reproduce the precise coefficient.
Earlier work attempting to obtain statistically the precise entropy of
asymptotically flat neutral black holes by different means include
\cite{argurio,ddbar}. Previous attempts at providing a microscopic
description of D0-D6 black holes include
\cite{sheinblatt,larsen2}.
\section{Kaluza-Klein and Myers-Perry black holes}
We begin by reviewing the KK black holes (for a detailed description,
see \cite{gw}).
These black holes are characterized (in four dimensions) by their mass
$M$, angular momentum $J$, and electric
and magnetic
charges $Q$
and $P$. They satisfy the inequality
\begin{equation}\label{bound}
2G_4M\geq \left(Q^{2/3}+P^{2/3}\right)^{3/2}\,,
\end{equation}
which, at slow rotation $G_4 J<PQ$, is saturated in the extremal limit
independently of $J$. When
$P=Q$ and $J=0$ the
four-dimensional geometry becomes exactly the same as the
Reissner-Nordstrom black hole.
In the five-dimensional vacuum solution, let $y$ be the compact KK
dimension, $y\equiv y+2\pi R$. This circle is fibered over the two-spheres
of spherical symmetry. Since $S^1$ bundles over $S^2$ are labeled by an
integer, the magnetic
charge must be quantized in terms of the radius $R$. The electric charge
is also quantized, since it corresponds
to momentum in the $y$-direction. More precisely,
\begin{equation}\label{nsix}
Q=\frac{2G_4 N_0}{R}, \qquad P=\frac{N_6 R}{4}
\end{equation}
for integers $N_0$ and $N_6$ (the reason for this notation will become
clear below).
The five-dimensional interpretation of these solutions is quite
interesting. In the absence of magnetic charge, the horizon has topology
$S^1\times S^2$, where $S^1$ is the KK circle, and the
solution is a black string boosted along $y$. However, the topology
changes when $P\neq 0$. If $N_6=1$, the $y$-circle and spherical $S^2$
combine into a topological $S^3$. In the extremal limit with $Q=0$ and $J=0$, the
solution becomes the KK monopole. The geometry can be
described as a `cigar' fibered on the orbital $S^2$.
If we add electric charge or energy above extremality, we find a finite
black hole horizon at the tip of the cigar. So magnetically charged
KK black holes are five-dimensional black holes with horizon
topology $S^3$, localized inside a Taub-NUT geometry.
The electric charge does not
correspond to a boost, but rather to \textit{rotation} of the black hole
aligned with the KK circle. A component of the rotation that
is not aligned with the five-dimensional fiber gives rise to
four-dimensional rotation.
If the size of the black hole is much smaller than the
KK radius $R$, then finite-size effects become
negligible and we recover the five-dimensional Myers-Perry black hole,
as explained in \cite{itz}.
In this limit, the four-dimensional mass is dominated by the mass
of the KK monopole, and
the excitation energy above the KK monopole
is equal to the ADM mass of the five-dimensional black hole.
The angular momenta in five
dimensions are related to the electric
charge and four-dimensional angular momentum as
$J_1+J_2=N_0 N_6^2$ and $J_1-J_2=2J N_6$.
The identification with a five-dimensional black hole is a local one.
Globally,
the asymptotic spatial geometry is actually the orbifold
$\mathbf{R}^4/\mathbf{Z}_{N_6}$. So only
configurations with $N_6=1$ give rise to globally
asymptotically flat solutions. When $N_6>1$ the black hole sits at the
tip of a conical space.
The entropy of the KK black hole is particularly simple in the
extremal limit \cite{larsen2},
\begin{equation}\label{Sbh}
S=\frac{A_{(4)}}{4G_4}=2\pi\sqrt{\frac{P^2Q^2}{G_4^2}-J^2}
=2\pi \sqrt{\frac{N_0^2 N_6^2}{4}-J^2}\,.
\end{equation}
This is independent of the circle radius $R$, so it also corresponds to
the entropy of the extremal
Myers-Perry black hole after the limit of infinite radius $R$ is taken.
It was noted in \cite{Dhar:1998ip,larsen2} that the entropy depends only
on the integer normalized charges. This is a strong indication that a
microscopic counting of the states is possible.
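In fact the $R$-independence follows directly from (\ref{nsix}): $PQ/G_4=(N_6R/4)(2G_4N_0/R)/G_4=N_0N_6/2$. A numerical spot-check of this cancellation (the parameter values below are arbitrary):

```python
# Check that P*Q/G4 = N0*N6/2 for any compactification radius R, so the
# extremal entropy S = 2*pi*sqrt((P*Q/G4)**2 - J**2) depends only on the
# integer charges N0, N6 (and J).
from fractions import Fraction

N0, N6, G4 = 8, 12, Fraction(3, 7)
for R in (Fraction(1, 2), Fraction(5, 3), Fraction(11)):
    Q = 2 * G4 * N0 / R       # electric charge, quantized as above
    P = Fraction(N6) * R / 4  # magnetic charge, quantized as above
    assert P * Q / G4 == Fraction(N0 * N6, 2)
```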
\section{Microscopic description}
In order to count the microstates of the KK black hole, we take
the product with $T^6$ (with volume $(2\pi)^6 V_6$) and view it as a
vacuum solution in M theory with the KK circle being the M theory
circle. By the usual relation between M theory and IIA string theory,
$R=g l_s$. The electric and magnetic charges now correspond to D0 and
D6-branes, and $N_0$ and $N_6$ are simply the net number of D-branes.
The quantization condition (\ref{nsix}) can now be written
\begin{equation}
Q= 2G_4 M_0 N_0\,, \qquad P= 2G_4 M_6 N_6
\end{equation}
where the masses of individual D0 and D6-branes are
\begin{equation}
M_0 = {1\over g l_s }\,, \qquad M_6 = {V_6\over g l_s^7 }\,
\end{equation}
and $G_4=g^2 l_s^8/8V_6$. So the ADM mass of our extremal black hole is
\begin{equation}\label{bhmass}
M= [(M_0 N_0)^{2/3} + (M_6 N_6)^{2/3}]^{3/2}\,.
\end{equation}
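As a consistency check of these conventions, the IIA form of the charges reduces to (\ref{nsix}) once $R=gl_s$ and $G_4=g^2l_s^8/8V_6$ are substituted; the following sketch verifies the algebra with arbitrary rational values:

```python
# With R = g*ls, M0 = 1/(g*ls), M6 = V6/(g*ls**7) and G4 = g**2*ls**8/(8*V6),
# the IIA charges Q = 2*G4*M0*N0 and P = 2*G4*M6*N6 reduce to the
# Kaluza-Klein quantization Q = 2*G4*N0/R and P = N6*R/4.
from fractions import Fraction

g, ls, V6 = Fraction(3, 10), Fraction(2), Fraction(41, 10)
N0, N6 = 3, 7
R = g * ls
G4 = g**2 * ls**8 / (8 * V6)
M0 = 1 / (g * ls)
M6 = V6 / (g * ls**7)
assert 2 * G4 * M0 * N0 == 2 * G4 * N0 / R
assert 2 * G4 * M6 * N6 == Fraction(N6) * R / 4
```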
To reproduce this mass and the entropy formula (\ref{Sbh}) we will pass
to a T-dual configuration where the microscopic description becomes more
transparent. For simplicity we consider first the case without
four-dimensional rotation, $J=0$, and will discuss $J\neq 0$ near the end.
We first recall the situation for the supersymmetric four-charge
black holes in Type II string theory compactified on $T^6$. There are
many possible choices for the charges, all related by U-duality. For our
purposes, the most useful is in terms of four stacks of D3-branes
\cite{Balasubramanian:1996rx}. Any two stacks intersect over a line, and
all four intersect at a point. The orientation of the first three stacks
can be chosen arbitrarily, but to preserve supersymmetry, the
orientation of the last set of D3-branes is then fixed. We are
interested in the case where the number of branes in each stack is the
same, say $N$. The moduli of the $T^6$ then remain constant, and the
solution reduces to the product of
$T^6$ and extreme Reissner-Nordstrom,
\begin{equation}
ds^2 = -\left(1+{r_1\over r}\right)^{-2} dt^2 + \left(1+{r_1\over
r}\right)^2(dr^2 + r^2 d\Omega_2)\,.
\end{equation}
Assuming a square torus with equal size circles and $V_6=(V_3)^2$, the
constant $r_1$ is related to the number of 3-branes $N$ via
\begin{equation}
r_1= \frac{g N l_s^4}{2V_3}\,.
\end{equation}
The ADM mass is
\begin{equation}
M = {r_1\over G_4} = {4N V_3\over g l_s^4 }
\end{equation}
which is just the mass of the four stacks of $N$ 3-branes wrapped around
the torus, and the black hole entropy is
\begin{equation}\label{S3branes}
S = {A\over 4G_4} = 2\pi N^2\,.
\end{equation}
Although the explicit counting of states for supersymmetric
four-dimensional black holes is easier to carry out with a different
choice of charges \cite{Maldacena:1996gb}, the fact that it reproduces
(\ref{S3branes}) and is related by U-duality ensures that the D3-branes
also contain precisely the right number of states (at large $N$) to
reproduce the black hole entropy. Furthermore, since the entropy is
independent of the moduli of the torus, it seems clear that the states
are associated with the intersection point of the branes.
Bound states of four D0-branes and four D6-branes were described by
Taylor in terms of a gauge theory configuration on the worldvolume on
the 6-brane \cite{wati}. He pointed out that after applying T-duality
along three cycles on the torus, this configuration was equivalent to
four D3-branes in a configuration very similar to the one described
above. However, there are two important differences. The orientations
correspond to broken supersymmetry, and the branes are wrapping the
diagonals of the torus. To be explicit, consider first a square $T^2$ with
coordinates $(x_1,x_2)$. The two diagonals are given by $x_2 = \pm x_1$
which we will call the $+$ and $-$ cycle. If we orient the cycles so
that $x_2$ always increases, then a configuration of two strings
wrapping both diagonals has net winding number two around $x_2$ and zero
winding around $x_1$ (see Fig.~\ref{fig:diagonals}).
\begin{figure}
\begin{center}\leavevmode %
\epsfxsize=4.5cm
\epsfbox{diagonals.eps}
\end{center}
\caption{\small Branes wrapping the diagonals of a torus. There are two
intersection points, at the origin and at the middle of the square. We
assume that each
intersection contributes a microscopic entropy equal
to that of a supersymmetric intersection of branes.
}
\label{fig:diagonals}
\end{figure}
Now view $T^6$ as the product of three $T^2$'s,
with coordinates $(x_1,x_2), (x_3,x_4)$ and $(x_5,x_6)$ respectively.
The 3-branes are all wrapped around one diagonal of each $T^2$ and
oriented so that the even coordinates always increase. So the
configuration can be labelled by specifying which of the diagonals is
wrapped on each $T^2$. If we T-dualize in the $2,4,6$ directions, the
configuration dual to the four D0-branes and
four D6-branes is
\begin{equation}\label{config}
(+++), (+--),(-+-),(--+)
\end{equation}
where the first entry corresponds to the first torus, etc. By
construction, each brane wraps the cycle (246) once, and since each
entry has an even number of minus signs, each brane also wraps the cycle
(135) once. It is easy to check that the net winding about any other
3-cycle (such as (146)) is zero. So even though there are four 3-branes,
the net nonzero charges are just (135) and (246), which is what one
expects after three T-dualities of D0-D6.
If we replace the single 3-brane around each cycle in (\ref{config}) by
$N$, we obtain a configuration with charge $4N$ around each of the
cycles (135) and (246). This reproduces the ADM mass of the black hole.
After three T-dualities, we get $N_0=N_6=4N$ and hence (\ref{bhmass})
becomes $ M = 2^{3/2} 4N V_3/ g l_s^4$ \footnote{Starting with a symmetric $T^6$
for the intersecting 3-branes and applying three T-dualities results
in a torus with volume $V_6= l_s^6$ (and changes the string coupling to $g l_s^3/V_3$). So the 6-brane and 0-brane have equal
mass.}. This is equal to the mass of the 3-branes since the length of each
leg is $\sqrt 2$ larger than before, so the volume of
each 3-brane is $2^{3/2}$ greater.
The fact that the mass does not saturate a BPS bound is just a
reflection of the fact that the branes are wrapping cycles of larger
volume.
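The $2^{3/2}$ factor can be checked directly from (\ref{bhmass}): with equal D0 and D6 masses $m$ (as arranged in the footnote) and $N_0=N_6=4N$, the formula collapses to $2^{3/2}\,(4Nm)$. A numerical sketch, with arbitrary values of $m$ and $N$:

```python
# With M0 = M6 = m and N0 = N6 = 4*N, the mass formula
# M = [(M0*N0)**(2/3) + (M6*N6)**(2/3)]**(3/2) collapses to 2**(3/2)*(4*N*m):
# four stacks of N 3-branes, each leg longer by sqrt(2), volume up by 2**(3/2).
m, N = 1.7, 5
M = (2 * (m * 4 * N) ** (2 / 3)) ** (3 / 2)
assert abs(M - 2 ** 1.5 * 4 * N * m) < 1e-9
```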
What about the entropy? At first sight there appears to be a
discrepancy. If we make the reasonable assumption that the entropy of
the intersecting 3-branes is unaffected by the change in orientation
and rotation of the branes we would expect $S= 2\pi N^2$ as in
\reef{S3branes}. However, since
$N_0 = 4N$ and $N_6=4N$ (and $J=0$) the black hole entropy \reef{Sbh} is
\begin{equation}
S_{bh} = 16\pi N^2
\end{equation}
which is larger by a factor of eight. However, rotating the branes
increases the number of intersection points \footnote{We thank Juan
Maldacena for suggesting this.}. On a $T^2$, the diagonals have two
intersection points (see Fig.~\ref{fig:diagonals}). Since the branes
have two intersection
points on each of the three $T^2$'s, there are a total of eight
intersection points.
The total Hilbert space is a
tensor product of the states at each intersection point and hence
\begin{equation}
S_{branes}=8\times 2\pi N^2\,.
\end{equation}
Thus, a simple weak coupling calculation reproduces
the black hole entropy exactly.
It is easy to generalize this to the case of unequal charges (in terms
of gauge fields on the 6-brane, this was done in
\cite{Dhar:1998ip,larsen2}). The configuration of branes is again given
by (\ref{config}) where $\pm$ now refer to more general cycles than just
the diagonal. Let $\pm$ denote the cycles $x_2= \pm k x_1/l$ for
relatively prime integers $k,l$ (and similar cycles on the other two
$T^2$'s with the same integers $k,l$). The configuration of branes
(\ref{config}) now has charge $4k^3$ along (246) and charge $4l^3$ along
(135). The mass of each brane is now $(k^2 +l^2)^{3/2}$ larger just from
the increase in area of the three-cycle being wrapped. This agrees with
the ADM mass since with $N$ branes wrapped around each of the cycles,
$N_0=4 k^3 N$, $N_6=4 l^3N$ so (\ref{bhmass}) yields
\begin{equation}
M_{bh} = \frac{4N(k^2 + l^2)^{3/2}\;V_3}{g l_s^4 } = M_{branes}\,.
\end{equation}
In retrospect, the presence of $3/2$ in the exponent of the black hole mass
is an indication of
a microscopic description in terms of 3-branes.
The entropy also comes out exactly right since the $+$ and $-$ cycles
now have $2kl$ intersection points on each $T^2$ (see
Fig.~\ref{fig:unequal}). So the
collection of
3-branes has a total of $(2kl)^3$ intersection points. The entropy is
thus
\begin{eqnarray}\label{Sexact}
S_{branes} &=& (2kl)^3 \times 2\pi N^2 = \pi (4Nk^3)(4N l^3)
\nonumber\\
&=&\pi N_0 N_6= S_{bh}\,.
\end{eqnarray}
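The equality in (\ref{Sexact}) is the algebraic identity $(2kl)^3\cdot 2\pi N^2=\pi(4Nk^3)(4Nl^3)$; a quick spot-check over a few values of $k$, $l$, $N$:

```python
from math import pi

# (2*k*l)**3 intersection points, each contributing entropy 2*pi*N**2,
# versus the black-hole answer pi*N0*N6 with N0 = 4*N*k**3, N6 = 4*N*l**3.
for k, l, N in [(1, 1, 2), (3, 1, 5), (2, 5, 7)]:
    S_branes = (2 * k * l) ** 3 * 2 * pi * N ** 2
    S_bh = pi * (4 * N * k ** 3) * (4 * N * l ** 3)
    assert abs(S_branes - S_bh) < 1e-9 * S_bh
```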
\begin{figure}
\begin{center}\leavevmode
\epsfxsize=11cm
\epsfbox{unequal.eps}
\end{center}
\caption{\small Generalization to unequal
charges and non-trivial moduli. The branes wrap a rational direction
$k/l$ of the torus (in the figure, $k=3$, $l=1$), so there are $2kl$
intersection points on each $T^2$. In the limit to the
five-dimensional Myers-Perry black hole, the torus shrinks along
$x_{2}$. }
\label{fig:unequal}
\end{figure}
The limiting case of a Myers-Perry black hole described above
requires only a slight generalization (see Fig.~\ref{fig:unequal}).
We want to take $R\rightarrow \infty$ keeping $G_5$, $G_{11}$, $N_0$ and
$N_6$ fixed. In the D0-D6 frame, this corresponds to taking
$g\rightarrow \infty$ keeping $V_6$ fixed in Planck units. Since the
eleven-dimensional Planck length $l_p$ is given by $l_p= g^{1/3} l_s$,
when we T-dualize along a direction, the new circle has length $\tilde L
\sim l_s^2/L \sim g^{-2/3}$. Thus, after T-duality in the $2,4,6$
directions, the size of these three circles goes to zero in the limit.
(The IIB string coupling remains finite since $\tilde g \sim g l_s^3/L^3
\sim l_p^3/L^3$). We again obtain 3-branes wrapping the cycles
(\ref{config}), but they become essentially parallel as we approach the
Myers-Perry black hole, all wrapping the (135) cycle with a positive
orientation. Since the entropy is moduli-independent, the equality
between statistical and black hole entropies holds as in \reef{Sexact}.
Finally, to allow for $J\neq 0$ we assume that $J$ is evenly distributed
among the $(2kl)^3$ intersections of 3-branes so each one carries
angular momentum $J_0 = J/(2kl)^3$. In the $(0,4)$ theory that describes
the four-charge system, to account for $J_0$ we align the polarization of
$J_0^2/N^3$ fermionic left-moving excitations (out of $N$) while the
right-movers remain unexcited. The entropy is then $2\pi \sqrt{N^3(N -
J_0^2/N^3)}$. (This is the nonsupersymmetric analog of \cite{HLM}).
Assuming that this applies to each intersection, the mass formula is not
modified
but the total microscopic entropy becomes
\begin{equation}
S_{branes}=(2kl)^3 \times 2\pi \sqrt{N^4 - J_0^2}
=2\pi \sqrt{\frac{N_0^2 N_6^2}{4} - J^2}\,.
\end{equation}
This reproduces \reef{Sbh} and hence also the entropy of the extremal
rotating Myers-Perry black holes with generic angular momenta.
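The last step uses $(2kl)^6N^4=N_0^2N_6^2/4$ together with $J_0=J/(2kl)^3$; the following sketch checks the identity numerically (the integers are arbitrary):

```python
from math import pi, sqrt

# Entropy per intersection 2*pi*sqrt(N**4 - J0**2) with J0 = J/(2*k*l)**3,
# summed over (2*k*l)**3 intersections, versus 2*pi*sqrt(N0**2*N6**2/4 - J**2).
k, l, N, J = 2, 3, 5, 17
n_int = (2 * k * l) ** 3
J0 = J / n_int
N0, N6 = 4 * N * k ** 3, 4 * N * l ** 3
S_branes = n_int * 2 * pi * sqrt(N ** 4 - J0 ** 2)
S_bh = 2 * pi * sqrt(N0 ** 2 * N6 ** 2 / 4 - J ** 2)
assert abs(S_branes - S_bh) < 1e-9 * S_bh
```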
\section{Discussion}
It seems remarkable that a simple system of D-branes is able to
reproduce the mass and entropy of a vacuum black hole which is far from
being supersymmetric. It is not clear to us why this is working so well,
but it hints at further simplifications that might be possible for
neutral black holes. In particular, it is intriguing that we need to use
four different sets of branes, even though (from the IIA standpoint)
there are only two charges. This is reminiscent of
earlier suggestions that neutral black holes should be viewed as
collections of branes and anti-branes \cite{Horowitz:1996ay,ddbar}.
There is a mysterious duality invariant formula which reproduces the
entropy of all nonextremal black holes (including Schwarzschild) in
terms of branes and antibranes. It is not yet clear how to derive this
in string theory, but the above construction seems a
step in the right direction.
Various other open questions remain: (1) We have only considered extreme
KK black holes. Can one count the microstates of near-extremal
solutions? This would bring us a little closer toward understanding
Schwarzschild. (2) Can one reproduce the entropy of KK black holes with
$N_6=1$? The constructions above seem to require that $N_6\ge 4$
(actually $N_6\gg 4$, since assigning an entropy $2\pi N^2$ to each
intersection is justified only for large $N$). (3) Can one replace $T^6$
with general Calabi-Yau spaces and still count the entropy? Since mirror
symmetry can be viewed as T-duality on a $T^3$, an initial collection of
D0 and D6-branes goes over to a collection of D3-branes under this
symmetry. This suggests that the above construction may have a natural
generalization. (4) Rather than working with D3-branes, can one
understand the entropy of the above black holes directly in M theory (in
terms of gravitons and perhaps branes) or in terms of D6-branes with
flux corresponding to D0-branes? This latter case corresponds to
counting the number of instantons (of a certain type) in six-dimensional Yang-Mills
theory.
\begin{acknowledgments}
We thank the KITP, Santa Barbara for the stimulating program ``Scanning
New Horizons: GR Beyond 4 Dimensions" where this work was begun. We also
thank R.~Kallosh, F.~Larsen, J.~Polchinski, S.~Trivedi, and especially
J.~Maldacena for discussions. GH thanks the IAS Princeton for their
hospitality. RE was supported in part by DURSI 2005 SGR 00082, CICYT FPA
2004-04582-C02-02 and EC FP6 program MRTN-CT-2004-005104. GH was
supported in part by NSF grants PHY-0244764 and PHY-0555669.
\end{acknowledgments}
Q: Rails 3 and delayed_job on production The delayed_job gem works great with my Rails 3 app in development, but when I deploy to production with Capistrano I get this error:
script/delayed_job: Permission denied
I am using their recommended method, and I followed this RailsCasts episode and wiki guide:
http://railscasts.com/episodes/171-delayed-job-revised
https://github.com/collectiveidea/delayed_job/wiki/Rails-3-and-Capistrano
A: Without knowing anything more (which users are you using? what do the permissions on the file look like?), I can't give you a better solution than to try
chmod a+x script/delayed_job
to give everyone execution permission for the script/delayed_job file...
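A quick way to convince yourself that this is purely a Unix file-permission issue — sketched here on a throwaway script rather than your actual deploy tree:

```shell
tmp=$(mktemp)
printf '#!/bin/sh\necho ok\n' > "$tmp"
chmod a-x "$tmp"                       # simulate the broken deploy
"$tmp" 2>/dev/null || echo "denied"    # the shell refuses to execute it
chmod a+x "$tmp"                       # the fix from above
"$tmp"                                 # now runs and prints: ok
rm -f "$tmp"
```

On Capistrano deployments the execute bit can be lost when files are copied, so you may want to run the chmod as part of a deploy task so it is re-applied on every release.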
Q: setInterval stopping prematurely I'm trying to increment the width of a div every half a second. It should continue to expand as long as it is less than 100% wide. However, it is stopping at 10%:
JSFIDDLE
$('.button').click(function(){
var progress = setInterval(function(){
if( $(".bar").css('width') < '100%') {
$('.bar').animate({ width: '+=10%' });
} else {
clearInterval(progress);
}
}, 500)
});
Would anyone know why?
A: Why are you playing with % of width? Just keep it simple.
For example :
increase the width by 10px each time while the width is less than 100px.
$('.button').click(function(){
var progress = setInterval(function(){
var width=$(".bar").width();
if( width < 100) {
$('.bar').animate({ width: width + 10 });
} else {
clearInterval(progress);
}
}, 500)
});
Check fiddle here
A: Try this, simply compares bar to it's parent width:
$('.button').click(function () {
var $bar = $(".bar"),
parentW = $bar.parent().width();
var progress = setInterval(function () {
if ($bar.width() < parentW) {
$bar.animate({
width: '+=10%'
});
} else {
clearInterval(progress);
}
}, 500)
});
The problem with the approach you had is that css('width') returns the width as a string including px, and comparing that to the string 100% wasn't working.
Easy check to see what it looks like:
console.log($bar.css('width'))
Note that jQuery's .width() returns the numerical pixel width
DEMO
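To see the string-comparison pitfall in isolation, here is a plain-JavaScript sketch (no jQuery required):

```javascript
// css('width') yields a pixel string; "<" then compares character codes,
// not numbers, so the result has nothing to do with the actual width.
console.log("104px" < "100%");     // false — '4' > '0' at the third character
console.log("0px" < "100%");       // true — '0' < '1', so only tiny widths pass
console.log(parseFloat("104px"));  // 104 — extract the number before comparing
```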
<?php
namespace Payplug\Resource;
use Payplug;
/**
* A Payment refund.
*/
class Refund extends APIResource implements IVerifiableAPIResource
{
/**
* The factory method that constructs the API resource.
*
* @param array $attributes the default attributes.
*
* @return Refund The new resource.
*/
public static function fromAttributes(array $attributes)
{
$object = new Refund();
$object->initialize($attributes);
return $object;
}
/**
* Creates a refund on a payment.
*
* @param string|Payment $payment the payment id or the payment object
* @param array $data API data for refund
* @param Payplug\Payplug $payplug the client configuration
*
* @return null|Refund the refund object
* @throws Payplug\Exception\ConfigurationNotSetException
*/
public static function create($payment, array $data = null, Payplug\Payplug $payplug = null)
{
if ($payplug === null) {
$payplug = Payplug\Payplug::getDefaultConfiguration();
}
if ($payment instanceof Payment) {
$payment = $payment->id;
}
$httpClient = new Payplug\Core\HttpClient($payplug);
$response = $httpClient->post(
Payplug\Core\APIRoutes::getRoute(Payplug\Core\APIRoutes::REFUND_RESOURCE, null, array('PAYMENT_ID' => $payment)),
$data
);
return Refund::fromAttributes($response['httpResponse']);
}
/**
* Retrieves a refund object on a payment.
*
* @param string|Payment $payment the payment id or the payment object
* @param string $refundId the refund id
* @param Payplug\Payplug $payplug the client configuration
*
* @return null|Payplug\Resource\APIResource|Refund the refund object
*
* @throws Payplug\Exception\ConfigurationNotSetException
*/
public static function retrieve($payment, $refundId, Payplug\Payplug $payplug = null)
{
if ($payplug === null) {
$payplug = Payplug\Payplug::getDefaultConfiguration();
}
if ($payment instanceof Payment) {
$payment = $payment->id;
}
$httpClient = new Payplug\Core\HttpClient($payplug);
$response = $httpClient->get(
Payplug\Core\APIRoutes::getRoute(
Payplug\Core\APIRoutes::REFUND_RESOURCE, $refundId, array('PAYMENT_ID' => $payment)
)
);
return Refund::fromAttributes($response['httpResponse']);
}
/**
* Lists the last refunds of a payment.
*
* @param string|Payment $payment the payment id or the payment object
* @param Payplug\Payplug $payplug the client configuration
*
* @return null|Refund[] an array containing the refunds on success.
*
* @throws Payplug\Exception\ConfigurationNotSetException
* @throws Payplug\Exception\UnexpectedAPIResponseException
*/
public static function listRefunds($payment, Payplug\Payplug $payplug = null)
{
if ($payplug === null) {
$payplug = Payplug\Payplug::getDefaultConfiguration();
}
if ($payment instanceof Payment) {
$payment = $payment->id;
}
$httpClient = new Payplug\Core\HttpClient($payplug);
$response = $httpClient->get(
Payplug\Core\APIRoutes::getRoute(Payplug\Core\APIRoutes::REFUND_RESOURCE, null, array('PAYMENT_ID' => $payment))
);
if (!array_key_exists('data', $response['httpResponse']) || !is_array($response['httpResponse']['data'])) {
throw new Payplug\Exception\UnexpectedAPIResponseException(
"Expected API response to contain 'data' key referencing an array.",
$response['httpResponse']
);
}
$refunds = array();
foreach ($response['httpResponse']['data'] as &$refund) {
$refunds[] = Refund::fromAttributes($refund);
}
return $refunds;
}
/**
* Returns an API resource that you can trust.
*
* @param Payplug\Payplug $payplug the client configuration.
*
* @return Payplug\Resource\APIResource The consistent API resource.
*
* @throws Payplug\Exception\UndefinedAttributeException when the local resource is invalid.
*/
function getConsistentResource(Payplug\Payplug $payplug = null)
{
if (!array_key_exists('id', $this->_attributes)) {
throw new Payplug\Exception\UndefinedAttributeException('The id of the refund is not set.');
}
else if (!array_key_exists('payment_id', $this->_attributes)) {
throw new Payplug\Exception\UndefinedAttributeException('The payment_id of the refund is not set.');
}
return Payplug\Resource\Refund::retrieve($this->_attributes['payment_id'], $this->_attributes['id'], $payplug);
}
}
Q: Create a way to easily select by attributes for a complicated union file I have a polygon feature class created from a union of 3 other feature classes. These carry 5 ownership types, 3 classes of slope %, and a multi-ring road buffer (3 distances: 50, 75 and 100). So any given polygon could have 1 of 45 combinations of features: 5*3*3 = 45. I'd like to make a simple tool where a user could pick a combination of features and quickly kick out only those areas as a mask to be used in another program.
A: To do this I would use the Select tool:
Extracts features from an input feature class or input feature layer,
typically using a select or Structured Query Language (SQL) expression
and stores them in an output feature class.
You would just need to ensure that your users knew how to use the Query Builder.
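One way to spare users the Query Builder is to assemble the SQL expression from their picks in a small script; a minimal sketch (the field names OWNER, SLOPE_CLASS and BUFF_DIST are placeholders — substitute the attribute names your union actually produced):

```python
def mask_where_clause(owners, slope_classes, buffer_dists):
    """Build a Select-tool SQL expression for the chosen combinations."""
    def in_list(field, values):
        formatted = ", ".join(repr(v) for v in values)
        return "{0} IN ({1})".format(field, formatted)
    return " AND ".join([
        in_list("OWNER", owners),
        in_list("SLOPE_CLASS", slope_classes),
        in_list("BUFF_DIST", buffer_dists),
    ])

# The resulting string would be passed as the where_clause of the Select
# tool, e.g. arcpy.Select_analysis(union_fc, mask_fc, mask_where_clause(...)).
print(mask_where_clause(["State"], [2], [50, 75]))
```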
José Luis Rodríguez Peralta (; ) was a Mexican footballer who played as a defender. He took part in the 1948 Summer Olympics.
Biography
José Rodríguez was born on 5 November 1922 in the Mexican city of Tampico.
He played football as a defender. From 1945 to 1949 he played in the Mexican championship for Monterrey, and was part of the club's first-ever squad.
In 1948 he was included in the Mexico national football team for the Summer Olympics in London. He played as a defender in Mexico's only match, a 3:5 defeat to South Korea in the round of 16, and did not score.
Nothing is known about his later life.
Notes
Footballers of Mexico
Footballers at the 1948 Summer Olympics
Mexico international footballers
CF Monterrey footballers
A list of objects in the New General Catalogue (NGC) numbered 6001–7000. This astronomical catalogue mainly comprises star clusters, nebulae and galaxies.
6001 – 6100
6101 – 6200
6201 – 6300
6301 – 6400
6401 – 6500
6501 – 6600
6601 – 6700
6701 – 6800
6801 – 6900
6901 – 7000
Bibliography
The NGC/IC Project
import unittest
import unittest.mock
from GitManager.repo import implementation
class TestLocalRepository(unittest.TestCase):
def test_eq(self):
""" Checks that equality between LocalRepositories works properly """
self.assertEqual(
implementation.LocalRepository('/path/to/clone'),
implementation.LocalRepository('/path/to/clone'),
'equality between two LocalRepositories'
)
self.assertNotEqual(
implementation.LocalRepository('/home/user/example'),
implementation.LocalRepository(
'/home/user/example/.git'),
'difference between two LocalRepositories')
def test_path(self):
""" Tests that the path property works as intended """
self.assertEqual(
implementation.LocalRepository('/path/to/clone').path,
'/path/to/clone',
'path of a simple repository'
)
self.assertEqual(
implementation.LocalRepository(
'/home/user/example').path,
'/home/user/example',
'path of a simple git repository'
)
def test_str(self):
""" Tests that the str() of a remoteRepository works properly """
self.assertEqual(
str(implementation.LocalRepository(
'/path/to/clone')),
'/path/to/clone',
'str() of a simple repository'
)
self.assertEqual(
str(implementation.LocalRepository(
'/home/user/example')),
'/home/user/example',
'str() of a simple git repository'
)
def test_repr(self):
""" Tests that the repr() of a remoteRepository works properly """
self.assertEqual(
repr(implementation.LocalRepository(
'/path/to/clone')),
'<LocalRepository /path/to/clone>',
'str() of a simple repository'
)
self.assertEqual(
repr(implementation.LocalRepository(
'/home/user/example')),
'<LocalRepository /home/user/example>',
'repr() of a simple git repository'
)
@unittest.mock.patch('GitManager.utils.run.GitRun')
def test_remotes(self, run_gitrun: unittest.mock.Mock):
""" checks that remotes properly works as intended """
# create a repository
repo = implementation.LocalRepository('/path/to/repository')
# set the return value
run_gitrun.return_value.stdout = unittest.mock.mock_open(
read_data="origin\nupstream".encode("utf-8"))()
self.assertEqual(repo.remotes, ["origin", "upstream"], "Remotes are "
"parsed "
"properly")
run_gitrun.assert_called_with('remote', 'show', '-n',
cwd='/path/to/repository')
run_gitrun.return_value.wait.assert_called_with()
@unittest.mock.patch('GitManager.utils.run.GitRun')
def test_get_remote_url(self, run_gitrun: unittest.mock.Mock):
""" checks that get_remote_url function works as intended """
# create a repository
repo = implementation.LocalRepository('/path/to/repository')
# throw an error for the remote
run_gitrun.return_value.stdout = unittest.mock.mock_open(
read_data="fatal: No such remote 'example'\n".encode("utf-8"))()
run_gitrun.return_value.success = False
# check that an error is thrown if we look for a remote that doesn't
# exist
with self.assertRaises(ValueError):
repo.get_remote_url("example")
run_gitrun.assert_called_with('remote', 'get-url', 'example',
cwd='/path/to/repository')
# throw no error
run_gitrun.return_value.stdout = unittest.mock.mock_open(
read_data="git@example.com:example/repo\n".encode("utf-8"))()
run_gitrun.return_value.success = True
# check that we can actually get the remote url
self.assertEqual(repo.get_remote_url('origin'),
'git@example.com:example/repo', 'getting a remote '
'url')
# check that the git run has been called
run_gitrun.assert_called_with('remote', 'get-url', 'origin',
cwd='/path/to/repository')
@unittest.mock.patch('GitManager.utils.run.GitRun')
@unittest.mock.patch('os.path.isdir')
def test_exists(self, os_path_isdir: unittest.mock.Mock,
run_gitrun: unittest.mock.Mock):
""" checks that exists method makes an external call """
# create a repository
repo = implementation.LocalRepository('/path/to/repository')
# setup mocks so that the path does not exist
os_path_isdir.return_value = False
self.assertFalse(repo.exists(), 'non-existence of a repository')
os_path_isdir.assert_called_with('/path/to/repository')
run_gitrun.assert_not_called()
# setup mocks so that the path exists but the --show-toplevel fails
os_path_isdir.reset_mock()
os_path_isdir.return_value = True
run_gitrun.reset_mock()
run_gitrun.return_value.success = False
run_gitrun.return_value.stdout = unittest.mock.mock_open(
read_data="/path/to\n".encode("utf-8"))()
self.assertFalse(repo.exists(),
'non-existence of a repository when toplevel fails')
os_path_isdir.assert_called_with('/path/to/repository')
run_gitrun.assert_called_with('rev-parse', '--show-toplevel',
cwd='/path/to/repository')
run_gitrun.reset_mock()
run_gitrun.return_value.success = True
run_gitrun.return_value.stdout = unittest.mock.mock_open(
read_data="/path/to\n".encode("utf-8"))()
self.assertFalse(repo.exists(),
'non-existence of a repository when not toplevel')
os_path_isdir.assert_called_with('/path/to/repository')
run_gitrun.assert_called_with('rev-parse', '--show-toplevel',
cwd='/path/to/repository')
# setup mocks so that the path exists and is toplevel
os_path_isdir.reset_mock()
os_path_isdir.return_value = True
run_gitrun.reset_mock()
run_gitrun.return_value.success = True
run_gitrun.return_value.stdout = unittest.mock.mock_open(
read_data="/path/to/repository\n".encode("utf-8"))()
self.assertTrue(repo.exists(),
'existence of a repository when toplevel')
os_path_isdir.assert_called_with('/path/to/repository')
run_gitrun.assert_called_with('rev-parse', '--show-toplevel',
cwd='/path/to/repository')
@unittest.mock.patch('GitManager.utils.run.GitRun')
def test_ref_parse(self, run_gitrun: unittest.mock.Mock):
""" checks that ref_parse function works as intended """
# create a repository
repo = implementation.LocalRepository('/path/to/repository')
# set the return value
run_gitrun.return_value.stdout = unittest.mock.mock_open(
read_data="aaaaaa\n".encode("utf-8"))()
self.assertEqual(repo.ref_parse("master"), "aaaaaa", "parsing master "
"works properly")
run_gitrun.assert_called_with("rev-parse", "master",
cwd='/path/to/repository')
run_gitrun.return_value.wait.assert_called_with()
@unittest.mock.patch('GitManager.utils.run.GitRun')
def test_symbolic_ref(self, run_gitrun: unittest.mock.Mock):
""" checks that symbolic_ref properly works as intended """
# create a repository
repo = implementation.LocalRepository('/path/to/repository')
# set the return value
run_gitrun.return_value.stdout = unittest.mock.mock_open(
read_data="refs/heads/master\n".encode("utf-8"))()
self.assertEqual(repo.symbolic_ref("HEAD"), "refs/heads/master",
"parsing symbolic ref works properly")
run_gitrun.assert_called_with("symbolic-ref", "-q", "HEAD",
cwd='/path/to/repository')
run_gitrun.return_value.wait.assert_called_with()
@unittest.mock.patch('GitManager.utils.run.GitRun')
def test_upstream_ref(self, run_gitrun: unittest.mock.Mock):
""" checks that upstream_ref properly works as intended """
# create a repository
repo = implementation.LocalRepository('/path/to/repository')
# set the return value
run_gitrun.return_value.stdout = unittest.mock.mock_open(
read_data="origin/master\n".encode("utf-8"))()
self.assertEqual(repo.upstream_ref("refs/heads/master"),
"origin/master",
"parsing upstream ref works properly")
run_gitrun.assert_called_with("for-each-ref",
"--format=%(upstream:short)",
"refs/heads/master",
cwd='/path/to/repository')
run_gitrun.return_value.wait.assert_called_with()
@unittest.mock.patch('GitManager.utils.run.GitRun')
def test_gc(self, run_gitrun: unittest.mock.Mock):
""" checks that gc method makes an external call """
# create a repository
repo = implementation.LocalRepository('/path/to/repository')
# and make sure that the return value is True
run_gitrun.success = True
# assert that we can garbage collect
self.assertTrue(repo.gc(),
'running garbage collection on a repository')
# check that we called the fetch --all command properly
run_gitrun.assert_called_with('gc', cwd='/path/to/repository',
pipe_stderr=True, pipe_stdin=True,
pipe_stdout=True)
# reset the mock
run_gitrun.reset_mock()
run_gitrun.success = True
self.assertTrue(repo.gc('--aggressive'),
'running aggressive housekeeping on a repository')
# check that we called the fetch --all command properly
run_gitrun.assert_called_with('gc', '--aggressive',
cwd='/path/to/repository',
pipe_stderr=True, pipe_stdin=True,
pipe_stdout=True)
@unittest.mock.patch('GitManager.utils.run.GitRun')
def test_fetch(self, run_gitrun: unittest.mock.Mock):
""" checks that fetch method makes an external call """
# create a repository
repo = implementation.LocalRepository('/path/to/repository')
# and make sure that the return value is True
run_gitrun.success = True
# assert that we can fetch
self.assertTrue(repo.fetch(), 'fetching a repository')
# check that we called the fetch --all command properly
run_gitrun.assert_called_with('fetch', '--all', '--quiet',
cwd='/path/to/repository',
pipe_stderr=True, pipe_stdin=True,
pipe_stdout=True)
@unittest.mock.patch('GitManager.utils.run.GitRun')
def test_pull(self, run_gitrun: unittest.mock.Mock):
""" checks that pull method makes an external call """
# create a repository
repo = implementation.LocalRepository('/path/to/repository')
# and make sure that the return value is True
run_gitrun.success = True
# assert that we can pull
self.assertTrue(repo.pull(), 'pulling a repository')
# check that we called the pull command properly
run_gitrun.assert_called_with('pull', cwd='/path/to/repository',
pipe_stderr=True, pipe_stdin=True,
pipe_stdout=True)
@unittest.mock.patch('GitManager.utils.run.GitRun')
def test_push(self, run_gitrun: unittest.mock.Mock):
""" checks that push method makes an external call """
# create a repository
repo = implementation.LocalRepository('/path/to/repository')
# and make sure that the return value is True
run_gitrun.success = True
# assert that we can push
self.assertTrue(repo.push(), 'push a repository')
# check that we called the push command properly
run_gitrun.assert_called_with('push', cwd='/path/to/repository',
pipe_stderr=True, pipe_stdin=True,
pipe_stdout=True)
@unittest.mock.patch('GitManager.utils.run.GitRun')
def test_local_status(self, run_gitrun: unittest.mock.Mock):
""" checks that local_status method makes an external call """
# create a repository
repo = implementation.LocalRepository('/path/to/repository')
# mock the exists function
repo.exists = unittest.mock.MagicMock(return_value=False)
# local status and non-existence
self.assertEqual(repo.local_status(), None, "local_status of "
"non-existing "
"repository")
# reset the mock and change the return value to True
repo.exists.reset_mock()
repo.exists.return_value = True
# setup the return value of the git run
run_gitrun.return_value.stdout = unittest.mock.mock_open(
read_data="".encode("utf-8"))()
# check that the local_status did print correctly
self.assertEqual(repo.local_status(), "", "Reading status works "
"properly")
# check that we called the status command
run_gitrun.assert_called_with('status', '--porcelain',
cwd='/path/to/repository')
@unittest.mock.patch('GitManager.utils.run.GitRun')
@unittest.mock.patch(
'GitManager.repo.implementation.LocalRepository.ref_parse',
side_effect=["aaaaaa", "bbbbbb", "aaaaaa", "bbbbbb", "aaaaaa",
"bbbbbb", "aaaaaa", "bbbbbb", "aaaaaa", "bbbbbb"]
)
@unittest.mock.patch(
'GitManager.repo.implementation.LocalRepository.upstream_ref',
side_effect=["origin/master", "origin/master", "origin/master",
"origin/master", "origin/master"]
)
@unittest.mock.patch(
'GitManager.repo.implementation.LocalRepository.symbolic_ref',
side_effect=["refs/heads/master", "refs/heads/master",
"refs/heads/master", "refs/heads/master",
"refs/heads/master"]
)
@unittest.mock.patch(
'GitManager.repo.implementation.LocalRepository.exists'
)
def test_remote_status(self,
LocalRepository_exists: unittest.mock.Mock,
LocalRepository_symbolic_ref: unittest.mock.Mock,
LocalRepository_upstream_ref: unittest.mock.Mock,
LocalRepository_ref_parse: unittest.mock.Mock,
run_gitrun: unittest.mock.Mock):
""" Tests that the remote_status command works properly """
# create a repository
repo = implementation.LocalRepository('/path/to/repository')
# if we want to update, we should have called with 'remote' 'update'
run_gitrun.return_value.success = False
self.assertEqual(repo.remote_status(update=True), None)
run_gitrun.assert_called_with('remote', 'update',
cwd='/path/to/repository')
# reset all the mocks
LocalRepository_exists.reset_mock()
LocalRepository_symbolic_ref.reset_mock()
LocalRepository_upstream_ref.reset_mock()
LocalRepository_ref_parse.reset_mock()
run_gitrun.reset_mock()
run_gitrun.return_value.success = True
# merge base is aaaaaa (local)
LocalRepository_exists.return_value = False
self.assertEqual(repo.remote_status(), None)
# reset all the mocks
LocalRepository_exists.reset_mock()
LocalRepository_symbolic_ref.reset_mock()
LocalRepository_upstream_ref.reset_mock()
LocalRepository_ref_parse.reset_mock()
run_gitrun.reset_mock()
# merge base is local
LocalRepository_exists.return_value = True
run_gitrun.return_value.stdout = unittest.mock.mock_open(
read_data="aaaaaa\n".encode("utf-8"))()
self.assertEqual(repo.remote_status(update=False),
implementation.RemoteStatus.REMOTE_NEWER)
run_gitrun.assert_called_with("merge-base", "aaaaaa", "bbbbbb",
cwd="/path/to/repository")
# reset all the mocks
LocalRepository_exists.reset_mock()
LocalRepository_symbolic_ref.reset_mock()
LocalRepository_upstream_ref.reset_mock()
LocalRepository_ref_parse.reset_mock()
run_gitrun.reset_mock()
# merge base is local
LocalRepository_exists.return_value = True
run_gitrun.return_value.stdout = unittest.mock.mock_open(
read_data="bbbbbb\n".encode("utf-8"))()
self.assertEqual(repo.remote_status(),
implementation.RemoteStatus.LOCAL_NEWER)
run_gitrun.assert_called_with("merge-base", "aaaaaa", "bbbbbb",
cwd="/path/to/repository")
# reset all the mocks
LocalRepository_exists.reset_mock()
LocalRepository_symbolic_ref.reset_mock()
LocalRepository_upstream_ref.reset_mock()
LocalRepository_ref_parse.reset_mock()
run_gitrun.reset_mock()
# merge base is ????
LocalRepository_exists.return_value = True
run_gitrun.return_value.stdout = unittest.mock.mock_open(
read_data="cccccc\n".encode("utf-8"))()
self.assertEqual(repo.remote_status(update=False),
implementation.RemoteStatus.DIVERGENCE)
run_gitrun.assert_called_with("merge-base", "aaaaaa", "bbbbbb",
cwd="/path/to/repository")
# reset all the mocks
LocalRepository_exists.reset_mock()
LocalRepository_symbolic_ref.reset_mock()
LocalRepository_upstream_ref.reset_mock()
LocalRepository_ref_parse.reset_mock()
run_gitrun.reset_mock()
# both refs are equal
LocalRepository_ref_parse.side_effect = ["aaaaaa", "aaaaaa"]
LocalRepository_exists.return_value = True
run_gitrun.return_value.stdout = unittest.mock.mock_open(
read_data="aaaaaa\n".encode("utf-8"))()
self.assertEqual(repo.remote_status(update=False),
implementation.RemoteStatus.UP_TO_DATE)
run_gitrun.assert_called_with("merge-base", "aaaaaa", "aaaaaa",
cwd="/path/to/repository")
class TestRemoteRepository(unittest.TestCase):
""" Tests that implementation works properly """
def test_eq(self):
""" Checks that equality between RemoteRepositories works properly """
self.assertEqual(
implementation.RemoteRepository('git@github.com:hello/world.git'),
implementation.RemoteRepository('git@github.com:hello/world.git'),
'equality between two RemoteRepositories'
)
self.assertNotEqual(
implementation.RemoteRepository('git@github.com:hello/world.git'),
implementation.RemoteRepository(
'https://github.com/hello/world.git'),
'difference between two RemoteRepositories'
)
def test_url(self):
""" Tests that the URL property works as intended """
self.assertEqual(
implementation.RemoteRepository(
'git@github.com:hello/world.git').url,
'git@github.com:hello/world.git',
'URL of a simple repository'
)
self.assertEqual(
implementation.RemoteRepository(
'https://github.com/hello/world.git').url,
'https://github.com/hello/world.git',
'URL of a simple git repository'
)
def test_matches(self):
""" Tests that the matches() of a remoteRepository works properly """
repo = implementation.RemoteRepository(
'git@github.com:hello/world.git')
self.assertTrue(repo.matches('world'), 'matching by a simple name')
self.assertTrue(repo.matches('hello/world'), 'matching by path')
self.assertTrue(repo.matches('w*'), 'matching by simple pattern')
self.assertTrue(repo.matches('h*/w*'), 'matching by complex pattern')
self.assertTrue(repo.matches('github.com/hello'),
'matching at the beginning')
self.assertTrue(repo.matches('hello'), 'matching in the middle')
self.assertTrue(repo.matches('git@github.com:hello/world.git'),
'matching full url')
self.assertFalse(repo.matches('wirld'), 'not matching non-pattern')
self.assertFalse(repo.matches('hello/wirld'),
'not matching non-pattern')
self.assertFalse(repo.matches('*/wirld'), 'not matching non-pattern')
self.assertFalse(repo.matches('git@github.com:halo/world.git'),
'not matching full url')
def test_str(self):
""" Tests that the str() of a remoteRepository works properly """
self.assertEqual(
str(implementation.RemoteRepository(
'git@github.com:hello/world.git')),
'git@github.com:hello/world.git',
'str() of a simple repository'
)
self.assertEqual(
str(implementation.RemoteRepository(
'https://github.com/hello/world.git')),
'https://github.com/hello/world.git',
'str() of a simple git repository'
)
def test_repr(self):
""" Tests that the repr() of a remoteRepository works properly """
self.assertEqual(
repr(implementation.RemoteRepository(
'git@github.com:hello/world.git')),
'<RemoteRepository git@github.com:hello/world.git>',
'str() of a simple repository'
)
self.assertEqual(
repr(implementation.RemoteRepository(
'https://github.com/hello/world.git')),
'<RemoteRepository https://github.com/hello/world.git>',
'repr() of a simple git repository'
)
@unittest.mock.patch('GitManager.utils.run.GitRun')
def test_exists(self, run_gitrun: unittest.mock.Mock):
""" checks that exists method makes an external call """
run_gitrun.return_value.success = True
# checking for existence should make an external call
self.assertTrue(implementation.RemoteRepository(
'git@github.com:hello/world.git').exists(),
'successfully checks existence using an external call')
run_gitrun.assert_called_with('ls-remote', '--exit-code',
'git@github.com:hello/world.git')
@unittest.mock.patch('GitManager.utils.run.GitRun')
def test_clone(self, run_gitrun: unittest.mock.Mock):
""" checks that clone method makes an external call """
run_gitrun.return_value.success = True
remote = implementation.RemoteRepository(
'git@github.com:hello/world.git')
local = implementation.LocalRepository('/path/to/clone')
# checking for existence should make an external call
self.assertTrue(remote.clone(local), 'successfully clones a '
'repository')
run_gitrun.assert_called_with('clone',
'git@github.com:hello/world.git',
'/path/to/clone', pipe_stderr=True,
pipe_stdin=True, pipe_stdout=True)
def test_components(self):
""" Checks that the components method works properly"""
def assert_components(url, components):
return self.assertEqual(
implementation.RemoteRepository(url).components(), components)
# git@github.com url
g_h_w_c = ['github.com', 'hello', 'world']
assert_components('git@github.com:hello/world.git', g_h_w_c)
assert_components('git@github.com:hello/world', g_h_w_c)
assert_components('git@github.com:hello/world/', g_h_w_c)
assert_components('git@github.com:hello/world//', g_h_w_c)
assert_components('ssh://git@github.com/hello/world.git', g_h_w_c)
assert_components('ssh://git@github.com/hello/world', g_h_w_c)
assert_components('ssh://git@github.com/hello/world/', g_h_w_c)
assert_components('ssh://git@github.com/hello/world//', g_h_w_c)
# https://github.com/user/repo
assert_components('https://github.com/hello/world.git', g_h_w_c)
assert_components('https://github.com:hello/world', g_h_w_c)
assert_components('https://github.com:hello/world/', g_h_w_c)
assert_components('https://github.com:hello/world//', g_h_w_c)
# user@server.com url
s_c_u_r = ['server.com', 'user', 'repository']
assert_components('user@server.com:repository', s_c_u_r)
assert_components('user@server.com:repository/', s_c_u_r)
assert_components('user@server.com:repository//', s_c_u_r)
assert_components('user@server.com:repository.git', s_c_u_r)
assert_components('ssh://user@server.com/repository', s_c_u_r)
assert_components('ssh://user@server.com/repository/', s_c_u_r)
assert_components('ssh://user@server.com/repository//', s_c_u_r)
assert_components('ssh://user@server.com/repository.git', s_c_u_r)
def test_humanish_part(self):
""" Checks that the get_humanish_part method works properly"""
self.assertEqual(
implementation.RemoteRepository(
'git@github.com:hello/world.git').humanish_part(),
'world')
self.assertEqual(
implementation.RemoteRepository(
'git@github.com:hello/world').humanish_part(),
'world'
)
self.assertEqual(
implementation.RemoteRepository(
'git@github.com:hello/world/').humanish_part(),
'world'
)
self.assertEqual(
implementation.RemoteRepository(
'git@github.com:hello/world//').humanish_part(),
'world'
)
\subsection{Notation}
$\mathcal{S}^{d}_+$ is the set of all positive definite matrices.
In the scope of this work, we focus only on Gaussian distributions, which belong to the family of parameterized probability distributions $z_{h,\mathbf{a}, \mathbf{A}}$ having a location vector $\mathbf{a} \in \mathbb{R}^d$ that represents the shift of the distribution, a scale parameter $\mathbf{A} \in \mathcal{S}^{d}_+$ that represents the statistical dispersion of the distribution, and a characteristic generator function $h$. This is the same hypothesis that has been used throughout the thesis. For Gaussian distributions specifically, the scale parameter coincides with the covariance matrix: $var(z_{h,\mathbf{a}, \mathbf{A}}) = \mathbf{A}$. From now on, we denote Gaussian
distributions (or embeddings) as $z_{h,\mathbf{a}, \mathbf{A}} = \mathcal{N}(\mathbf{a}, \mathbf{A})$.
Finally, we write $\Tr$ for the trace operator, i.e.\ the sum of the diagonal elements of a matrix.
\subsection{Mathematical Background}
We need to explicitly describe a few mathematical concepts useful for understanding our hypothesis.
First, as briefly mentioned in \Cref{sec:intro}, one key element in our approach is the notion of the centroid of a set from eq. \eqref{eq:centroid_setext}. An (aggregation) function that takes a set as input must be invariant to the order of the objects in the set. Good candidate functions are the sum or the average; hence the centroid is a good representation of a given set.
Then, we need to go from the point-vector representation of an item to a distributional representation. We choose to focus on Gaussian distributions, particularly diagonal Gaussian distributions. To do this, we propose to use a set encoder that takes a vector as input, namely the centroid of the set, and outputs a location vector $\mu$ and the diagonal of a covariance matrix $d_{\Sigma}$. These concepts have been fully described in \cref{sec:set_eloe_chap}.
Finally, we need to describe the scoring function that takes as input two Gaussian representations of sets and assesses which one is the better representation. As stated earlier, the best set is the one that has the smallest dispersion. We propose to use the trace operator $\Tr$ for scoring the sets. This is derived from optimal transport theory, in particular the Wasserstein distance, which has been fully explained in \cref{para:wasserstein}.
We recall the Wasserstein distance derived from the appropriate simplifications, relying on the \textit{squared Bures metric} and the \textit{squared Hellinger metric}:
\begin{equation}
W^2_2(\alpha, \beta) = \|\mathbf{a}-\mathbf{b}\|^2 + \|\sqrt{\mathbf{d_A}} - \sqrt{\mathbf{d_B}}\|^2
\label{eq:wasserstein_ese}
\end{equation}
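For diagonal Gaussians this distance is straightforward to compute. The sketch below (NumPy, illustrative only) takes the two location vectors and the diagonals of the two covariance matrices directly:

```python
import numpy as np

def w2_squared(a, d_A, b, d_B):
    """Simplified squared 2-Wasserstein distance between diagonal Gaussians
    N(a, diag(d_A)) and N(b, diag(d_B)):
    ||a - b||^2 + ||sqrt(d_A) - sqrt(d_B)||^2."""
    a, d_A, b, d_B = map(np.asarray, (a, d_A, b, d_B))
    return np.sum((a - b) ** 2) + np.sum((np.sqrt(d_A) - np.sqrt(d_B)) ** 2)
```

The first term compares the locations, the second the dispersions; both vanish only when the two Gaussians coincide.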
\section{GausSetExpander}
In the following section, we present the proposed approach for the Entity Set Expansion task.
We propose an iterative approach that takes as input a small seed set and two candidate terms, and outputs two scores that indicate the better alternative. In a nutshell, our main hypothesis is that, given two candidate entities for expanding the seed set, the better candidate will ``fit'' the input set better. We choose to represent this fit by the dispersion that the addition of a new term causes to an existing set. A well-suited mathematical notion for representing dispersion around a center is the covariance matrix of a Gaussian distribution. That is the first key element of GausSetExpander. \Cref{algo:gausset_algo} describes our approach.
\begin{figure}[h]
\centering
\includegraphics[scale=0.25]{images/model1.png}
\caption{Overview of the set encoder that takes as input a set of terms and outputs a tuple of mean vector and a covariance matrix for the Gaussian distribution.}
\label{fig:model_gaussetext}
\end{figure}
\paragraph{Set Encoder}
Given a seed set, the first step is to encode it as a tuple of a location vector and a covariance matrix, in order to obtain the dispersion before adding the candidate terms for the expansion. For this,
we represent the element-wise embedding function $\phi(.)$ as a deep neural network, as illustrated in \cref{fig:model_gaussetext}.
This network takes a set as input, very similarly to the model proposed in \cref{sec:set_eloe_chap}. We again choose to focus only on diagonal covariance matrices for the sake of simplicity.
A first deep encoder $\phi_\theta(\cdot)$, namely a 2-layer MLP with ReLU, maps these inputs into $d$-dimensional outputs. These are then aggregated into the location $\mu_\theta(\cdot)$ and fed to produce the variance $\Sigma_\theta(\cdot)$, a function which is again represented by a deep feed-forward network.
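The encoding pipeline just described can be sketched as follows. Layer sizes and weights are illustrative and randomly initialized here; the real $\phi_\theta$, $\mu_\theta$ and $\Sigma_\theta$ are learned, so this is a shape-level sketch rather than the thesis's actual network.

```python
import numpy as np

# Sketch of the set encoder: element embeddings are averaged into the
# centroid, then a shared ReLU layer feeds two heads producing the location
# vector mu and the log-variance, exponentiated into a positive diagonal
# covariance (matching Sigma_0 = exp(0.5 * phi_Sigma(c(S_0)))).

rng = np.random.default_rng(0)
d, hidden = 8, 16                                   # illustrative sizes

W1, b1 = rng.normal(size=(d, hidden)), np.zeros(hidden)
W_mu, b_mu = rng.normal(size=(hidden, d)), np.zeros(d)
W_sig, b_sig = rng.normal(size=(hidden, d)), np.zeros(d)

def encode_set(E):
    """E: (n, d) array of element embeddings -> (mu, diag_Sigma)."""
    c = E.mean(axis=0)                              # centroid c(S)
    h = np.maximum(c @ W1 + b1, 0.0)                # ReLU layer
    mu = h @ W_mu + b_mu
    diag_Sigma = np.exp(0.5 * (h @ W_sig + b_sig))  # positive by construction
    return mu, diag_Sigma

mu, diag_Sigma = encode_set(rng.normal(size=(5, d)))
```

The exponential head guarantees a valid (strictly positive) covariance diagonal regardless of the network's raw output.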
\begin{algorithm}[h]
\SetKwInOut{KwIn}{Input}
\SetKwInOut{KwOut}{Output}
\KwIn{corpus $D$, vocabulary $V$, set $S_0$}
\For{$t \leftarrow 0$ \KwTo $|D|$}{
Get i, j, $c_i$, $c_j$ from $D$ \\
Embed i and j to $\mathbf{e_i} \in \mathbb{R}^d$, $\mathbf{e_j} \in \mathbb{R}^d$ \\
Embed terms in $S_0$ to $[\mathbf{e_0}, ..., \mathbf{e_n}]$ \\
Encode $S_0 \leftarrow \mathcal{N}(\mu_0, \Sigma_0)$: \newline
$c(S_0) = \frac{1}{n} \sum_i e_i$ \newline
$\mu_0 = \phi_\mu(c(S_0))$ \newline
$\Sigma_0 = e^{\frac{1}{2} \phi_\Sigma(c(S_0))}$ \\
Append $\mathbf{i}$ and $\mathbf{j}$ to $S_0$: \newline
$S_t^{'} = [S_0, e_i]$ and $S_t^{''} = [S_0, e_j]$ \\
Compute cosine similarity: \newline
$sim_1 = \cos(c(S_0), c(c_i))$ and $sim_2 =\cos(c(S_0), c(c_j))$ \\
\eIf{$sim_1$ > $sim_2$}{$l = 1$}{$l = -1$}
Encode $S_t^{'}$ and $S_t^{''}$: \newline
$S_t^{'} \leftarrow \mathcal{N}(\mu_t^{'}, \Sigma_t^{'})$ \newline
$S_t^{''} \leftarrow \mathcal{N}(\mu_t^{''}, \Sigma_t^{''})$ \\
Compute the score: \newline
$score(i|S_0) = W((\mu_0, \Sigma_0), (\mu_i, \Sigma_i))$ \newline
$score(j|S_0) = W((\mu_0, \Sigma_0), (\mu_j, \Sigma_j))$ \newline
$l(i,j|S_0) = \max(0, s(i|S_0) - s(j|S_0) )$
}
\caption{GausSetExpander}
\label{algo:gausset_algo}
\end{algorithm}
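The core comparison inside the loop of \Cref{algo:gausset_algo} can be sketched end-to-end as below. The encoder here is a deliberately crude stand-in (centroid for the location, per-dimension variance for the covariance diagonal) rather than the learned network, just to make the ``smaller added dispersion wins'' logic concrete:

```python
import numpy as np

def encode(E):
    """Stand-in set encoder: centroid and per-dimension spread."""
    return E.mean(axis=0), E.var(axis=0) + 1e-6     # keep strictly positive

def w2_squared(p, q):
    """Simplified squared 2-Wasserstein distance, diagonal case."""
    (a, dA), (b, dB) = p, q
    return np.sum((a - b) ** 2) + np.sum((np.sqrt(dA) - np.sqrt(dB)) ** 2)

def better_candidate(S0, e_i, e_j):
    """Return 'i' if appending e_i perturbs the seed set's Gaussian less."""
    z0 = encode(S0)
    s_i = w2_squared(z0, encode(np.vstack([S0, e_i])))
    s_j = w2_squared(z0, encode(np.vstack([S0, e_j])))
    return "i" if s_i < s_j else "j"

rng = np.random.default_rng(1)
S0 = rng.normal(size=(4, 3))          # toy seed set of 4 embeddings
inlier = S0.mean(axis=0)              # close to the set: small perturbation
outlier = S0.mean(axis=0) + 10.0      # far from the set: large perturbation
```

A candidate at the set's centroid barely moves the distribution, while a distant candidate shifts both the location and the dispersion, so the inlier wins the comparison.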
\paragraph{Scoring function}
After describing the steps to obtain the Gaussian representation of the sets, we illustrate the scoring process. In practice, most of the proposed methods for solving the ESE problem return a (top-k) ranking of the vocabulary rather than a fixed set; the evaluation is then done on the returned ranking. Ideally, all terms that belong to the semantic class identified by the seed set should be ranked higher.
From the Wasserstein distance in eq. \eqref{eq:wasserstein_ese}, we derive the scoring function for two given expanded sets encoded as Gaussian distributions:
\begin{equation}
score(S_i, S_j) = W((\mu_i, \Sigma_i), (\mu_j, \Sigma_j))
\label{eq:score}
\end{equation}
This scoring function can be interpreted as point-wise mutual information between the candidate item and the set to be expanded.
We stated earlier that GausSetExpander proceeds in stages. It first encodes the seed set $S_0$ as a tuple of vectors $\mathbf{\mu}_0$ and $\mathbf{\sigma}_0$.
Then, it appends each candidate to the seed set $S_0$ to obtain two new sets $S'$ and $S''$, which are transformed into Gaussian distributions as well.
It is worth mentioning that the weights of the set encoder are shared among candidates. Finally, the scoring function proceeds as follows:
\begin{gather}
score(i|S_0) = W((\mu_0, \Sigma_0), (\mu', \Sigma')) \\
score(j|S_0) = W((\mu_0, \Sigma_0), (\mu'', \Sigma''))
\end{gather}
where $(\mu_0, \Sigma_0)$ are the parameters of the Gaussian encoding of $S_0$, while $(\mu', \Sigma')$ and $(\mu'', \Sigma'')$ are the parameters for $[S_0, i]$ and $[S_0, j]$, $[\cdot, \cdot]$ denoting concatenation.
\begin{figure*}[h]
\centering
\subfloat[The expanded set $S'$ shows a small increase in dispersion after adding the entity $i$ to the seed set $S_0$.]{\includegraphics[scale=0.15]{images/gaus_exp_pos.png}\label{fig:gaus_ext_pos}}
\hspace{3cm}%
\subfloat[The expanded set $S''$ shows a greater increase in dispersion after adding the entity $j$ to the seed set $S_0$.]{\includegraphics[scale=0.15]{images/gaus_exp_neg.png}\label{fig:gaus_ext_neg}}
\caption{Illustration of the hypothesis behind GausSetExpander. Given two candidate entities for the seed set $S_0$, the better candidate is the one that induces the smaller increase in the dispersion of the resulting set.}
\label{fig:mesh1}
\end{figure*}
\paragraph{Loss function}
For learning the score, we utilize the large-margin classification loss:
\begin{equation}
l(i,j|S_0) = \max(0, s(i|S_0) - s(j|S_0) + \Delta(i,j) )
\end{equation}
Loosely speaking, this loss function ensures that $s(i|S_0) + \Delta(i,j) \le s(j|S_0)$ whenever $i$ should be preferred over $j$ for expanding $S_0$: since the score measures added dispersion, the preferred candidate must obtain the smaller score, by a margin of at least $\Delta(i,j)$.
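As a minimal numeric sketch, with $\Delta$ treated here as a constant margin hyperparameter (an assumption; the thesis leaves its form open):

```python
# Large-margin classification loss for a pair of candidate scores.
# Zero iff the preferred candidate's score undercuts the other's by delta.

def margin_loss(s_i, s_j, delta=1.0):
    return max(0.0, s_i - s_j + delta)
```

When the preferred candidate $i$ already scores well below $j$, the loss is zero and no gradient flows; otherwise the gap to the margin is penalized linearly.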
\paragraph{Weak supervision} By definition, the ESE task is challenging because of the lack of proper supervision in the form of ground-truth labels. Several works in the area rely on pre-trained language models or pre-trained embedding models to deliver the semantic, syntactic and background knowledge needed to provide weak labels. We proceed in the same manner, and use GloVe \cite{Pennington2014GloVe:Representation} for encoding the terms extracted from the corpus. Moreover, we assume that the seed terms are part of the vocabulary. In order to generate weak labels for training the scoring function, we leverage the distributional hypothesis of language models, which states that similar words appear in similar contexts. For this reason, given two candidate terms $i$ and $j$, we also extract their contexts, $c_i$ and $c_j$ respectively. Then we use the simple cosine similarity to induce a weak label $l$:
\begin{equation}
l(S_0, c_i, c_j) = \left\{
\begin{array}{ll}
+1 & \mbox{if } R(c_i|S_0) > R(c_j|S_0) \\
-1 & \mbox{if } R(c_i|S_0) < R(c_j|S_0)
\end{array}
\right.
\label{eq:label}
\end{equation}
The function $R$ is defined as follows:
\begin{equation}
R(c, S_0) = \max_{x \in c,\, s \in S_0} \cos(x, s)
\end{equation}
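As an illustrative sketch of this weak-labeling step (our own code, not the authors' implementation; function names are hypothetical):

```python
import numpy as np

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def relevance(context_vecs, seed_vecs):
    # R(c | S_0): best cosine match between any embedded context
    # word and any embedded seed term.
    return max(cos_sim(x, s) for x in context_vecs for s in seed_vecs)

def weak_label(seed_vecs, ctx_i, ctx_j):
    # +1 if candidate i's context is closer to the seed set, else -1.
    return 1 if relevance(ctx_i, seed_vecs) > relevance(ctx_j, seed_vecs) else -1

seeds = [np.array([1.0, 0.0])]
ctx_i = [np.array([0.9, 0.1])]   # context similar to the seeds
ctx_j = [np.array([0.0, 1.0])]   # orthogonal context
print(weak_label(seeds, ctx_i, ctx_j))  # 1
```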
\begin{algorithm}[h]
\SetKwInOut{KwIn}{Input}
\SetKwInOut{KwOut}{Output}
\KwIn{corpus $D$, vocabulary $V$, set $S_0$}
\For{$t \leftarrow 0$ \KwTo $|D|$}{
Get i, j, $c_i$, $c_j$ from $D$ \\
Embed i and j to $\mathbf{e_i} \in \mathbb{R}^d$, $\mathbf{e_j} \in \mathbb{R}^d$ \\
Embed terms in $S_0$ to $[\mathbf{e_0}, ..., \mathbf{e_n}]$ \\
Encode $S_0$ as its centroid: \newline
$c(S_0) = \frac{1}{n} \sum_i \mathbf{e_i}$ \\
Append $\mathbf{e_i}$ and $\mathbf{e_j}$ to $S_0$: \newline
$S_t^{'} = [S_0, \mathbf{e_i}]$ and $S_t^{''} = [S_0, \mathbf{e_j}]$ \\
Compute cosine similarity: \newline
$sim_1 = \cos(c(S_0), c(c_i))$ and $sim_2 =\cos(c(S_0), c(c_j))$ \\
\eIf{$sim_1$ > $sim_2$}{$l = 1$}{$l = -1$}
Encode $S_t^{'}$ and $S_t^{''}$ as their centroids: \newline
$S_t^{'} \leftarrow c(S_t^{'})$ \newline
$S_t^{''} \leftarrow c(S_t^{''})$ \\
Compute the scores: \newline
$score(i|S_0) = \ell_2(c(S_0) - c(S_t^{'}))$ \newline
$score(j|S_0) = \ell_2(c(S_0) - c(S_t^{''}))$ \newline
Compute the loss: \newline
$l(i,j|S_0) = \max(0, \, l \cdot (score(i|S_0) - score(j|S_0)))$
}
\caption{CentroidSetExpander}
\label{algo:centroset_algo}
\end{algorithm}
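As an illustrative sketch of one iteration of this loop (our own code, not the authors' implementation; the use of the centroid shift as a dispersion proxy and the way the weak label enters the hinge are our assumptions):

```python
import numpy as np

def centroid(vecs):
    return np.mean(vecs, axis=0)

def training_step(seed, e_i, e_j, sim_i, sim_j, margin=0.1):
    # seed: list of seed embeddings; e_i, e_j: candidate embeddings;
    # sim_i, sim_j: cosine similarities of the candidates' contexts
    # to the seed centroid (the weak-supervision signal).
    c0 = centroid(seed)
    # Each expanded set is encoded by its centroid.
    c_i = centroid(seed + [e_i])
    c_j = centroid(seed + [e_j])
    label = 1 if sim_i > sim_j else -1
    # Score: how far appending the candidate shifts the centroid,
    # a cheap proxy for the increase in the set's dispersion.
    score_i = np.linalg.norm(c0 - c_i)
    score_j = np.linalg.norm(c0 - c_j)
    # Hinge loss folding in the weak label.
    return max(0.0, label * (score_i - score_j) + margin)

seed = [np.array([0.0, 0.0]), np.array([2.0, 0.0])]
loss = training_step(seed, np.array([1.0, 0.0]), np.array([4.0, 0.0]),
                     sim_i=0.9, sim_j=0.1)
print(loss)  # 0.0: candidate i does not move the centroid at all
```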
\section{Introduction}
\label{sec:intro}
The Entity Set Expansion (ESE) task aims at expanding a small seed set, e.g., \{Paris, Berlin\}, to a larger set of entities that belong to the same semantic class, in this example $\textit{Capitals}$. Loosely speaking, the goal is to find all other entities in a given corpus that complete the original set of seed entities. From a theoretical point of view, the ESE task can be seen as a problem of generalization from few examples. The steps necessary to solve this problem are: (1) identifying the semantic class from the given seed set; (2) identifying similar examples from a large pool of items that fit the semantic class. This task is useful to several downstream applications such as question answering \cite{wang2008automatic}, taxonomy construction \cite{shen2018hiexpan}, relation extraction \cite{lang2013graph} and query suggestion \cite{cao2008context}.
In this paper, we propose an approach to the ESE task based on pre-trained embedding models and optimal transport techniques. It has been widely shown that pre-trained embedding models encode semantic, syntactic, and background knowledge that can facilitate transfer learning.
One of the major challenges of ESE is the limited amount (or complete lack) of supervision. In fact, the seed set is generally too small for fine-tuning, and sometimes the ground-truth semantic class is an open set.
From the literature, we can identify two main ways of solving the ESE task: pattern-based approaches and distributional approaches. The former aim at mining revealing textual patterns in the corpus that signal the semantic class and extracting the correct entities from these patterns. The latter rely on the assumption that similar words appear in similar contexts: each term in the vocabulary is represented as an embedding vector that summarizes all the contexts the term appears in within a large corpus, and terms whose vectors are similar to those of the seed entities are retrieved. One of the main critiques of these approaches is that they consider all occurrences of a term in the corpus when calculating its representation, including many contexts that are irrelevant or uninformative for the target concept, which introduces noise.
We propose an algorithm that is closer to the distributional approach, although with some notable differences. \emph{GausSetExpander} proceeds iteratively over all terms in a vocabulary $V$ extracted from a corpus $D$. Given an initial seed set $S_0$ and two candidate terms embedded with a pre-trained embedding model, we aim at finding the term that better completes $S_0$. To do so, we produce two different expanded sets by simply concatenating each of the two candidate terms to the seed set, and we evaluate which expansion is better.
The evaluation of the resulting sets relies on techniques related to optimal transport, elliptical embeddings, and set clustering, which are concepts treated in this thesis.
In a nutshell, we leverage the fact that one of the most common statistics of a set $S$ is its centroid $c(S) \in \mathbb{R}^d$ , represented as:
\begin{equation}
c(S) = \frac{1}{|S|}\sum_{i \in S} x_i
\label{eq:centroid_setext}
\end{equation}
This implies that it is possible to approximate a set of vectors by its centroid. Another key element of our approach comes from the literature on Gaussian embeddings, which have been shown to provide a richer representation of items in the latent space. In fact, a Gaussian embedding is characterized by two parameters: the mean vector, which represents the location in the latent space, and the covariance matrix, which encodes the dispersion around the location vector, that is to say, the uncertainty of the representation.
We hypothesize that, given an original seed set and two candidate items to expand it, the better entity is the one that causes the smaller increase in the dispersion of the set. This hypothesis is the key element of GausSetExpander.
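As a toy illustration of this hypothesis (our own sketch, with the dispersion of a set measured as the trace of its empirical covariance under a Gaussian fit; names are hypothetical):

```python
import numpy as np

def dispersion(vecs):
    # Dispersion of a set under a Gaussian fit: the total variance,
    # i.e., the trace of the empirical covariance matrix.
    X = np.stack(vecs)
    return float(np.trace(np.cov(X, rowvar=False)))

def better_candidate(seed, e_i, e_j):
    # Prefer the candidate whose addition increases the
    # seed set's dispersion the least.
    base = dispersion(seed)
    inc_i = dispersion(seed + [e_i]) - base
    inc_j = dispersion(seed + [e_j]) - base
    return "i" if inc_i < inc_j else "j"

seed = [np.array([0.0, 0.0]), np.array([0.1, 0.0]), np.array([0.0, 0.1])]
# A nearby point barely spreads the set; a distant one inflates it.
print(better_candidate(seed, np.array([0.05, 0.05]), np.array([5.0, 5.0])))  # i
```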
Additionally, ESE is a challenging task because of the lack of available labels.
For this reason, the pre-trained embedding is a key element: at each iteration we produce a weak label based on the cosine similarity between the centroid of the seed set and each candidate term. Finally, we rank the scores in order to identify candidate terms from the vocabulary to generate the expanded set.
To summarize, in this study we propose an iterative approach based on Gaussian representation for the ESE task to expand a seed set. We conduct experiments to verify our hypotheses and show the effectiveness of GausSetExpander.
\section{Problem Formulation}
\input{problem.tex}
\section{Approach}
\input{approach.tex}
\section{Experiments}
\input{experiments.tex}
\section{Related Work}
\input{related_work}
\section{Conclusion}
We introduce an iterative, distributional approach for the task of Entity Set Expansion. Our method is based on encoding sets of vectors as Gaussian distributions. These probabilistic representations are learned from the centroids of the sets and are useful because they allow us to represent each set as a tuple of location and dispersion. In fact, our main hypothesis states that, given two candidate entities for a set, the best candidate is the entity that causes the smaller increase in the dispersion, represented by the covariance matrix. Finally, we use a scoring function based on the Wasserstein distance. The quantitative evaluation on benchmark datasets demonstrates the effectiveness of our approach.
\section*{Acknowledgments}
This work has been supported by the German Research Foundation as part of the Research Training
Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant No. GRK 1994/1.
package com.ciphertechsolutions.io.ewf;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.zip.Adler32;
import com.ciphertechsolutions.io.device.Device;
/**
* A class to represent the Volume section in the Encase6 format.
*/
public class VolumeSection extends Section {
static final int ADDITIONAL_SECTION_SIZE = 1052;
//TODO: Enum?
//0x00 => removable disk
// 0x01 => fixed disk
// 0x03 => optical disk
// 0x0e => Logical evidence file (LEV or L01)
// 0x10 => memory (RAM/process)
MediaType mediaType;
byte[] unknown = { 0x00, 0x00, 0x00 };
int chunkCount;
// TODO: Have the chunker set these, or be set from these.
int sectorsPerChunk;
int bytesPerSector;
//TODO: Confirm long vs int
long sectorCount;
int cylinders;
// Per cylinder, I think
int heads;
// Per head, I think
int sectors;
int mediaFlag;
static final int IS_IMAGE_FILE_FLAG = 0x01;
static final int IS_PHYSICAL_FLAG = 0x02;
static final int FASTBLOC_USED_FLAG = 0x04;
static final int TABLEAU_BLOCK_USED_FLAG = 0x08;
// ???
int palmVolumeStartSector;
byte[] unknown2 = {0x00, 0x00, 0x00, 0x00};
int smartLogStartSector;
// I don't think this corresponds exactly to zlib levels, unsure though.
int compressionLevel;
//TODO: Tie this to our error handling in drive reader?
int errorGranularity;
byte[] unknown3 = {0x00, 0x00, 0x00, 0x00};
byte[] fileSetGUID = new byte[16];
// I am not typing this one out.
byte[] unknown4 = new byte[963];
//???
byte[] signature = new byte[5];
int secondaryAdler32 = 0;
protected VolumeSection(long currentOffset, Device disk, byte[] guid) {
this("volume", currentOffset, disk, guid);
}
protected VolumeSection(String typeString, long currentOffset, Device disk, byte[] guid) {
super(currentOffset, typeString);
this.sectionSize += ADDITIONAL_SECTION_SIZE;
this.nextOffset += ADDITIONAL_SECTION_SIZE;
// Maybe leave these as 0?
this.cylinders = 0;
this.heads = 0;
this.sectors = 0;
this.fileSetGUID = guid;
// TODO: Set these more appropriately.
this.mediaType = MediaType.FIXED_STORAGE_MEDIA; // Should match actual media type.
this.compressionLevel = 0x01; //Make configurable maybe?
this.errorGranularity = 1; // Should be tied to our reading granularity.
this.mediaFlag = IS_IMAGE_FILE_FLAG | IS_PHYSICAL_FLAG; //Should match actual.
this.sectorsPerChunk = 1024; //Must match compressed chunker
this.smartLogStartSector = 0; // Measured from the end of media.
this.palmVolumeStartSector = 0; //I have no idea what this is.
this.bytesPerSector = 512; // Compressed chunker needs this too. Almost always 512.
// These will get updated manually later.
this.chunkCount = 0;
this.sectorCount = 0;
}
void correctSizeInformation(int chunkCount, long sectorCount) {
this.chunkCount = chunkCount;
this.sectorCount = sectorCount;
secondaryAdler32 = 0;
}
int getSecondaryAdler32(){
if (secondaryAdler32 == 0) {
Adler32 adlerCalc = new Adler32();
ByteBuffer bytes = getPartialBytes();
bytes.flip();
adlerCalc.update(bytes);
secondaryAdler32 = (int) adlerCalc.getValue();
}
return secondaryAdler32;
}
byte[] getAdditionalBytes() {
ByteBuffer bytes = getPartialBytes();
bytes.order(ByteOrder.LITTLE_ENDIAN);
bytes.putInt(getSecondaryAdler32());
return bytes.array();
}
private ByteBuffer getPartialBytes() {
ByteBuffer bytes = ByteBuffer.allocate(ADDITIONAL_SECTION_SIZE);
bytes.order(ByteOrder.LITTLE_ENDIAN);
bytes.put(mediaType.value);
bytes.put(unknown);
bytes.putInt(chunkCount);
bytes.putInt(sectorsPerChunk);
bytes.putInt(bytesPerSector);
bytes.putLong(sectorCount);
bytes.putInt(cylinders);
bytes.putInt(heads);
bytes.putInt(sectors);
bytes.putInt(mediaFlag);
bytes.putInt(palmVolumeStartSector);
bytes.put(unknown2);
bytes.putInt(smartLogStartSector);
bytes.putInt(compressionLevel);
bytes.putInt(errorGranularity);
bytes.put(unknown3);
bytes.put(fileSetGUID);
bytes.put(unknown4);
bytes.put(signature);
return bytes;
}
enum MediaType {
REMOVABLE_MEDIA(0x00), FIXED_STORAGE_MEDIA(0x01), OPTICAL_DISK(0x03), LOGICAL_EVIDENCE_FILE(0x0E), PHYSICAL_MEMORY(0x10);
final byte value;
MediaType(int value) {
this.value = (byte) value;
}
}
}
TEST(MathMetaMix, is_var_eigen_test) {
using stan::is_var_eigen;
using stan::math::var;
using stan::math::var_value;
EXPECT_TRUE((is_var_eigen<var_value<Eigen::MatrixXd>>::value));
EXPECT_TRUE((is_var_eigen<var_value<Eigen::ArrayXd>>::value));
EXPECT_TRUE((is_var_eigen<var_value<Eigen::VectorXd>>::value));
EXPECT_TRUE((is_var_eigen<var_value<Eigen::RowVectorXd>>::value));
Eigen::MatrixXd A(10, 10);
Eigen::MatrixXd B(10, 10);
EXPECT_FALSE((is_var_eigen<decltype(A * B)>::value));
EXPECT_FALSE((is_var_eigen<Eigen::MatrixXd>::value));
EXPECT_FALSE((is_var_eigen<Eigen::VectorXd>::value));
EXPECT_FALSE((is_var_eigen<Eigen::RowVectorXd>::value));
EXPECT_FALSE((is_var_eigen<Eigen::Matrix<var, -1, -1>>::value));
EXPECT_FALSE((is_var_eigen<Eigen::Matrix<var, -1, 1>>::value));
EXPECT_FALSE((is_var_eigen<Eigen::Matrix<var, 1, -1>>::value));
EXPECT_FALSE((is_var_eigen<double>::value));
EXPECT_FALSE((is_var_eigen<var_value<double>>::value));
}
'use strict';
var util = require('util');
var Base = require('./base');
module.exports = Isolation;
function Isolation () {
return Base.apply(this, arguments);
}
util.inherits(Isolation, Base);
require('../extend-with-factories')(Isolation);
Isolation.prototype.parse = function (attrs) {
attrs = Base.prototype.parse.call(this, attrs); // always call base parse for loopback toJSON
if (this.id()) {
var qs = {
isIsolationGroupMaster: false,
isolated: this.id(),
githubUsername: attrs.ownerUsername
};
this.instances = this.user.newInstances([], {
qs: qs,
reset: false
});
}
return attrs;
};
Isolation.prototype.urlPath = 'isolations';
package main
import (
"flag"
"os"
"github.com/rosenhouse/jamf/application"
)
var (
TargetBaseURL string
)
func main() {
flag.StringVar(&TargetBaseURL, "t", "http://localhost:8888", "Target server's base URL")
flag.Parse()
app := application.App{
TargetBaseURL: TargetBaseURL,
LogWriter: os.Stderr,
}
app.Run()
}
\section{Introduction}
It is well known that Asymptotic Giant Branch (AGB) stars rapidly lose their mass due to the strong stellar wind. The dense and strong stellar wind forms a circumstellar envelope (CSE), which is the main site of SiO, H$_{2}$O, and OH masers around oxygen-rich AGB stars. Various SiO maser lines arise from a distance of 2--4 stellar radii and show a ring-like structure with inflow or outflow motions below a dust formation layer \citep{1994ApJ...430L..61D,2003ApJ...599.1372D,2015A&A...576A..70P}. The 22.2 GHz H$_{2}$O maser arises partially in the dust formation layer and partially at greater radii above the dust layer and represents acceleration motions of stellar winds in the CSE \citep{1978ApJ...222..132R,1981ARA&A..19..231R,2003ApJ...590..460I}. Maps of the H$_{2}$O and SiO masers, registered to the stellar continuum, indicate that the star is at the center of shell-like maser emission in the case of W Hya \citep{1990ApJ...360L..51R,2007ApJ...671.2068R}. Therefore, a combined study of the H$_{2}$O and SiO masers enables us to investigate the formation and development of stellar winds.
However, previous VLBI observations of the H$_{2}$O and SiO masers have been performed separately, due to the lack of a simultaneous observation system for the H$_{2}$O and SiO masers. The Korean VLBI Network (KVN) is equipped with a quasi-optics system for simultaneous observations at the K (21--23 GHz), Q (42--44 GHz), W (85--95 GHz), and D (125--142 GHz) bands \citep{2008IJIMW..29...69H}. Therefore, the KVN Key Science Project (KSP) on evolved stars was started for the combined study of H$_{2}$O and SiO masers in the CSE (\url{https://radio.kasi.re.kr/kvn/ksp.php}, \citet{2018IAUS..336..359C}). The first stage of the KSP on evolved stars focused on nine objects for which astrometrically registered maps for both the H$_{2}$O and SiO masers have been successfully obtained.
Here we present the astrometrically registered maps for both the H$_{2}$O and SiO maser lines at three epochs for the semiregular variable star R Crateris (R Crt). R Crt is classified as an SRb star with the spectral type of M7 \citep{2017ARep...61...80S}. SRc type indicates super-giant stars, and SRa type variables show a persistent periodicity with smaller light-amplitudes ($<$2.5 mag in V) than Mira variables. In contrast, SRb type variables show uncertain or superimposed periodicities, such as one or more overtones. The role of the overtone pulsation mode in the maser properties and the mass loss process has rarely been investigated. In this sense, R Crt is a high priority target for the study of overtone-pulsators emitting SiO, H$_{2}$O and OH masers \citep{2001A&A...378..522E, 2010ApJS..188..209K}. The mass loss rate of R Crt was estimated to be 8.0$\times$10$^{-7}$M$_{\odot}$yr$^{-1}$ \citep{2002A&A...391.1053O}, relatively higher than that of usual semiregular variables \citep{1998ApJS..117..209K}. The estimated distance to R Crt is somewhat uncertain, with estimates ranging from 170 to 300 pc \citep{1998ApJS..117..209K, 1999MNRAS.304..415S, 2001PASJ...53.1231I}.
\section{Observations and Data Reduction}
We performed simultaneous VLBI monitoring observations of H$_{2}$O 6$_{16}$--5$_{23}$ (22.23508 GHz) and SiO v=1, 2, J=1$\rightarrow$0, SiO v=1, J=2$\rightarrow$1, 3$\rightarrow$2 (43.12208, 42.82058, 86.24344, 129.36335 GHz) masers toward R Crt with the KVN, which consists of three 21 m radio telescopes \citep{2011PASP..123.1398L}. The monitoring was carried out at 11 epochs from Oct 2014 to Feb 2016. In this paper, we present three epochs of observations, which show the astrometrically registered maps of the H$_{2}$O and SiO masers. The remaining data will be presented in a forthcoming paper (D. J. Kim et al. in prep.). The correlator coordinates used for R Crt were R.A.=11:00:33.850, Dec.=--18:19:29.60. The sizes of the synthesized beams are typically 6/3/1.5 mas at the K/Q/W bands, and the system temperatures were up to 220/210/450/800 K (epoch 1), 200/300/800/1200 K (epoch 2), and 140/200/400/300 K (epoch 3) at K/Q/W/D bands respectively. In total, 16 intermediate frequencies (IFs) were used (6/6/2/2 for K/Q/W/D bands), and each IF had a 16 MHz bandwidth.
The schedule consisted of alternating $\sim$2 min scans between the target source R Crt and a continuum calibrator source J1048-1909 using simultaneous 4-band observations. The angular separation between R Crt and J1048-1909 is 3.06 degrees. J1048-1909 has a positional accuracy of 0.06 mas in R.A. and 0.09 mas in Dec. \citep{2009ITN....35....1M}. A fringe finder, 4C39.25, was also observed for 5 min every hour. Total observation time was about 7 hr for each epoch. We used the Mark5B system for data recording and playback, which has a maximum recording rate of 1 Gbps. The correlation was performed using the DiFX software correlator with a spectral resolution of 512 channels per IF, providing velocity resolutions of 0.42, 0.22, 0.11, and 0.07 km s$^{-1}$ for line observations at K, Q, W, and D bands respectively. We used the Astronomical Image Processing System (AIPS) package for the data reductions.
We applied conventional phase referencing (PR) techniques for the astrometric measurement of the position of the 22.2 GHz H$_{2}$O maser with respect to the external reference source, J1048-1909. Next, we used the Source Frequency Phase Referencing (SFPR) technique to attain a bonafide astrometric registration of the multiple SiO maser lines (42.8, 43.1, and 86.2 GHz). The combination of the simultaneous multi-frequency capability of the KVN and the SFPR analysis results in precise absolute positions of the maser lines, when the external reference source has precise absolute coordinates. The basis of the SFPR calibration strategy is presented in \cite{2011AJ....141..114R}, and the first application to a spectral line is presented in \cite{2014AJ....148...97D}. The positions of the maser spots in the PRed and SFPRed maps were measured using the two-dimensional Gaussian fitting task in AIPS, and artificial components were filtered out by the requirement that they must appear in more than three successive velocity channels in the same maser feature.
\section{Results}
As shown in Figure 1, the H$_{2}$O masers toward R Crt are only distributed in the southern part of the SiO masers and spread out over 100$\times$80 mas with several distinct maser features. The spatial distribution of the H$_{2}$O maser is asymmetric, but the SiO masers show a ring-like structure of 30 mas in size. The 129.3 GHz SiO maser map was not obtained. Figure 2 displays the total power (solid line) and correlated flux (dotted line) spectra of each maser line at three epochs. The fractions of missing flux were derived from single-dish observations with the KVN Yonsei telescope, and they range from 20 to 80\%. As a general trend, the missing flux increases with resolution. The 86.2 GHz SiO maser shows a higher missing flux rate (up to 80 \%) and a wider velocity width than the other maser lines. The full velocity widths of the H$_{2}$O and SiO maser spectra are comparable (from V$_{LSR}$=3 to 16 km s$^{-1}$) except for the 86.2 GHz SiO maser (from V$_{LSR}$=-2 to 18 km s$^{-1}$). The peak flux of the H$_{2}$O maser ($\sim$hundreds of Jy) is much higher than those of the SiO masers ($\sim$dozens of Jy).
\begin{figure}
\figurenum{1}
\includegraphics[scale=0.45]{figure1.eps}
\caption{Astrometrically registered integrated intensity-velocity maps of the H$_{2}$O and SiO masers.
The peak fluxes of the 22.2/42.8/43.1/86.2 GHz masers are 684.7/7.9/19.5/13.5, 395.5/28.8/2.8/2.8 and 453.8/15.0/1.8/16.0 Jy beam$^{-1}$ km s$^{-1}$ at epoch 1, 2, and 3 respectively. The contour levels are plotted with log scale based on the peak fluxes.}
\end{figure}
\begin{figure*}[ht!]
\centering
\figurenum{2}
\includegraphics[scale=0.4]{figure2.eps}
\caption{Total power (solid) and correlated flux (dashed) spectra of the H$_{2}$O and SiO masers at the 3 epochs of observations. The total power spectra were obtained from the KVN Yonsei telescope. The fraction of missing flux is marked at the top-left corner. Vertical dotted lines indicate the stellar velocity of R Crt (V$_{LSR}$=10.8 km s$^{-1}$).}
\end{figure*}
Figure 3 shows the position-velocity spot maps of the masers. The majority of maser spots are blue-shifted in both the H$_{2}$O and SiO masers. The high velocity spots of the 86.2 GHz maser appear at epoch 2 (Jan 7, 2016), and their locations are marked with a blue dashed square in Figure 3. The high velocity spots are blue-shifted up to 13 km s$^{-1}$ with respect to the stellar velocity of R Crt (V$_{LSR}$=10.8 km s$^{-1}$), which was measured from the CO J=1$\rightarrow$0 and J=2$\rightarrow$1 lines \citep{1994A&A...290..183K}, and they exceed the terminal velocity of R Crt, 10.3 km s$^{-1}$ \citep{1992A&AS...93..121N}. We estimated the central position and size of the ring-like structure of the SiO masers with a least-squares minimization fitting method. No fit was possible for the 42.8 GHz SiO maser spots due to their insufficient number. Both the 43.1 and 86.2 GHz SiO maser spots were used for determining the central star position. Figure 4 and Table 1 show the fitting results. The deduced absolute coordinate of the central star is RA=11:00:33.8201, Dec.=--18:19:29.618, which is the mean value over epochs 2 and 3. Epoch 1 was excluded from the averaging to minimize uncertain factors such as proper motion and annual parallax. The fitted radius of the ring-like structures is in the range of 13.35 to 13.84 mas for the 43.1 GHz SiO maser and 11.76 to 12.72 mas for the 86.2 GHz SiO maser.
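The least-squares ring fit described above can be sketched as follows (our own illustration, not the pipeline actually used: the standard algebraic Kåsa circle fit, which reduces the problem to linear least squares; the synthetic spot positions are hypothetical):

```python
import numpy as np

def fit_ring(x, y):
    # Algebraic least-squares circle fit (Kasa method):
    # solve x^2 + y^2 = 2a*x + 2b*y + c in the least-squares sense;
    # center is (a, b), radius is sqrt(c + a^2 + b^2).
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)

# Synthetic maser spots on a 13 mas ring centered at (1, -2) mas.
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
x = 1 + 13 * np.cos(t)
y = -2 + 13 * np.sin(t)
print(fit_ring(x, y))  # approximately (1.0, -2.0, 13.0)
```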
\begin{figure*}[ht!]
\figurenum{3}
\centering
\includegraphics[scale=0.4]{figure3.eps}
\caption{The position-velocity spot maps of the H$_{2}$O and SiO masers. The size of a spot is proportional to its flux density. Upper: Maps of the H$_{2}$O maser spots. The red crosses indicate the central positions of the SiO ring-like structures. The black cross represents the position of R Crt measured by {\sc gaia} satellite. Lower: Maps of the 42.8 (triangle), 43.1 (square) and 86.2 (circle) GHz SiO maser spots. The spots inside the dotted-blue squares present the highly blue-shifted components of the 86.2 GHz SiO masers (--2 to 0 km s$^{-1}$).}
\end{figure*}
The H$_{2}$O maser features show coincident spatial distributions over three epochs although they show remarkable changes in their intensities. In the total power spectra, the peak intensity of the blue-shifted components diminish, whilst the red-shifted components increase (Figure 2). The SiO masers present rapid changes in the number of spots, intensities, and positions during a short time interval between epochs 2 and 3 (19 days).
\begin{figure*}[ht!]
\centering
\figurenum{4}
\includegraphics[scale=0.3]{figure4.eps}
\caption{Upper: Spatial distributions of the 42.8 (red), 43.1 (green) and 86.2 GHz SiO (grey) maser spots along with superimposed fitted rings for the 43.1 and 86.2 GHz SiO maser spots, shown by the large green continuous and grey-dashed circles, respectively \citep{2018IAUS..336..359C}. The size of an individual spot is proportional to the flux density. The red crosses mark the central position of the rings. Lower: The number of maser spots according to the distance from the center. Each color indicates the 42.8 (red), 43.1 (green) and 86.2 GHz
SiO (grey) masers.}
\end{figure*}
\floattable
\begin{deluxetable}{cccccccccc}
\tablecaption{The radius and central position of the ring-like structure of the SiO maser lines.}
\tablecolumns{11}
\tablenum{1}
\tablewidth{0pt}
\tablehead{
\colhead{Epoch} & \colhead{Rest frequency} & \multicolumn{2}{c}{Ring radius*} & \colhead{Fitting error} & \multicolumn{2}{c}{Converted coordinate (J2000)}& $\sqrt{\Delta\alpha^{2} + \Delta\delta^{2}}$ \\
&\colhead{(GHz)} & \colhead{(mas)} & \colhead{(AU)} & \colhead{(mas)} & \colhead{R.A.} & \colhead{Dec.} & (mas)
}
\startdata
1& 43.122 & 13.35 &2.27&0.34 & &\\
(May 21, 2015)& 86.243 & 12.72 &2.16&1.08 & 11:00:33.8204 &--18:19:29.619 & 4.1\\
\hline
2& 43.122 & 13.84 & 2.35 &0.20 & & \\
(Jan 7, 2016)& 86.243 & 11.76 & 2.00&0.73 & 11:00:33.8202 &--18:19:29.621 & 3.7 \\
\hline
3& 43.122 & 13.45 & 2.29 & 0.52 & & \\
(Jan 26, 2016)& 86.243 & 11.96 & 2.03& 0.87 & 11:00:33.8201 &--18:19:29.616& 3.9\\
\enddata
\tablenotetext{*}{The reference coordinate of R Crt used in the observations is R.A.=11:00:33.850 Dec.=--18:19:29.60 (J2000).
A distance of 170 pc was assumed to convert the ring radius to AU \citep{2001PASJ...53.1231I}. The fitting error column lists the RMS values of the fits.}
\end{deluxetable}
\section{Discussion}
\subsection{Comparison among the SiO v=1, 2 J=1$\rightarrow$0 (43.1, 42.8 GHz) and v=1, J=2$\rightarrow$1 (86.2 GHz) masers} \label{sec:Discussion1}
The most striking result from our observations is the discovery that the 86.2 GHz SiO maser spots are located closer to the central star, with a wider velocity range, than those of the 43.1 and 42.8 GHz SiO masers, as shown in Figure 4 and Table 1. This is the opposite of what has hitherto been found. In the case of fundamental-pulsators such as WX Psc, R Leo, and $\chi$ Cyg \citep{2004A&A...426..131S,2007A&A...468L...1S}, the 86.2 GHz maser was distributed in comparable or more distant regions (up to $\sim$30\% farther) from the central star than the 43.1 and 42.8 GHz masers. Line overlap effects between the H$_{2}$O and SiO emission were proposed for the interpretation of the 86.2 GHz maser location \citep{2014A&A...565A.127D}. Radiative Transfer (RT) models based on the large velocity gradient (LVG) assumption, which assume localized maser amplification, predict that the 86.2 GHz maser would be stronger and located farther from the central star than the 43.1 GHz maser, with comparable velocity ranges \citep{2002A&A...386..256H,2009MNRAS.394...51G}. Such results are consistent with the previous VLBI observations of the fundamental-pulsators \citep{2004A&A...426..131S,2007A&A...468L...1S}, but not with R Crt.
R Crt is an SRb type variable showing superimposed periodicities in the optical light curve and OH maser variability \citep{2001A&A...378..522E,2003AcA....53..341P}. Our single-dish monitoring of H$_{2}$O and SiO masers also presents a signature of secondary variability (D. J. Kim et al. in prep.). SRb type variables are mainly overtone-pulsators, which have a short optical period with superposed periodicities, whereas other variables (SRa, SRc, and LPV) are characterised by a fundamental pulsation mode. Overtone pulsation would result in a more turbulent environment across the CSE and may produce different physical conditions for the masers compared to the fundamental-pulsators. In addition, \cite{1998A&A...334.1037H} detected high velocity components in the 86.2 GHz maser, which exceed the terminal velocity of the host AGB star. A statistical study points out that the high velocity wings of the 86.2 GHz maser lines appear predominantly in SRb type variables \citep{2015AJ....149..100M}. This tendency is not seen in SRa, SRc, and long period variables.
On the other hand, the non-local radiative transfer (RT) model occasionally shows a slightly wider velocity range of the 86.2 GHz maser than the 43.1 GHz maser, supporting our observational results \citep{2012A&A...545A.136Y,2015AJ....149..100M}. The non-local RT model considers all the velocity coherent regions along the line of sight to reflect the influence of distant maser clumps, which can contribute to the local maser amplification. This process is more likely to happen in overtone-pulsators rather than fundamental-pulsators, due to their complex dynamical environment induced by short and overlapping shock waves. Thus, the non-local RT model would be better at reproducing the 86.2 GHz SiO maser lines with a wider velocity range and maser distributions for SRb type variables than the RT model in fundamental-pulsators based on the LVG assumption \citep{2002A&A...386..256H,2009MNRAS.394...51G}.
However, the non-local RT model has never predicted that the 86.2 GHz maser features could appear at a smaller radius than those of the 43.1 GHz maser. The non-local RT model \citep{2012A&A...545A.136Y} used the hydrodynamic solutions for a fundamental-pulsator. Therefore, further studies of the non-local RT model based on the hydrodynamic CSE model for overtone-pulsators are required for interpreting our results and for evaluating the effect of different pulsation modes on the various SiO maser features. In addition, we need to consider the high fractional missing flux of the 86.2 GHz maser (up to 80 \%) and the relatively poor spatial sensitivity of the KVN, because it is possible that the 86.2 GHz maser features in the outer region of the 43.1 GHz maser of R Crt could have been resolved out or undetected due to their weak intensity. Therefore, follow-up VLBI observations of R Crt and other SRb type variables are also required to confirm the inner distribution of the 86.2 GHz SiO masers in overtone-pulsators. Additionally, adding more antennas and shorter baselines to the KVN would clarify these questions; such an enhancement is being planned.
\subsection{The stellar position and development of asymmetric structures in maser features} \label{sec:Discussion2}
The stellar position is a crucial parameter for analyzing the morphology and dynamics of the SiO and H$_{2}$O masers. Our astrometric observation scheme, using PR and SFPR, has provided the accurate positions of the SiO and H$_{2}$O maser spots. The position of the central star is estimated by fitting rings to the SiO maser features. The fitting results and errors are listed in Table 1. The SFPR technique typically results in a relative positional error of less than 1 mas between the H$_{2}$O and SiO masers \citep{2018NatCo...9.2534Y}. This is dominated by the ring fitting error, which is about 1.87 mas, corresponding to three times the mean RMS value of the fits. The total relative positional error between the central star and the H$_{2}$O maser spots will be less than 3 mas, which is 8 times more accurate than the positional error of the central star determined by the three-dimensional velocity field of the 22.2 GHz H$_{2}$O maser \citep{2001PASJ...53.1231I}.
The astrometric performance of the PR observation using the KVN has not yet been explicitly demonstrated, as this aspect is still undergoing commissioning. However, if we extrapolate from the expected behavior as found in \citet{2006A&A...452.1099P}, we obtain an astrometric positional error, for a 3.06$^{\circ}$ calibrator-source separation, of about $\sim$2 mas. Combined with the ring fitting errors mentioned above, we expect positional accuracies of $\sim$4 mas. In Table 1, we find positional differences of 2 mas (R.A.) and 5 mas (Dec.) between epochs 2 and 3, which must be due to measurement errors as there is only a short time separation of 19 days. This is consistent with the expected astrometric error. Therefore, we estimated the absolute stellar position (R.A.=11:00:33.8201, Dec.=--18:19:29.618) as the mean position obtained from epochs 2 and 3. The observed {\sc gaia} position for their reference epoch (Jun 2015), marked with a black cross in Figure 3, can be compared to our epoch 1 (May 2015) data directly as the proper motion and parallax contributions will be negligible. In this case the {\sc gaia} coordinates are 11:00:33.820623, --18:19:29.6219571, which are offset from our epoch 1 position by (3, 3) mas on the sky.
In Figure 1, the asymmetric spatial distribution of the H$_{2}$O maser with respect to the ring-like structure of the SiO maser cannot be interpreted as a spherically expanding shell. Observations with the Japanese VLBI Network suggested a possible bipolar outflow based on the three-dimensional velocity field of the H$_{2}$O maser \citep{2001PASJ...53.1231I}. However, the H$_{2}$O maser in Figure 1 shows a possible one-sided outflow toward the south of the estimated position of the central star in Figure 3. The H$_{2}$O maser in Figure 2 also shows a significant intensity variation between May 2015 and Jan 2016. In the case of the red hypergiant NML Cyg, which shows a bipolar outflow in its H$_{2}$O maser features, the central star is located close to the prominent blue-shifted H$_{2}$O maser features rather than at the center between the red-shifted and blue-shifted outflow features \citep{2012A&A...544A..42Z}. It is still uncertain what physical process drives the development of the asymmetric structure of the H$_{2}$O maser. To investigate how the highly asymmetric one-sided outflow of the H$_{2}$O maser develops from the ring-like SiO maser features in R Crt, we may need to measure the proper motions of both the H$_{2}$O and SiO masers through intensive VLBI monitoring observations.
\section{Summary} \label{sec:Summary}
Simultaneous VLBI monitoring observations of the H$_{2}$O 6$_{16}$--5$_{23}$ and SiO v=1, 2, J=1$\rightarrow$0 and v=1, J=2$\rightarrow$1, J=3$\rightarrow$2 masers toward the semiregular variable R Crt were performed with the KVN from Oct 2014 to Feb 2016.
We obtained high-precision, ``bona fide'' astrometrically registered multi-frequency maps of the 22.2 GHz H$_{2}$O and 43.1/42.8/86.2 GHz SiO masers at three epochs with the SFPR method. The SiO masers show a ring-like structure, while the H$_{2}$O masers show a highly asymmetric one-sided outflow located only to the south of the central star. Based on these astrometrically registered maps, we determined the position of the central star with an accuracy of 3 mas relative to the H$_{2}$O masers. The estimated stellar position is consistent with {\sc gaia} DR2 data. Furthermore, the SiO v=1, J=2$\rightarrow$1 maser spots are distributed in the innermost region, with a radius about 15\% smaller than that of the SiO v=1, J=1$\rightarrow$0 maser. Some spots of the SiO v=1, J=2$\rightarrow$1 maser also show highly blue-shifted components exceeding the terminal velocity of R Crt. We suggest that these features may be related to the overtone pulsation mode of the SRb-type variable R Crt and the associated complex dynamics in its CSE. However, other overtone pulsators need to be investigated to confirm whether these properties are common.
\acknowledgments
This work was supported by the Basic and Fusion Research Programs (2014-2017). We are grateful to all of the staff members at KVN who helped to operate the array and the single dish telescope and to correlate the data. The KVN is a facility operated by KASI (Korea Astronomy and Space Science Institute), which is under the protection of the National Research Council of Science and Technology (NST). The KVN operations are supported by KREONET (Korea Research Environment Open NETwork) which is managed and operated by KISTI (Korea Institute of Science and Technology Information).
\bibliographystyle{aasjournal}
This study focuses on the application of higher order thinking skills in general education popular music studies at the post-secondary level. The literature review explores the history of post-secondary general education music courses (also known as "music appreciation") in the United States. The textbooks and philosophical underpinnings of popular music courses currently in widespread use are reviewed and critiqued.
Q: How can I take a snapshot of a command prompt window in full screen mode? I need to take a snapshot of a command prompt window running in full screen mode.
I have tried the PrintScreen, Ctrl+PrintScreen, and Ctrl+Alt+PrintScreen button(s), but nothing seems to work.
Also, is there a reason the Print Screen button does not work in full-screen command prompt mode? After all, it works for all windows under normal conditions.
Abdul Khaliq
A: In full screen mode all you have is text. There is no graphical `rendering' as such. If you can capture the text, it is enough ... though you can always reconstruct a png image later from the text (if you really have to get an image out of it).
A: Why don't you just use an external screenshot program?
There are many, e.g. Greenshot, which is free (as in speech and beer :-)).
A: Did you try Alt + Print Screen?
A: Click any window except the command window and then hit PrtScrn.
A: First of all, open cmd in full screen mode, then press the Print Screen button. After that, open Paint and press Ctrl+V (paste). You can save it wherever you want (the file type should be .png).
A: None of the other replies worked for me, and I can't install unapproved software due to IT policies. Here is what I did:
Right-click inside the command window and hit Select All. Then right-click the window's title bar (near the minimize/maximize controls), select Edit, then Copy. Open a Notepad window and paste. The advantage here is that you have text that can be copied and pasted back into a command window later. I hope this helps.
A: *
*press Ctrl+A // select all
*press Ctrl+C // copy all the text
*type notepad mytext.txt + press Enter // open Notepad
*press Ctrl+V // paste the text into Notepad
*press Ctrl+S // save the file
*press Ctrl+W // close Notepad.
.class public final Landroid/view/SurfaceControl$PhysicalDisplayInfo;
.super Ljava/lang/Object;
.source "SurfaceControl.java"
# annotations
.annotation system Ldalvik/annotation/EnclosingClass;
value = Landroid/view/SurfaceControl;
.end annotation
.annotation system Ldalvik/annotation/InnerClass;
accessFlags = 0x19
name = "PhysicalDisplayInfo"
.end annotation
# instance fields
.field public appVsyncOffsetNanos:J
.field public density:F
.field public height:I
.field public presentationDeadlineNanos:J
.field public refreshRate:F
.field public secure:Z
.field public width:I
.field public xDpi:F
.field public yDpi:F
# direct methods
.method public constructor <init>()V
.registers 1
.prologue
.line 488
invoke-direct {p0}, Ljava/lang/Object;-><init>()V
return-void
.end method
.method public constructor <init>(Landroid/view/SurfaceControl$PhysicalDisplayInfo;)V
.registers 2
.param p1, "other" # Landroid/view/SurfaceControl$PhysicalDisplayInfo;
.prologue
.line 491
invoke-direct {p0}, Ljava/lang/Object;-><init>()V
.line 492
invoke-virtual {p0, p1}, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->copyFrom(Landroid/view/SurfaceControl$PhysicalDisplayInfo;)V
.line 491
return-void
.end method
# virtual methods
.method public copyFrom(Landroid/view/SurfaceControl$PhysicalDisplayInfo;)V
.registers 4
.param p1, "other" # Landroid/view/SurfaceControl$PhysicalDisplayInfo;
.prologue
.line 519
iget v0, p1, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->width:I
iput v0, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->width:I
.line 520
iget v0, p1, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->height:I
iput v0, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->height:I
.line 521
iget v0, p1, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->refreshRate:F
iput v0, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->refreshRate:F
.line 522
iget v0, p1, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->density:F
iput v0, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->density:F
.line 523
iget v0, p1, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->xDpi:F
iput v0, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->xDpi:F
.line 524
iget v0, p1, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->yDpi:F
iput v0, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->yDpi:F
.line 525
iget-boolean v0, p1, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->secure:Z
iput-boolean v0, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->secure:Z
.line 526
iget-wide v0, p1, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->appVsyncOffsetNanos:J
iput-wide v0, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->appVsyncOffsetNanos:J
.line 527
iget-wide v0, p1, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->presentationDeadlineNanos:J
iput-wide v0, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->presentationDeadlineNanos:J
.line 518
return-void
.end method
.method public equals(Landroid/view/SurfaceControl$PhysicalDisplayInfo;)Z
.registers 8
.param p1, "other" # Landroid/view/SurfaceControl$PhysicalDisplayInfo;
.prologue
const/4 v0, 0x0
.line 501
if-eqz p1, :cond_46
.line 502
iget v1, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->width:I
iget v2, p1, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->width:I
if-ne v1, v2, :cond_46
.line 503
iget v1, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->height:I
iget v2, p1, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->height:I
if-ne v1, v2, :cond_46
.line 504
iget v1, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->refreshRate:F
iget v2, p1, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->refreshRate:F
cmpl-float v1, v1, v2
if-nez v1, :cond_46
.line 505
iget v1, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->density:F
iget v2, p1, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->density:F
cmpl-float v1, v1, v2
if-nez v1, :cond_46
.line 506
iget v1, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->xDpi:F
iget v2, p1, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->xDpi:F
cmpl-float v1, v1, v2
if-nez v1, :cond_46
.line 507
iget v1, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->yDpi:F
iget v2, p1, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->yDpi:F
cmpl-float v1, v1, v2
if-nez v1, :cond_46
.line 508
iget-boolean v1, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->secure:Z
iget-boolean v2, p1, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->secure:Z
if-ne v1, v2, :cond_46
.line 509
iget-wide v2, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->appVsyncOffsetNanos:J
iget-wide v4, p1, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->appVsyncOffsetNanos:J
cmp-long v1, v2, v4
if-nez v1, :cond_46
.line 510
iget-wide v2, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->presentationDeadlineNanos:J
iget-wide v4, p1, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->presentationDeadlineNanos:J
cmp-long v1, v2, v4
if-nez v1, :cond_46
const/4 v0, 0x1
.line 501
:cond_46
return v0
.end method
.method public equals(Ljava/lang/Object;)Z
.registers 3
.param p1, "o" # Ljava/lang/Object;
.prologue
.line 497
instance-of v0, p1, Landroid/view/SurfaceControl$PhysicalDisplayInfo;
if-eqz v0, :cond_b
check-cast p1, Landroid/view/SurfaceControl$PhysicalDisplayInfo;
.end local p1 # "o":Ljava/lang/Object;
invoke-virtual {p0, p1}, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->equals(Landroid/view/SurfaceControl$PhysicalDisplayInfo;)Z
move-result v0
:goto_a
return v0
.restart local p1 # "o":Ljava/lang/Object;
:cond_b
const/4 v0, 0x0
goto :goto_a
.end method
.method public hashCode()I
.registers 2
.prologue
.line 515
const/4 v0, 0x0
return v0
.end method
.method public toString()Ljava/lang/String;
.registers 5
.prologue
.line 533
new-instance v0, Ljava/lang/StringBuilder;
invoke-direct {v0}, Ljava/lang/StringBuilder;-><init>()V
const-string/jumbo v1, "PhysicalDisplayInfo{"
invoke-virtual {v0, v1}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v0
iget v1, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->width:I
invoke-virtual {v0, v1}, Ljava/lang/StringBuilder;->append(I)Ljava/lang/StringBuilder;
move-result-object v0
const-string/jumbo v1, " x "
invoke-virtual {v0, v1}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v0
iget v1, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->height:I
invoke-virtual {v0, v1}, Ljava/lang/StringBuilder;->append(I)Ljava/lang/StringBuilder;
move-result-object v0
const-string/jumbo v1, ", "
invoke-virtual {v0, v1}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v0
iget v1, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->refreshRate:F
invoke-virtual {v0, v1}, Ljava/lang/StringBuilder;->append(F)Ljava/lang/StringBuilder;
move-result-object v0
const-string/jumbo v1, " fps, "
invoke-virtual {v0, v1}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v0
.line 534
const-string/jumbo v1, "density "
.line 533
invoke-virtual {v0, v1}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v0
.line 534
iget v1, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->density:F
.line 533
invoke-virtual {v0, v1}, Ljava/lang/StringBuilder;->append(F)Ljava/lang/StringBuilder;
move-result-object v0
.line 534
const-string/jumbo v1, ", "
.line 533
invoke-virtual {v0, v1}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v0
.line 534
iget v1, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->xDpi:F
.line 533
invoke-virtual {v0, v1}, Ljava/lang/StringBuilder;->append(F)Ljava/lang/StringBuilder;
move-result-object v0
.line 534
const-string/jumbo v1, " x "
.line 533
invoke-virtual {v0, v1}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v0
.line 534
iget v1, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->yDpi:F
.line 533
invoke-virtual {v0, v1}, Ljava/lang/StringBuilder;->append(F)Ljava/lang/StringBuilder;
move-result-object v0
.line 534
const-string/jumbo v1, " dpi, secure "
.line 533
invoke-virtual {v0, v1}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v0
.line 534
iget-boolean v1, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->secure:Z
.line 533
invoke-virtual {v0, v1}, Ljava/lang/StringBuilder;->append(Z)Ljava/lang/StringBuilder;
move-result-object v0
.line 535
const-string/jumbo v1, ", appVsyncOffset "
.line 533
invoke-virtual {v0, v1}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v0
.line 535
iget-wide v2, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->appVsyncOffsetNanos:J
.line 533
invoke-virtual {v0, v2, v3}, Ljava/lang/StringBuilder;->append(J)Ljava/lang/StringBuilder;
move-result-object v0
.line 536
const-string/jumbo v1, ", bufferDeadline "
.line 533
invoke-virtual {v0, v1}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v0
.line 536
iget-wide v2, p0, Landroid/view/SurfaceControl$PhysicalDisplayInfo;->presentationDeadlineNanos:J
.line 533
invoke-virtual {v0, v2, v3}, Ljava/lang/StringBuilder;->append(J)Ljava/lang/StringBuilder;
move-result-object v0
.line 536
const-string/jumbo v1, "}"
.line 533
invoke-virtual {v0, v1}, Ljava/lang/StringBuilder;->append(Ljava/lang/String;)Ljava/lang/StringBuilder;
move-result-object v0
invoke-virtual {v0}, Ljava/lang/StringBuilder;->toString()Ljava/lang/String;
move-result-object v0
return-object v0
.end method
Q: Prove the language $\{a^k b^l : k \neq l \}$ is not regular
Prove that the following language is not regular:
$$L=\{a^k b^l : k,l \ge0, k\ne l\}$$
The problem is that I am supposed to use "distinguished states", not the pumping lemma, which is usually used for this type of problem.
Please help!
A: You could cheat. If $L$ is regular, then its complement (in the regular language given by $a^* b^*$) is also regular. That complement is $\{ a^k b^k \mid k \geq 0 \}$ and you can show that is non-regular using the pumping lemma.
To totally avoid the pumping lemma, you could use the Myhill-Nerode Theorem; I'm paraphrasing the formulation below from Wikipedia. This may be what you were looking for, as it uses the concept of distinguishing extension.
Myhill-Nerode Theorem. Let $L \subseteq \Sigma^*$. For $x, y \in \Sigma^*$, a distinguishing extension (w.r.t. $L$) for $x$ and $y$ is a $z \in \Sigma^*$ for which one, but not both, of $xz$ and $yz$ is in $L$. On $\Sigma^*$, define an equivalence relation $\sim_L$ such that $x \sim_L y$ if and only if there is no distinguishing extension (w.r.t. $L$) for $x$ and $y$. Then $L$ is regular if and only if $\sim_L$ has finitely many equivalence classes.
You only need the $\Rightarrow$-part of this theorem, which boils down to the following.
Theorem. Let $L \subseteq \Sigma^*$. Suppose there is an infinite sequence $x_0, x_1, x_2, \dots$ of elements of $\Sigma^*$ such that for every $i \neq j$ there is a distinguishing extension $z$ (w.r.t. $L$) for $x_i$ and $x_j$. Then $L$ is not regular.
(The proof of this is essentially the argument given by Dennis Meng in his answer.)
For your language $L = \{ a^k b^l \mid k, l \geq 0, k \neq l \}$, the elements $a$, $aa$, $aaa$, $\dots$ are pairwise distinguishable, as $b^k$ is a distinguishing extension for $a^k$ and $a^l$ if $k \neq l$.
A: Here's another approach via proof-by-contradiction.
Assume for the sake of contradiction that $L$ was regular. Then, there must be a DFA that accepts the language. Let $n$ be the number of states this DFA has.
Now, consider the first $n+1$ strings of the following sequence:
$$a, aa, aaa, aaaa, ...$$
By the pigeonhole principle, we know there exist two of those strings which have the same end state when run on our DFA. Let $x,y$ be such that those two strings are $a^x$ and $a^y$, and $x < y$.
Now, consider the strings $a^xb^x$ and $a^yb^x$. Because of what we just showed, these two strings must also have the same end state when run on our DFA. Do you see how to derive the contradiction from there?
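To make the pigeonhole step concrete, here is a small sketch (an editorial illustration, not part of the original answers; the 3-state DFA and its transitions are arbitrary placeholders, since the argument works for any DFA). It finds two prefixes $a^x$ and $a^y$ that leave the DFA in the same state, and confirms that $a^xb^x$ and $a^yb^x$ must then end in the same state as well:

```python
def run(dfa, start, s):
    """Run a DFA (given as a transition dict) on string s; return the end state."""
    state = start
    for ch in s:
        state = dfa[(state, ch)]
    return state

# A toy 3-state DFA over {a, b}; the transitions are arbitrary --
# the pigeonhole argument below works for ANY DFA.
dfa = {
    (0, 'a'): 1, (0, 'b'): 2,
    (1, 'a'): 2, (1, 'b'): 0,
    (2, 'a'): 0, (2, 'b'): 1,
}
n = 3  # number of states

# Pigeonhole: among a^1, ..., a^(n+1), two prefixes must share an end state.
ends = {}
x = y = None
for k in range(1, n + 2):
    state = run(dfa, 0, 'a' * k)
    if state in ends:
        x, y = ends[state], k
        break
    ends[state] = k

# a^x and a^y reach the same state, so appending b^x to either
# leads to the same end state as well.
assert x is not None and x != y
assert run(dfa, 0, 'a' * x + 'b' * x) == run(dfa, 0, 'a' * y + 'b' * x)
```

Since both strings end in the same state, the DFA either accepts both or rejects both; but $a^xb^x \notin L$ while $a^yb^x \in L$ (because $x \neq y$), which is the contradiction.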
###### Question:

You are investigating an elevator accident which happened in a tall building. An elevator in this building is attached to a strong cable which runs over a pulley attached to a steel support in the roof. The other end of the cable is attached to a block of metal called a counterweight, which hangs freely. An electric motor on the side of the elevator drives the elevator up or down by exerting a force on the side of the elevator shaft. You suspect that when the elevator was fully loaded, there was too large a force on the motor. A fully loaded elevator at maximum capacity weighs 2400 lbs. The counterweight weighs 1000 lbs. The elevator always starts from rest at its maximum acceleration of g/4, whether it is going up or down. (a) What force does the wall of the elevator shaft exert on the motor if the elevator starts from rest and goes up? (b) What force does the wall of the elevator shaft exert on the motor if the elevator starts from rest and goes down?
• Subject: Re: Pushing a UserData pointer?
• From: Ben Sunshine-Hill <sneftel@...>
• Date: Mon, 16 May 2005 12:13:39 -0700

On 5/16/05, Chris Marrin <chris@marrin.com> wrote:
> Am I missing something in the public API, or is there really no more
> efficient way to move between C++ and Lua?

Wellllll.

\begin{hack}
If you know that a particular object is being stored in a userdata, you CAN refer to it as a userdata. All it takes is some pointer arithmetic.

Ya see, a userdata, as allocated on the heap, consists of a struct Udata (well, actually a union Udata, but only for alignment purposes; treat it as a struct) followed by an arbitrary amount of data. The struct portion is a GCable object, which can be set into a stack position with setuvalue (a macro defined in lobject.h). This function, therefore, should work, though it's untested. Written based on the 5.0.2 source. (You will, of course, need to modify the include to correctly penetrate into the internals of Lua; lobject.h is not a public header.)

#include <lobject.h>
void pushexistinguserdata(lua_State *L, void *ud)
{
    lua_lock(L);
    u = luaS_newudata(L, size);
    setuvalue(L->top, ((Udata*)ud)-1);
    api_incr_top(L);
    lua_unlock(L);
}

If you call this with a pointer that is not actually a pointer to a preexisting userdata, horrible horrible things will happen within the GC. Use with caution and revulsion.

\end{hack}

Ben
using System;
using System.Linq;
using UnityEngine;
using UnityEditor;
namespace Pica.Attribuite {
/// <summary>
/// Custom inspector for Object including derived classes.
/// </summary>
[CanEditMultipleObjects]
[CustomEditor(typeof(UnityEngine.Object), true)]
public class ObjectEditor : Editor {
public override void OnInspectorGUI() {
// Loop through all methods with no parameters
foreach(var method in target.GetType().GetMethods()
.Where(m => m.GetParameters().Length == 0)) {
// Get the ButtonAttribute on the method (if any)
var ba = (ButtonAttribute)Attribute.GetCustomAttribute(method, typeof(ButtonAttribute));
if(ba != null) {
// Determine whether the button should be enabled based on its mode
GUI.enabled = ba.mode == ButtonMode.AlwaysEnabled
|| (EditorApplication.isPlaying ? ba.mode == ButtonMode.EnabledInPlayMode : ba.mode == ButtonMode.DisabledInPlayMode);
// Draw a button which invokes the method
if(GUILayout.Button(ObjectNames.NicifyVariableName(method.Name))) {
foreach(var target in targets) {
method.Invoke(target, null);
}
}
GUI.enabled = true;
}
}
// Draw the rest of the inspector as usual
DrawDefaultInspector();
}
}
}
When you've got innovative solutions and creative ideas about marketing automation coming at you from all directions, it's easy to lose focus on the basic tactics of your email marketing operations. As Carolyn Acker's recent post discussed, marketers can pay dearly for major errors like sending the wrong email at the wrong time, or to the wrong person. But over time, sub-par email practices probably cost marketers just as much as, if not more than, the big embarrassing mistakes we fear!
So let's take this opportunity to revisit these email best practices that we all know, but may not be taking to heart with every press of that SEND button.
#1: Don't be a spammer: intentional or not.
Your prospects won't like you—if they ever even see your emails, which they probably won't. And spamming, whether on purpose or through negligence, reflects badly on your organization. Maintaining a spam-free marketing operation entails focus on two main areas: the CAN-SPAM Act, and spam filters.
The Unsubscribe link must always be easy to find, and not hidden in other text.
You must include the company address in the footer.
Provide contact details: name, phone, email.
Remember, the United States is opt-out; outside the U.S. is opt-in.
Develop content with one eye on spam filters.
Avoid trigger words like free, prize, and coupons; using click here more than 2-3 times can also set off spam filters.
Don't use ALL CAPS or excessive punctuation!!!!!
Don't use recipient name in the subject line; it's a common spammer tactic.
Avoid terms of extreme urgency like once in a lifetime, open immediately.
Include only a few links in any one email; excessive links are red flags.
#2: Code like it's 1999.
Email HTML is different from web HTML in a surprising number of ways. The best approach to designing effective, highly deliverable emails is simplicity. Treat your emails like very old-school web pages.
Stick with standard fonts: Arial, Helvetica, Times New Roman.
Keep email width to 650px or less.
Use tables instead of div tags, and use them sparingly.
Always paste plain, unformatted text into a WYSIWYG editor like Marketo or Eloqua, even if you have to cut and paste from another document first. Copying text directly from a word processor like Microsoft Word carries unexpected formatting tags into your code and can cause many odd and frustrating problems. Instead, paste the original copy into a text editor like Notepad or Notepad++, or an HTML editor like the code view in Dreamweaver, and from there into the WYSIWYG editor.
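To tie these rules together, here is a minimal sketch of an email skeleton (an illustrative editorial addition; the width, colors, URLs, and copy are placeholders, not recommendations from the original post). Note the table-based layout, the 650px width, a standard font stack, and a small-print footer with a street address and an easy-to-find Unsubscribe link, per the CAN-SPAM points in section #1:

```html
<!-- Table-based layout instead of divs; 650px wide; standard fonts only -->
<table width="650" cellpadding="0" cellspacing="0" border="0" bgcolor="#ffffff">
  <tr>
    <td style="font-family: Arial, Helvetica, sans-serif; font-size: 12px;">
      <p>Body copy goes here, with one clearly visible text link
         <a href="https://example.com/offer" style="color: #1a0dab;">above the fold</a>.</p>
    </td>
  </tr>
  <tr>
    <td style="font-family: Arial, Helvetica, sans-serif; font-size: 9px;">
      Example Corp, 123 Main St, Anytown, USA &middot;
      <a href="https://example.com/unsubscribe" style="color: #1a0dab;">Unsubscribe</a>
    </td>
  </tr>
</table>
```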
#3: Images are great, IF you use them right.
Images are actually valuable analytics tools! Most email clients download them on the click, allowing you to track opens, and of course they help make emails attractive and interesting. Here are a few keys for using them well.
Include meaningful ALT tags and title tags for usability; different browsers use them differently.
Do not put important information into an image unless it is also repeated in text; your reader may not see it.
If your main link is a button, make sure to use at least one text link as well; if the button doesn't render correctly for some reason, the reader can still access the linked information.
Use a background color even when using background images, to ensure something will appear even in clients that don't support background images.
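One easy check from the list above is catching `<img>` tags without meaningful ALT text before a send. Here is a small illustrative sketch using Python's built-in HTML parser; the helper names are choices made for this example.

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Collect the src of <img> tags whose alt attribute is missing or blank."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not (attrs.get("alt") or "").strip():
                self.missing.append(attrs.get("src", "?"))

def imgs_missing_alt(html):
    checker = AltChecker()
    checker.feed(html)
    return checker.missing
```

Run it over your template before testing; every src it returns is an image a text-only or images-off reader will see as nothing at all.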
#4: Make your emails easy for the reader.
Email effectiveness is all about compelling the reader to action. And to take action, they have to be able to easily understand your message. Make an effort to develop clear, readable, and logically structured email layouts.
All links should be the same color; multiple link colors confuse the reader.
Don't make titles or other copy the same color as the links; it irritates the reader who tries to click on them.
Include at least one link "above the fold" so it's easy to find.
Keep font sizes at a minimum of 12px to help with readability; footers are acceptable at 9-10px.
Plain text emails should be no wider than 60-80 characters, and you should use textual dividers to break up the copy: =====, ******, ------, white space, and the like.
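As a quick illustration, a plain-text section with a divider and a safe line width can be generated like this; the 70-character width and `=` divider are arbitrary choices within the guidelines above, and `plain_text_section` is a hypothetical helper.

```python
import textwrap

def plain_text_section(title, body, width=70):
    """Render one plain-text email section: divider, title, divider, wrapped body."""
    divider = "=" * width
    return "\n".join([divider, title, divider, textwrap.fill(body, width=width)])
```

Every line the helper emits stays within the chosen width, so the section survives even the most conservative plain-text clients.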
#5: Update your QA procedures regularly.
Things change, browsers are updated, email clients go in and out of fashion. So make sure your QA process stays up to date with the latest technologies.
Always test in every browser and email client that your recipients may be using, and whenever you can, take into account the platform and operating system on which emails are viewed.
On the desktop: Apple O/S popularity is increasing, but September statistics from NetMarketShare show that 89.8% of the desktop O/S market still belongs to Windows.
Smartphone/tablet: Mobile rendering is critical, with 61% of consumers reporting that they read at least some of their email with a mobile device (YesMail Interactive). In August, IDC reported that Android leads the smartphone O/S market, with Apple's iOS still strong at #2. Windows Phone has gained ground, while Blackberry OS and Linux are trailing. This September research report from Millennial Media reveals that among tablets, Apple is winning out. So don't neglect the mobile reader!
Use a top-rated testing service like Litmus. Do your research and choose the right tool for your circumstances.
After a fix or hack, don't just assume the issue is corrected. Test again!
Make sure to test with images displayed and hidden.
Don't forget to test text-only versions.
#6: There's always more to learn: get help.
DemandGen Campaign Execution Services . . .if you don't already know that!
Lori Mann is an expert in web and email design and deployment. As a DemandGen Production Specialist, she uses her advanced knowledge of HTML, XHTML, CSS, PHP, and MySql to build and deploy emails, landing pages, forms/smartforms, and microsites in clients' Eloqua and Marketo systems. Lori also helps clients make the most of their marketing programs through sophisticated email template development, script authoring for automated form input, and rigorous quality assurance testing.
\section{Introduction}
Consider the minimization of a conic quadratic function over a polyhedron, i.e.,
\begin{equation*}
(\ensuremath{\text{CO}}) \ \ \ \min_{x\in \ensuremath{\mathbb{R}}^n }\left\{c'x+\Omega\sqrt{x'Qx}: x \in X \right\},
\end{equation*}
where $c \in \ensuremath{\mathbb{R}}^n, \ Q \in \ensuremath{\mathbb{R}}^{n \times n}$ is a symmetric positive semidefinite matrix, $\Omega>0$, and $X \subseteq \ensuremath{\mathbb{R}}^n$ is a rational polyhedron.
We denote by CDO \ the discrete counterpart of \ensuremath{\text{CO}} \ with integrality restrictions: $X \cap \ensuremath{\mathbb{Z}}^n$. \ensuremath{\text{CO}} \ and CDO \ are frequently used to model utility with uncertain objectives as in parametric value-at-risk minimization \citep{EOO:worst-var}, portfolio optimization \citep{AJ:lifted-polymatroid}, and robust counterparts of linear programs with an ellipsoidal objective uncertainty set \citep{BenTal1998,BenTal1999,book:ro}.
Note that \ensuremath{\text{CO}} \ includes linear programming (LP) and convex quadratic programming (QP) as special cases. The simplex method \citep{Dantzig1955,Wolfe1959,VanDePanne1964} is still the most widely used algorithm for LP and QP, despite the fact that polynomial interior point algorithms \citep{Karmarkar1984,Nesterov1994,Nemirovskii1996} are competitive with the simplex method in many large-scale instances. Even though non-polynomial, the simplex method has some distinct advantages over interior point methods. Since the simplex method iterates over bases, it is possible to carry out the computations with high accuracy and little cost, while interior point methods come with a trade-off between precision and efficiency. Moreover, an optimal basis returned by the simplex method is useful for sensitivity analysis, while interior point methods do not produce such a basis unless an additional ``crashing" procedure is performed \citep[e.g.][]{Megiddo1991}. Finally, if the parameters of the problem change, re-optimization can often be done very fast with the simplex method starting from a primal or dual feasible basis, whereas warm starts with interior point methods have limitations \citep{YW:warmstart,CPT:warmstart}.
In particular, fast re-optimization with the dual simplex method is crucial when solving discrete optimization problems with a branch-and-bound algorithm.
\ensuremath{\text{CO}} \ is a special case of conic quadratic optimization \citep{Lobo1998,Alizadeh2003}, which can be solved by polynomial-time interior points algorithms \citep{Alizadeh1995,Nesterov1998,BTN:ModernOptBook}.
Although \ensuremath{\text{CO}} \ can be solved by a general conic quadratic solver, we show in this paper that iterative QP algorithms scale much better. In particular, simplex-based QP algorithms allowing warm starts perform orders of magnitude faster than interior point methods for \ensuremath{\text{CO}}.
For the discrete counterpart CDO, a number of different approaches are available for the special case with a diagonal $Q$ matrix: \citet{Ishii1981} give a polynomial-time algorithm for optimization over spanning trees; \citet{Bertsimas2004} propose an approximation algorithm that solves a series of linear integer programs; \citet{Atamturk2008a} give a cutting plane algorithm utilizing the submodularity of the objective for the binary case; \citet{AG:mixed-polymatroid} give nonlinear cuts for the mixed 0-1 case;
\citet{Atamturk2009} give a parametric $O(n^3)$ algorithm for the binary case with a cardinality constraint.
Maximization of the same objective over the binaries is \NP-hard \citep{AA:utility}.
The aforementioned approaches do not extend to the non-diagonal case or to general feasible regions, which are obviously \NP-hard
as quadratic and linear integer optimization are special cases.
The branch-and-bound algorithm is the method of choice for general CDO.
However, branch-and-bound algorithms that repeatedly employ a nonlinear programming (NLP) solver at the nodes of the search tree are typically hampered by the lack of effective warm starts. \citet{Borchers1994} and \citet{Leyffer2001} describe NLP-based branch-and-bound algorithms, and they give methods that branch without solving the NLPs to optimality, reducing the computational burden for the node relaxations. On the other hand, LP-based branch-and-bound approaches employ linear outer approximations of the nonlinear terms. This generally results in weaker relaxations at the nodes, compared to the NLP approaches, but allows one to utilize warm starts with the simplex method. Therefore, one is faced with a trade-off between the strength of the node relaxations and the solve time per node. A key idea to strengthen the node relaxations, as noted by \citet{Tawarmalani2005}, is to use extended formulations.
\citet{AN:conicmir} describe mixed-integer rounding inequalities in an extended formulation for conic quadratic integer programming.
\citet{Vielma2015} use an extended formulation for conic quadratic optimization that can be refined during branch-and-bound, and show that an LP-based branch-and-bound using the extended formulations typically outperforms the NLP-based branch-and-bound algorithms.
The reader is referred to \citet{jeff-minlp-review} for an excellent survey of the solution methods for mixed-integer nonlinear optimization.
\ignore{\cite{Vielma2008} use the extended formulation for SOCPs proposed by \cite{BenTal2001} to construct a tight initial LP approximation, and \cite{Hijazi2013} use univariate extended formulations for separable MINLPs.}
In this paper, we reformulate \ensuremath{\text{CO}} \ through the perspective of its objective function and give algorithms that solve a sequence of closely related QPs. Utilizing the simplex method, the solution to each QP is used to warm start the next one in the sequence, resulting in a small number of simplex iterations and fast solution times. Moreover, we show how to incorporate the proposed approach in a branch-and-bound algorithm, efficiently solving the continuous relaxations to optimality at each node and employing warm starts with the dual simplex method. Our computational experiments indicate that the proposed approach outperforms the state-of-the-art algorithms for convex as well as discrete cases.
The rest of the paper is organized as follows. In Section~\ref{sec:formulation} we give an alternative formulation for \ensuremath{\text{CO}} \ using the perspective function of the objective. In Section~\ref{sec:algorithms} we present coordinate descent and accelerated bisection algorithms that solve a sequence of QPs. In Section~\ref{sec:computational} we provide computational experiments, comparing the proposed methods with state-of-the-art barrier and other algorithms.
\section{Formulation}
\label{sec:formulation}
In this section we present a reformulation of \ensuremath{\text{CO}} \ using the perspective function of its objective.
Let $X=\left\{x\in\ensuremath{\mathbb{R}}^{n}:Ax=b, \ x \ge 0 \right\}$ be the feasible region of problem \ensuremath{\text{CO}}.
For convex quadratic $q(x) = x'Q x$, consider the function
$h:\ensuremath{\mathbb{R}}^{n+1}\to \ensuremath{\mathbb{R}}_+ \cup \{\infty\}$ defined as
$$h(x,t)=\begin{cases}\frac{x'Qx}{t} & \text{if }t>0,\\ 0 & \text{if }x'Qx = 0, t =0,\\ +\infty & \text{otherwise.}\end{cases}$$
Observe that
\begin{align*}
\nonumber
&\min \left\{c'x+\Omega\sqrt{x'Qx}: x \in X \right\}\\
\nonumber
=&\min\left\{c'x+\frac{\Omega}{2}h(x,t)+\frac{\Omega}{2}t : x \in X, \ t=\sqrt{x'Qx}\right\}\\
\geq & \ \zeta,
\end{align*}
where
\begin{align*}
(\ensuremath{\text{PO}}) \ \ \ \zeta = \min \left\{c'x+\frac{\Omega}{2}h(x,t)+\frac{\Omega}{2}t: x \in X, \ t\geq 0\right\}.
\end{align*}
\ignore{
The equality in \eqref{eq:redundant} holds since we are only introducing a redundant variable, in \eqref{eq:substitution} we are substituting in the objective, and the inequality in \ensuremath{\text{PO}} \ holds because we relax the non-convex constraint into a nonnegativity constraint. }
We will show that problems \ensuremath{\text{CO}} \ and \ensuremath{\text{PO}} \ have, in fact, the same optimal objective value and that there is a one-to-one correspondence between the optimal primal-dual pairs of both problems.
\begin{proposition}
\label{prop:convexity}
Problem \ensuremath{\text{PO}} \ is a convex optimization problem.
\end{proposition}
\begin{proof}
It suffices to observe that $h$ is the closure of the \emph{perspective function} $t q(x/t)$
of the convex quadratic function $q(x)$, and is therefore convex \citep[e.g.][p. 160]{book:HUL-conv}. Since all other objective terms and constraints of \ensuremath{\text{PO}} \ are linear, \ensuremath{\text{PO}} \ is a convex optimization problem.
\end{proof}
\begin{proposition}
\label{prop:equivalence}
Problems \ensuremath{\text{CO}} \ and \ensuremath{\text{PO}} \ are equivalent.
\end{proposition}
\begin{proof}
If $t >0$, the objective function of problem \ensuremath{\text{PO}} \ is continuous and differentiable, and since the feasible region is a polyhedron and the problem is convex, its KKT points are equivalent to its optimal solutions. The KKT conditions of \ensuremath{\text{PO}} \ are
\begin{align}
Ax&=b, \ x\geq 0, \ t\geq 0 \notag\\
\label{eq:KKT1}-c'-\frac{\Omega }{t}x'Q&=\lambda'A-\mu\\
\label{eq:KKT2}\frac{\Omega}{2t^2}x'Qx-\frac{\Omega}{2}&=0\\
\notag\mu&\geq 0\\
\notag\mu' x&=0,
\end{align}
where $\lambda$ and $\mu$ are the dual variables associated with constraints $Ax=b$ and $x\geq 0$, respectively. Note that $t>0$ and \eqref{eq:KKT2} imply that $t=\sqrt{x'Qx}$. Substituting $t=\sqrt{x'Qx}$ in \eqref{eq:KKT1}, one arrives at the equivalent conditions
\begin{align}
Ax&=b, \ x\geq 0\notag\\
\label{eq:KKT0}-c'-\frac{\Omega}{\sqrt{x'Qx}}x'Q&=\lambda'A-\mu\\
t&=\sqrt{x'Qx}\label{eq:notInteresting}\\
\mu&\geq 0\notag\\
\mu' x&=0\notag.
\end{align}
Ignoring the redundant variable $t$ and equation \eqref{eq:notInteresting}, we see that these are the KKT conditions of problem \ensuremath{\text{CO}}. Therefore, any optimal primal-dual pair for \ensuremath{\text{PO}} \ with $t>0$ is an optimal primal-dual pair for \ensuremath{\text{CO}}. Similarly, we see that any optimal primal-dual pair of problem \ensuremath{\text{CO}} \ with $x'Qx>0$ gives an optimal primal-dual pair of problem \ensuremath{\text{PO}} \ by setting $t=\sqrt{x'Qx}$. In both cases, the objective values match.
On the other hand, if $t=0$, then \ensuremath{\text{PO}} \ reduces to problem
\begin{equation*}
\label{eq:CP0}
\min_{x\in \ensuremath{\mathbb{R}}^{n}}\left\{c'x:Ax=b, x\geq 0,x'Qx=0\right\},
\end{equation*}
which corresponds to \ensuremath{\text{CO}} \ with $x'Qx = 0$, and hence they are equivalent.
\end{proof}
\ignore{
The objective function of problem \ensuremath{\text{PO}} is not differentiable when $t=0$ (and the objective function of problem \CP is not differentiable when $x'Qx=0$), and therefore there may be optimal solutions to both problems that are not KKT points. Using the convention that infeasible solutions correspond to an objective value of $\infty$, we see that when $t=0$ problem \ensuremath{\text{PO}} is equivalent to
\begin{equation*}
\label{eq:CP0}
\min\left\{c'x: x \in X, \ x'Qx=0\right\}.
\end{equation*}
Therefore we see that the set of feasible solutions of problem \ensuremath{\text{PO}} with $t=0$ is the same as the set of feasible solution of \CP with $x'Qx=0$, and that such solutions have the same objective value. Therefore, $(x,t)$ with $t=0$ is optimal for \ensuremath{\text{PO}} if and only if $x'Qx=0$ and $x$ is optimal for \CP. It follows that, in all cases, the set of optimal solutions of \CP and \ensuremath{\text{PO}} are essentially the same.
}
Since they are equivalent optimization problems, we can use \ensuremath{\text{PO}} \ to solve \ensuremath{\text{CO}}. In particular, we exploit the fact that, for a fixed value of $t$, \ensuremath{\text{PO}} \ reduces to a QP.
\section{Algorithms}
\label{sec:algorithms}
For simplicity, assume that $\ensuremath{\text{PO}}$ has an optimal solution; hence, $X$ is nonempty and may be assumed to be bounded.
Consider the one-dimensional optimal value function
\begin{equation}
\label{eq:oneDimensional}
g(t)=\min_{x\in X}c'x+\frac{\Omega}{2}h(x,t) +\frac{\Omega}{2}t \cdot
\end{equation}
As $X$ is nonempty and bounded, $g$ is real-valued and, by Proposition~\ref{prop:convexity}, it is convex.
Throughout, $x(t)$ denotes an optimal solution to \eqref{eq:oneDimensional}.
In this section we describe two algorithms for \ensuremath{\text{PO}} \ that utilize a QP oracle. The first is a coordinate descent approach;
the second is an accelerated bisection search
on the function $g$.
Finally,
we discuss how to exploit warm starts with the simplex method to solve convex as well as discrete cases.
\subsection{Coordinate descent algorithm}
\label{sec:coordinate}
Algorithm~\ref{alg:coordinateDescent} successively optimizes over $x$ for a fixed value of $t$, and then optimizes over $t$ for a fixed value of $x$. Observe that the optimization problem in line~\ref{line:QP} over $x$ is a QP, and the optimization in line~\ref{line:closedForm} over $t$ has a closed form solution: by simply setting the derivative to zero, we find that $t_{i+1}=\sqrt{{x_{i+1}}'Qx_{i+1}}$.
\begin{algorithm}[h]
\caption{Coordinate descent.}
\label{alg:coordinateDescent}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\Require $X \text{ polyhedron; }Q\text{ psd matrix; }c\text{ cost vector; } \Omega>0$
\Ensure Optimal solution $x^*$
\State \textbf{Initialize }$t_0 > 0$ \label{line:initt0} \Comment{e.g. $t_0=1$}
\State $i\leftarrow 0$ \Comment{iteration counter}
\Repeat
\State $x_{i+1}\leftarrow \argmin\limits_{x\in X}\left\{c'x+\frac{\Omega}{2t_i}x'Qx+\frac{\Omega}{2}t_{i}\right\}$\Comment{solve QP}\label{line:QP}
\State $t_{i+1}\leftarrow \argmin\limits_{t\geq 0}\left\{c'x_{i+1}+\frac{\Omega}{2t}{x_{i+1}}'Qx_{i+1}+\frac{\Omega}{2}t\right\}$\Comment{$t_{i+1}=\sqrt{{x_{i+1}}'Qx_{i+1}}$}\label{line:closedForm}
\State $i\leftarrow i+1$
\Until stopping condition is met \label{line:stoppingCriterion}
\State \Return $x_i$
\end{algorithmic}
\end{algorithm}
First observe that the sequence of objective values $\left\{c'x_i+\frac{\Omega}{2t_i}x_i'Qx_i+\frac{\Omega}{2}t_{i}\right\}_{i\in \ensuremath{\mathbb{N}}}$ is non-increasing. Moreover, the dual feasibility KKT conditions for the QPs in line \ref{line:QP} are of the form
\begin{equation}
\label{eq:QPKKT}
-c'-\frac{\Omega}{t_i}{x_{i+1}}'Q=\lambda'A-\mu.
\end{equation}
Let $\|\cdot\|$ be a norm and suppose that the QP oracle finds feasible primal-dual pairs with $\epsilon>0$ tolerance with respect to $\|\cdot\|$. In particular $x_{i+1}$ in line \ref{line:QP} violates \eqref{eq:QPKKT} by at most $\epsilon$, i.e.,
\begin{equation*}
\left\|-c'-\frac{\Omega}{t_i}{x_{i+1}}'Q-\lambda'A+\mu\right\|\leq \epsilon.
\end{equation*}
Proposition \ref{prop:convergence} below states that, at each iteration of Algorithm~\ref{alg:coordinateDescent}, we can bound the violation of the dual feasibility condition \eqref{eq:KKT0} corresponding to the original problem \ensuremath{\text{CO}}. The bound depends only on the precision of the QP oracle $\epsilon$, the relative change of $t$ in the last iteration $\frac{\Delta_i}{t_i}$, where $\Delta_i=t_{i+1}-t_i$, and the gradient of the function $f(x)= \Omega \sqrt{x'Qx}$ evaluated at the new point $x_{i+1}$.
\begin{proposition}[\textit{Dual feasibility bound}]
\label{prop:convergence}
A pair $(x_{i+1},t_{i+1})$ in Algorithm~\ref{alg:coordinateDescent} satisfies
$$\left\|-c'-\Omega\frac{{x_{i+1}}'Q}{\sqrt{{x_{i+1}}'Qx_{i+1}}}-\lambda'A+\mu\right\| \leq \epsilon+\frac{\left|\Delta_i\right|}{t_i}\cdot
\left\| \nabla f(x_{i+1})\right\|\cdot$$
\end{proposition}
\begin{proof}
\begin{align*}
&\left\|-c'-\Omega\frac{ {x_{i+1}}'Q}{\sqrt{{x_{i+1}}'Qx_{i+1}}}-\lambda'A+\mu\right\|\\
=&\left\|-c'-\Omega\frac{{x_{i+1}}'Q}{t_i+\Delta_i}-\lambda'A+\mu\right\|\\
=&\left\|-c'-\Omega\frac{{x_{i+1}}'Q}{t_i}-\Omega {x_{i+1}}'Q\left(\frac{1}{t_i+\Delta_i}-\frac{1}{t_i}\right)-\lambda'A+\mu\right\|\\
=&\left\|-c'-\Omega\frac{{x_{i+1}}'Q}{t_i}-\lambda'A+\mu+\Omega \left(\frac{\Delta_i}{t_i\cdot t_{i+1}}\right) {x_{i+1}}'Q \right\| \\
\leq& \epsilon +\left\| \Omega \frac{\Delta_i}{t_i} \cdot \frac{{x_{i+1}}'Q}{t_{i+1}}\right\|=\epsilon+ \Omega \frac{\left|\Delta_i\right|}{t_i}\cdot \left\| \frac{{x_{i+1}}'Q}{\sqrt{{x_{i+1}}'Qx_{i+1}}}\right\|.
\end{align*}
\end{proof}
Let $t^*$ be a minimizer of $g$ on $\ensuremath{\mathbb{R}}_+$.
We now show that the sequence of values of $t$ produced by Algorithm~\ref{alg:coordinateDescent},
$\left\{t_i\right\}_{i\in \ensuremath{\mathbb{N}}}$, is monotone and bounded by $t^*$.
\begin{proposition}[\textit{Monotonicity}]
\label{prop:monotonicity}
If $t_i\leq t^*$, then $t_{i+1}=\sqrt{{x_{i+1}}'Qx_{i+1}}$ satisfies $t_i\leq t_{i+1}\leq t^*$. Similarly, if $t_i\geq t^*$, then $t_i\geq t_{i+1}\geq t^*$.
\end{proposition}
\begin{proof}
If $t_i\leq t^*$, then $\frac{\Omega}{2t_i}\geq \frac{\Omega}{2t^*}$. It follows that $x_{i+1}=x(t_i)$ is a minimizer of an optimization problem with a larger coefficient for the quadratic term than $x^*=x(t^*)$, and therefore ${{x_{i+1}}'Qx_{i+1}}=t_{i+1}^2\leq {t^*}^2= {x^*}'Qx^*$, and $t_{i+1}\leq t^*$. Moreover, the inequality $t_i\leq t_{i+1}$ follows from the convexity of the one-dimensional function $g$,
the fact that $g$ is minimized at $t^*$, and the inequality $g(t_{i+1})\leq g(t_i)$.
The case $t_i\geq t^*$ is similar.
\end{proof}
Since the sequence $\left\{t_i\right\}_{i\in \ensuremath{\mathbb{N}}}$ is bounded and monotone, it converges to its supremum or infimum. In particular, $\left\{t_i\right\}_{i\in \ensuremath{\mathbb{N}}}$ is a Cauchy sequence, and
$\lim\limits_{i \to \infty} \Delta_i = 0$. Corollaries \ref{cor:KKTConvergence} and \ref{cor:0Convergence} below state that Algorithm~\ref{alg:coordinateDescent} converges to an optimal solution. The cases where there exists a KKT point for \ensuremath{\text{PO}} \ (i.e., there exists an optimal solution with $t^*>0$) and where there are no KKT points are handled separately.
\begin{corollary}[Convergence to a KKT point]
\label{cor:KKTConvergence}
If \ensuremath{\text{PO}} \ has a KKT point, then Algorithm~\ref{alg:coordinateDescent} converges to a KKT point.
\end{corollary}
\begin{proof}
By convexity, the set of optimal solutions to \eqref{eq:oneDimensional} is an interval, $[t_\ell,t_u]$. Since by assumption there exists a KKT point, we have that $t_u>0$. The proof is by cases, depending on the value of $t_0$ in line~\ref{line:initt0} of Algorithm~\ref{alg:coordinateDescent}.
\begin{description}
\item [Case $t_\ell\leq t_0\leq t_u$] Since $t_0$ is optimal, we have by Proposition~\ref{prop:monotonicity} that $t_1=t_0$, and inductively $t_i=t_0$ for all $i$. Since $\Delta_i=0$ and $t_i=\sqrt{{x_{i+1}}'Qx_{i+1}}=t_0>0$, we have that $\left\| \nabla f(x_{i+1})\right\|<\infty$ in Proposition~\ref{prop:convergence}, and $\frac{\left|\Delta_i\right|}{t_i}\cdot
\left\| \nabla f(x_{i+1})\right\|=0$.
\item [Case $t_0< t_\ell$]We have by Proposition~\ref{prop:monotonicity} that for all $i\in \ensuremath{\mathbb{N}}$, $t_i=\sqrt{x_i'Qx_i}\geq t_0>0$. Therefore, there exists a number $M$ such that $\frac{1}{t_i}\left\| \nabla f(x_{i+1})\right\|<M$ for all $i\in \ensuremath{\mathbb{N}}$, and we find that $\frac{\left|\Delta_i\right|}{t_i}\cdot
\left\| \nabla f(x_{i+1})\right\|\xrightarrow{\Delta_i\to 0} 0$.
\item [Case $t_0> t_u$]We have by Proposition~\ref{prop:monotonicity} that for all $i\in \ensuremath{\mathbb{N}}$, $t_i=\sqrt{x_i'Qx_i}\geq t_u>0$. Therefore, there exists a number $M$ such that $\frac{1}{t_i}\left\| \nabla f(x_{i+1})\right\|<M$ for all $i\in \ensuremath{\mathbb{N}}$, and we find that $\frac{\left|\Delta_i\right|}{t_i}\cdot
\left\| \nabla f(x_{i+1})\right\|\xrightarrow{\Delta_i\to 0} 0$.
\end{description}
Therefore, in all cases, Algorithm~\ref{alg:coordinateDescent} converges to a KKT point by Proposition~\ref{prop:convergence}.
\end{proof}
\begin{corollary}[Convergence to $0$]
\label{cor:0Convergence}
If $t^*=0$ is the unique optimal solution to $\min \{g(t): t \in \ensuremath{\mathbb{R}}_+\}$, then for any $\xi>0$ Algorithm~\ref{alg:coordinateDescent} finds a solution $(\bar{x},\bar{t})$, where $\bar{t}<\xi$ and $\bar{x}\in \argmin\left\{c'x:\sqrt{x'Qx}=\bar{t}, x\in X\right\}$.
\end{corollary}
\begin{proof}
The sequence $\left\{t_i\right\}_{i\in \ensuremath{\mathbb{N}}}$ converges to $0$ (otherwise, by Corollary~\ref{cor:KKTConvergence}, it would converge to a KKT point). Thus, $\lim_{i\to\infty}\sqrt{x_i'Qx_i}=0$ and all points obtained in line~\ref{line:QP} of Algorithm~\ref{alg:coordinateDescent} satisfy $x_{i+1}\in \argmin\left\{c'x:\sqrt{x'Qx}=t_{i+1}, x\in X\right\}$.
\end{proof}
\ignore{
\begin{remark}
From Proposition~\ref{prop:convergence} we see that optimal primal-dual pairs of \CP correspond to the optimal primal-dual pairs of the QP \eqref{eq:oneDimensional} at $t^*$.
\end{remark}
}
We now discuss how to initialize and terminate Algorithm~\ref{alg:coordinateDescent}, corresponding to lines \ref{line:initt0} and \ref{line:stoppingCriterion}, respectively.
\subsubsection*{Initialization.}
The algorithm may be initialized by an arbitrary $t_0 > 0$.
Nevertheless, when a good initial guess on the value of $t^*$ is available, $t_0$ should be set to that value.
Moreover, observe that setting $t_0=\infty$ results in a fast computation of $x_1$ by solving an LP.
\subsubsection*{Stopping condition.}
Proposition~\ref{prop:convergence} suggests a good stopping condition for Algorithm~\ref{alg:coordinateDescent}. Given a desired dual feasibility tolerance of $\delta>\epsilon$, we can stop when $\epsilon + \frac{\left|\Delta_i\right|}{t_i}\cdot \left\| \nabla f(x_{i+1}) \right\|<\delta$. Alternatively, if
$\exists k \text{ s.t. } \max_{x \in X} \left\| \nabla f(x) \right\| \le k < \infty$, then the simpler $\left|\frac{\Delta_i}{t_i}\right|\leq \frac{\delta-\epsilon}{k}$ is another stopping condition. For instance,
a crude upper bound on $\left\| \nabla f(x) \right\| = \Omega\left\| \frac{{x}'Q}{\sqrt{{x}'Qx}}\right\|$ can be found by maximizing/minimizing the numerator $x'Q$ over $X$ and minimizing $x'Qx$ over $X$. The latter minimization is guaranteed to have a nonzero optimal value if $0 \not \in X$ and $Q$ is positive definite.
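As an illustration, Algorithm~\ref{alg:coordinateDescent} can be sketched on a toy instance where the QP oracle is available in closed form. The instance below (box feasible region $[0,1]^n$ and diagonal $Q$) is an assumption made for the example only; in the setting of this paper the oracle would be a simplex-based QP solver.

```python
import numpy as np

# Toy instance (illustrative, not from the paper): X = [0,1]^n, Q = diag(q).
# For fixed t, the QP of line 4 separates by coordinate and is solved in
# closed form by x_i = clip(-c_i * t / (Omega * q_i), 0, 1); this closed
# form stands in for the simplex-based QP oracle.
c = np.array([-1.0, -2.0, 0.5])
q = np.array([1.0, 4.0, 2.0])
Omega = 0.5

def qp_oracle(t):
    """argmin over [0,1]^n of c'x + (Omega/(2t)) x'Qx, for diagonal Q."""
    return np.clip(-c * t / (Omega * q), 0.0, 1.0)

t = 1.0                                   # t_0 > 0, e.g. t_0 = 1
for _ in range(100):
    x = qp_oracle(t)                      # line 4: solve the QP for fixed t
    t_new = np.sqrt(x @ (q * x))          # line 5: t_{i+1} = sqrt(x'Qx)
    if abs(t_new - t) <= 1e-10 * max(t, 1.0):   # stop on small |Delta_i|/t_i
        t = t_new
        break
    t = t_new

objective = c @ x + Omega * np.sqrt(x @ (q * x))
```

On this instance the iteration reaches the fixed point $x=(1,1,0)$, $t=\sqrt{5}$ after two oracle calls, matching the KKT conditions of \ensuremath{\text{CO}}.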
\ignore{
\begin{remark}
We provide some intuition for Proposition~\ref{prop:convergence}. Recall that $t_i=\sqrt{x_i'Qx}$, and so we can write (with an abuse of notation) that the gradient of $t$ at $x$ is $\frac{\partial t}{\partial x}(x_i)=\frac{{x_i}'Q}{\sqrt{{x_i}'Qx_i}}$. A natural estimator of the future change of $t$ is the rate of change of $t$ at the current point, given by $\Omega\frac{\partial t}{\partial x}(x_{i+1})$, times the relative change in the previous iteration, $\frac{\Delta_i}{t_i}$. According to Proposition~\ref{prop:convergence}, the natural estimator gives a bound on the violation of KKT condition \eqref{eq:KKT0} at the current point.
\end{remark}
}
\subsection{Bisection algorithm}
\label{sec:bisection}
Algorithm~\ref{alg:bisection} is an accelerated bisection approach to solve \ensuremath{\text{PO}}. The algorithm maintains lower and upper bounds, $t_{\min}$ and $t_{\max}$, on $t^*$ and, at each iteration, reduces the interval $[t_{\min}, t_{\max}]$ by at least half. The algorithm differs from the traditional bisection search algorithm in lines \ref{line:iBisection10}--\ref{line:iBisection3}, where it uses an acceleration step to reduce the interval by a larger amount:
by Proposition~\ref{prop:monotonicity},
if $t_0\leq t_1$ (line \ref{line:iBisection10}), then $t_0\leq t_1\leq t^*$, and therefore $t_1$ is a larger lower bound on $t^*$ (line \ref{line:iBisection11}); similarly, if $t_0\geq t_1$, then $t_1$ is a smaller upper bound on $t^*$ (lines \ref{line:iBisection20} and \ref{line:iBisection21}). Intuitively, the algorithm takes a ``coordinate descent" step as in Algorithm~\ref{alg:coordinateDescent} after each bisection step. Preliminary computations show that the acceleration step reduces
the number of steps as well as the overall solution time for the bisection algorithm by about 50\%.
\begin{algorithm}[h]
\caption{Accelerated bisection.}
\label{alg:bisection}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\Require $X \text{ polyhedron; }Q\text{ psd matrix; }c\text{ cost vector; } \Omega>0$
\Ensure Optimal solution $x^*$
\State \textbf{Initialize }$t_{\min}$ and $t_{\max}$ \Comment{ensure $t_{\min}\leq t^* \leq t_{\max}$}\label{line:initTs}
\State $\hat{z}\leftarrow \infty$ \Comment{best objective value found}
\Repeat
\State $t_0\leftarrow \frac{t_{\min}+t_{\max}}{2}$
\State $x_0\leftarrow \argmin\limits_{x\in X}\left\{c'x+\frac{\Omega}{2t_0}x'Qx+\frac{\Omega}{2}t_{0}\right\}$\Comment{solve QP}\label{line:updateX}
\State $t_{1}\leftarrow \sqrt{{x_{0}}'Qx_{0}}$
\If{$t_0 \leq t_1$}\label{line:iBisection10} \Comment{accelerate bisection}
\State $t_{\min}\leftarrow t_1$\label{line:iBisection11}
\Else\label{line:iBisection20}
\State $t_{\max}\leftarrow t_1$\label{line:iBisection21}
\EndIf \label{line:iBisection3}
\If{$c'x_0+\Omega\sqrt{{x_0}'Qx_0}\leq \hat{z}$} \Comment{update the incumbent solution}
\State $\hat{z}\leftarrow c'x_0+\Omega\sqrt{{x_0}'Qx_0}$
\State $\hat{x}\leftarrow x_0$
\EndIf
\Until stopping condition is met \label{line:stoppingCriterion2}
\State \Return $\hat{x}$
\end{algorithmic}
\end{algorithm}
\subsubsection*{Initialization.}
In line~\ref{line:initTs}, $t_{\min}$ can be initialized to zero and $t_{\max}$ to $\sqrt{{x_{LP}}'Qx_{LP}}$, where $x_{LP}$ is an optimal solution to the LP relaxation
$\min_{x\in X}c'x$.
\subsubsection*{Stopping condition.}
There are different possibilities for the stopping criterion in line \ref{line:stoppingCriterion2}. Note that if we have numbers $t_m$ and $t_M$ such that $t_m \leq t^* \leq t_M$, then $c'x(t_M)+\Omega\sqrt{{x(t_m)}'Qx(t_m)}$ is a lower bound on the optimal objective value $c'x^*+\Omega\sqrt{{x^*}'Qx^*}$. Therefore, in line~\ref{line:updateX}, a lower bound $z_l$ on the objective function can be computed, and the algorithm can be stopped when the gap between $\hat{z}$ and $z_l$ is smaller than a given threshold. Alternatively, stopping when $\frac{\left|t_1-t_0\right|}{t_0}\cdot \Omega\left\| \frac{{x_{0}}'Q}{\sqrt{{x_{0}}'Qx_{0}}}\right\|<\delta-\epsilon$ provides a guarantee on the dual infeasibility as in Proposition~\ref{prop:convergence}.
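For illustration, the accelerated bisection of Algorithm~\ref{alg:bisection} can also be sketched on a toy instance with a box feasible region, diagonal $Q$, and a closed-form QP oracle; these choices are assumptions made for the example, not the implementation used in the paper.

```python
import numpy as np

# Hypothetical toy instance: X = [0,1]^n, Q = diag(q); the closed-form box QP
# stands in for the simplex-based oracle of line 5 of Algorithm 2.
c = np.array([-1.0, -2.0, 0.5])
q = np.array([1.0, 4.0, 2.0])
Omega = 0.5

def qp_oracle(t):
    return np.clip(-c * t / (Omega * q), 0.0, 1.0)

x_lp = np.where(c < 0, 1.0, 0.0)                 # LP relaxation: min c'x over the box
t_min, t_max = 0.0, np.sqrt(x_lp @ (q * x_lp))   # initial bracket for t*
z_hat, x_hat = np.inf, None
while t_max - t_min > 1e-10:
    t0 = 0.5 * (t_min + t_max)                   # bisection step
    x0 = qp_oracle(t0)
    t1 = np.sqrt(x0 @ (q * x0))
    if t0 <= t1:
        t_min = t1                               # acceleration: larger lower bound
    else:
        t_max = t1                               # acceleration: smaller upper bound
    z0 = c @ x0 + Omega * np.sqrt(x0 @ (q * x0))
    if z0 <= z_hat:                              # update the incumbent solution
        z_hat, x_hat = z0, x0
```

On this instance the acceleration step collapses the bracket in a single iteration, returning the incumbent $\hat{x}=(1,1,0)$ with $\hat{z}=-3+\Omega\sqrt{5}$.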
\subsection{Warm starts}
\label{sec:warmStarts}
Although any QP solver can be used to run the coordinate descent and bisection algorithms described in Sections \ref{sec:coordinate} and \ref{sec:bisection}, simplex methods for QP are particularly effective
as they allow warm starts after small changes in the model parameters in iterative applications. This is the main motivation for the QP-based algorithms presented above.
\subsubsection{Warm starts with primal simplex for convex optimization}
\label{sec:warmStartPrimal}
All QPs solved in Algorithms~\ref{alg:coordinateDescent}--\ref{alg:bisection} have the same feasible region and only the objective function changes in each iteration. Therefore, an optimal basis for a QP is primal feasible for the next QP solved in the sequence, and can be used to warm start a primal simplex QP solver.
\subsubsection{Warm starts with dual simplex for discrete optimization}
When solving the discrete counterparts of \ensuremath{\text{CO}} \ with a branch-and-bound algorithm,
one is particularly interested in utilizing warm starts when solving the convex relaxations at the nodes of the search tree. In a branch-and-bound algorithm, child nodes typically have a single additional bound constraint compared to the parent node.
For this purpose, it is also possible to warm start Algorithm~\ref{alg:coordinateDescent} from a dual feasible basis.
Let $(x^*,t^*)$ be an optimal solution to \ensuremath{\text{PO}} \ and $B^*$ be an optimal basis. Consider a new problem
\begin{equation}
\label{eq:dualFeasible}
\min \left\{c'x+\frac{\Omega}{2t}x'Qx+\frac{\Omega}{2}t: x \in \bar X, \ t \ge 0\right\},
\end{equation}
where the feasible set $\bar{X}$ is obtained from $X$ by adding new constraints.
Note that $B^*$ is a dual feasible basis for \eqref{eq:dualFeasible} when $t = t^*$. Therefore,
Algorithm~\ref{alg:coordinateDescent} to solve problem \eqref{eq:dualFeasible} can be warm started
by initializing $t_0=t^*$ and using $B^*$ as the initial basis to compute $x_1$ with a dual simplex algorithm.
The subsequent QPs can be solved using the primal simplex algorithm as noted in Section~\ref{sec:warmStartPrimal}.
\ignore{
In typical branch-and-bound algorithms for MILPs and MIQPs, the optimal basis found at each node is then used to warm start the continuous solver in the children nodes. A child node typically has a single additional bound constraint. To extend the branch-and-bound algorithms to MICPs, it is sufficient to define the basis as the pair $(B^*,t^*)$ described in the previous paragraph, and use Algorithm~\ref{alg:coordinateDescent} as the continuous solver.
}
\ignore{
\subsection{Unbounded case}
\label{sec:unbounded}
In many cases it is possible to determine that problem \CP \ is bounded \textit{a priori} (e.g., $X$ is a polytope, or $c\geq 0$). We now discuss how to detect whether problem \CP is bounded or not when there is not a simple guarantee.
First, note that if the LP relaxation \eqref{eq:LPrelaxation} is bounded then problem \CP is bounded. Moreover, as Proposition~\ref{prop:unbounded} states, if any of the QPs is unbounded then problem \CP is unbounded.
\begin{proposition}
\label{prop:unbounded}
If $g(t)=-\infty$ for any fixed $t\geq 0$, then problem \CP is unbounded.
\end{proposition}
\begin{proof}
If $g(t)=-\infty$, then there exists a sequence of feasible points $\left\{x_i\right\}_{i\in \ensuremath{\mathbb{N}}}$ such that
\begin{align*}
&\lim_{i\to \infty }c'x_i+\frac{\Omega}{2t}{x_i}'Qx_i=-\infty\\
\implies&\lim_{i\to \infty }c'x_i+\max\left\{\frac{\Omega}{2t}{x_i}'Qx_i,2t\right\}=-\infty.
\end{align*}
Since $c'x_i+\Omega\sqrt{{x_i}'Qx_i}\leq c'x_i+\max\left\{\frac{\Omega}{2t}{x_i}'Qx_i,2t\right\}$, we have that the sequence $\left\{x_i\right\}_{i\in \ensuremath{\mathbb{N}}}$ is also an unbounded sequence for problem \CP.
\end{proof}
Unfortunately, as Example~\ref{ex:unbounded} shows, it is possible that $g(t)>-\infty$ for all $t$ and that problem \CP is unbounded. In this case we have that $\lim\limits_{t\to \infty}g(t)=-\infty$.
\begin{example}
\label{ex:unbounded}
Consider the one-dimensional unconstrained problem $$\min_{x\in \ensuremath{\mathbb{R}}} x+\Omega\left|x\right|,$$
which is unbounded for $\Omega<1$. In this case we have $$g(t)=\min_{x\in \ensuremath{\mathbb{R}}}\left(x+\frac{\Omega}{2t} x^2+\frac{\Omega}{2}t\right)=t\left(\frac{\Omega^2-1}{2\Omega}\right),$$ which is bounded for all $\Omega> 0$. Nevertheless, we see that when $\Omega<1$ we have that $\lim\limits_{t\to \infty}g(t)=-\infty$.
\end{example}
We now summarize a process for instances that may be unbounded. We first check for easy certificates of boundedness or unboundedness. In case we are unable to verify whether the problem is bounded or not, we run Algorithm~\ref{alg:coordinateDescent} until a feasible solution with a sufficiently low objective value is found.
\begin{description}
\item[Step 1] Solve the LP relaxation \eqref{eq:LPrelaxation}. If it is bounded, then problem \ensuremath{\text{CO}} is bounded and can be solved using Algorithms\footnote{Note that Algorithm~\ref{alg:bisection} requires solving the LP in any case. Moreover, Algorithm~\ref{alg:coordinateDescent} can be warm started from the LP optimal solution.}~\ref{alg:coordinateDescent} or \ref{alg:bisection}. Otherwise go to Step 2.
\item[Step 2] Initialize $t$, and compute $g(t)$. If $g(t)=-\infty$, then problem \CP is unbounded. Otherwise go to Step 3.
\item[Step 3] Choose a lower bound $m$. Use Algorithm~\ref{alg:coordinateDescent} until convergence (in which case the solution found is optimal) or until a feasible solution is found such that the objective value is less than $m$.
\end{description}
}
\section{Computational experiments}
\label{sec:computational}
In this section we report on computational experiments on solving the convex problem \ensuremath{\text{CO}} \ and its discrete counterpart CDO \ with the algorithms described in Section~\ref{sec:algorithms}. The algorithms are implemented using the CPLEX Java API. We use the simplex and barrier solvers of CPLEX version 12.6.2 for the computational experiments. All experiments are conducted on a workstation with a 2.93GHz Intel\textregistered\ Core\textsuperscript{TM} i7 CPU and 8 GB main memory using a single thread.
\subsection{Test problems} We test the algorithms on two types of data sets. For the first set the feasible region is described by a cardinality constraint and bounds, i.e., $X=\left\{x\in\ensuremath{\mathbb{R}}^{n}:\sum_{i=1}^n x_i= b,\;
\ensuremath{\textbf{0}} \leq x \leq \ensuremath{\textbf{1}} \right\}$ with $b = n/5$. For the second data set the feasible region consists of the
path polytope of an acyclic grid network. For discrete optimization problems we additionally enforce the binary restrictions $x\in \ensuremath{\mathbb{B}}^n$.
\ignore{
\subsubsection{Feasible regions} We consider two classes of feasible regions:
\begin{description}
\item[Cardinality instances] The feasible region consists of a single cardinality constraint and bound constraints, i.e. $$X=\left\{x\in\ensuremath{\mathbb{R}}^{n}:\sum_{i=1}^n x_i= b,\; 0\leq x_i\leq 1 \;\forall i=1,\ldots,n\right\}.$$ In the computational experiments, we set $b=n/5$.
\item[Path instances] The feasible region consists of the path polytope in acyclic grid networks.
\end{description}
We limit our computational experiments to integral polytopes because our branch-and-bound algorithm does not use cutting planes and would not be effective for non-integral polytopes. Note that the lack of cutting planes is a limitation only of our branch-and-bound algorithm, and that Algorithm~\ref{alg:coordinateDescent} could be used in branch and cut approaches.
}
For both data sets the objective function $q(x) = c'x + \Omega \sqrt{x'Qx}$ is generated as follows:
Given a rank parameter $r$ and density parameter $\alpha$, $Q$ is the sum of a low rank factor matrix and a full rank diagonal matrix; that is, $Q=F\Sigma F'+D$, where
\begin{itemize}
\item $D$ is an $n\times n$ diagonal matrix with entries drawn from Uniform$(0,1)$.
\item $\Sigma=HH'$ where $H$ is an $r\times r$ matrix with entries drawn from Uniform$(-1,1)$.
\item $F$ is an $n\times r$ matrix in which each entry is $0$ with probability $1-\alpha$ and
drawn from Uniform$(-1,1)$ with probability $\alpha$.
\end{itemize}
Each linear coefficient $c_i$ is drawn from Uniform$(-2\sqrt{Q_{ii}},0)$.
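The generation procedure above can be transcribed directly into a NumPy sketch (the seed and the dimensions below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed for reproducibility

def generate_instance(n, r, alpha, rng):
    """Q = F Sigma F' + D as described above; c_i ~ Uniform(-2 sqrt(Q_ii), 0)."""
    D = np.diag(rng.uniform(0.0, 1.0, n))           # full-rank diagonal part
    H = rng.uniform(-1.0, 1.0, (r, r))
    Sigma = H @ H.T                                  # r x r PSD factor
    F = rng.uniform(-1.0, 1.0, (n, r)) * (rng.random((n, r)) < alpha)
    Q = F @ Sigma @ F.T + D                          # low rank + diagonal
    c = rng.uniform(-2.0 * np.sqrt(np.diag(Q)), 0.0)
    return Q, c

Q, c = generate_instance(n=100, r=20, alpha=0.1, rng=rng)
```

By construction $Q$ is positive definite (a positive semidefinite low-rank term plus a positive diagonal), so the objective is a valid conic quadratic function.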
\ignore{Therefore if the objective function is interpreted as the value-at-risk of normally distributed random variables, then we have that on average the expected return of each variable is proportional to its standard deviation (and risky variables have thus better expected returns).}
\subsection{Experiments with convex problems}
\label{sec:resultsContinuous}
In this section we present the computational results for convex instances. We compare the following algorithms:
\begin{description}
\item [ALG1] Algorithm~\ref{alg:coordinateDescent}.
\item [ALG2] Algorithm~\ref{alg:bisection}.
\item [BAR] CPLEX' barrier algorithm (the default solver for convex conic quadratic problems).
\end{description}
For algorithms ALG1 and ALG2 we use CPLEX' primal simplex algorithm as the QP solver.\ignore{, and the stopping condition $\frac{\left|\Delta_i\right|}{t}\leq 10^{-5}$
unless specified otherwise.}
\ignore{
Specifically, we present three sets of computational results. First in Section~\ref{sec:resultsContinuousQ} we study the effects of changing the $Q$ matrix, and we are primarily concerned with comparing between the simplex-based algorithms. Then in Section~\ref{sec:resultsContinuousDimension} we study the effects of changing the dimension (for a fixed structure of the $Q$ matrix), and we are primarily concerned with comparing the performance of the barrier algorithm and the simplex-based algorithms. Finally in Section~\ref{sec:resultsContinuousTolerance} we study the effects of changing the tolerance (for a fixed dimension and structure of the $Q$ matrix), and compare the barrier algorithm with a simplex-based algorithm.
}
\subsubsection*{Optimality tolerance}
As the speed of interior point methods crucially depends on the chosen optimality tolerance, it is prudent to first compare the speed versus the quality of the solutions for the algorithms tested. Here we study the impact of the optimality tolerance on the solution time and the quality of the solutions for CPLEX' barrier algorithm BAR and the simplex QP-based algorithm ALG1. The optimality tolerance of the barrier algorithm is controlled by the QCP convergence tolerance parameter (``BarQCPEpComp''), and that of Algorithm~\ref{alg:coordinateDescent} by the stopping condition $\frac{\left|\Delta_i\right|}{t}\leq \delta$.
In both cases, a smaller optimality tolerance corresponds to a higher quality solution. We evaluate the quality of a solution as $\texttt{optgap}=\left|(z_{\min} -z)/z_{\min}\right|,$
where $z$ is the objective value of the solution found by an algorithm with a given tolerance parameter and $z_{\min}$ is the objective value of the solution found by the barrier algorithm with tolerance $10^{-12}$ (minimum tolerance value allowed by CPLEX).
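For concreteness, the quality measure can be written as a one-line function (a sketch; here `z_min` is the reference objective value from the barrier run at tolerance $10^{-12}$):

```python
def optgap(z, z_min):
    """Relative optimality gap of objective value z against the
    reference value z_min (barrier solution at tolerance 1e-12)."""
    return abs((z_min - z) / z_min)
```

For example, a solution with objective $-99$ against a reference of $-100$ has a gap of $0.01$.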
Table~\ref{tab:tolerance} presents the results for different tolerance values
for a $30\times 30$ convex grid instance with $r=200$, $\alpha=0.1$, and $\Omega=1$.
The table shows, for each algorithm and for varying tolerance values, the quality of the solution, the solution time in seconds, the number of iterations, and the number of QPs solved (for ALG1). We highlight in bold the default tolerance used for the rest of the experiments
presented in the paper. The tolerance value $10^{-7}$ for the barrier algorithm corresponds to the default parameter in CPLEX.
\input{tolerance.tex}
First observe that the solution time increases with reduced optimality tolerance for both algorithms. With lower tolerance, while the barrier algorithm performs more iterations, ALG1 solves more QPs; however, the total number of simplex iterations barely increases. For ALG1 the changes in the value of $t$ are very small between QPs, and the optimal bases of the QPs are thus the same. Therefore, using warm starts, the simplex method is able to find high precision solutions inexpensively.
ALG1 achieves much higher precision an order of magnitude faster than the barrier algorithm.
For the default tolerance parameters used in our computational experiments, Algorithm~\ref{alg:coordinateDescent} is several orders of magnitude more precise than the barrier algorithm.
\ignore{
In most settings Algorithm~\ref{alg:coordinateDescent} is more precise than the barrier algorithm with a very low tolerance parameter ($10^{-11}$). Moreover we see that to achieve high precisions the simplex methods require solving more QPs, but the number of simplex iterations does not increase: the changes in the value of $t$ are very small between QPs, and the optimal bases of the QPs are thus the same. Therefore, using warm starts, the simplex methods are able to find high precision solutions inexpensively.
}
\subsubsection*{Effect of the nonlinearity parameter $\Omega$.}
We now study the effect of changing the nonlinearity parameter $\Omega$.
Tables \ref{tab:contCard1000} and \ref{tab:contGrid30} show
the total solution time in seconds, the total number of simplex or barrier iterations, and the number of QPs solved for the cardinality
instances (1,000 variables) and the path instances (1,740 variables), respectively.
Each row represents the average over five instances for each rank ($r$) and density ($\alpha$) configuration and each algorithm.
For each parameter choice the fastest algorithm is highlighted in bold.
\input{continuousCard1000.tex}
\input{continuousGrid30.tex}
First observe that in both data sets the barrier algorithm is the slowest: it is between 3.5 and 6 times slower than the simplex QP-based methods for the cardinality instances, and up to 15 times slower for the path instances. The barrier algorithm does not appear to be very sensitive to the nonlinearity parameter $\Omega$, whereas the simplex QP-based methods are faster for smaller $\Omega$.
\ignore{
With respect to the simplex-based algorithms, we observe that ALG1-1 is slower than ALG1-2 and ALG2, and ALG1-2 and ALG2 perform similarly. Recall that both ALG1-2 and ALG2 start by solving an LP while ALG1-1 initially solves a QP. We conclude from the performance of ALG1-1 and ALG1-2 that the initialization step of Algorithm~\ref{alg:coordinateDescent} is critical for the overall performance. Algorithm~\ref{alg:bisection}, on the other hand, does not depend on a initialization step and performs well in practice.
}
The number of simplex iterations in ALG1 increases with the nonlinearity parameter $\Omega$. Indeed, the initial problem solved by ALG1 is an LP (corresponding to $\Omega=0$), so as $\Omega$ increases the initial problem becomes a worse approximation, and more work is needed to converge to an optimal solution.
Also note that Algorithm~\ref{alg:bisection} requires fewer QPs to be solved, but as a result it benefits less from warm starts (it requires more simplex iterations per QP than ALG1). Indeed, in ALG2 the value of $t$ changes by a larger amount at each iteration (compared to ALG1), so the objective function of two consecutive QPs changes by a larger amount.
\subsubsection*{Effect of the dimension}
Table \ref{tab:contCardSizes} presents a comparison of the algorithms for the convex cardinality instances with sizes 400, 800, 1600, and 3200. Each row represents the average over five instances, as before, generated with parameters $r=200$, $\alpha=0.1$, and $\Omega=2$.
Additionally, Figure~\ref{fig:improvement} shows the solution time for each algorithm and the speed-up factor of the simplex QP-based algorithms compared to the barrier algorithm as a function of the dimension ($n$).%
\input{continuousCardSizes.tex}
\begin{figure}[h!]
\centering
\begin{subfigure}{0.9\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{ContinuousTime.pdf}
\label{fig:timeDimension}
\caption{Solution time as a function of dimension.}
\end{subfigure}
\begin{subfigure}{0.9\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{ContinuousFactor.pdf}
\caption{Speed-up as a function of dimension.}
\label{fig:factorDimension}
\end{subfigure}
\caption{Barrier vs the simplex QP-based algorithms.}
\label{fig:improvement}
\end{figure}
Observe in Table \ref{tab:contCardSizes} that the number of QPs solved with the simplex-based algorithms does not depend on the dimension. The number of simplex iterations, however, increases with the dimension. For $n=400$ all algorithms perform similarly and the problems are solved very fast. However, as the dimension increases, the simplex-based algorithms outperform the barrier algorithm, often by many factors. For $n=3200$, the fastest simplex-based algorithm ALG2 is more than 20 times faster than the barrier algorithm. Similar results are obtained for other parameter choices and for the path instances as well. In summary, the simplex-based algorithms scale better with the dimension, and are faster by orders of magnitude for large instances.
\subsection{Discrete instances}
In this section we describe our experiments with the discrete counterpart CDO.
As of version 12.6.2 of CPLEX, it is not possible to employ a user-defined convex solver such as Algorithm~\ref{alg:coordinateDescent} at the nodes of CPLEX' branch-and-bound algorithm. Therefore, in order to test the proposed approach for CDO, we implement the rudimentary branch-and-bound algorithm described in Appendix~\ref{sec:branchAndBound}. The algorithm uses a maximum infeasibility rule for branching, and does not employ presolve, cutting planes, or heuristics. We test the following configurations:
\begin{description}
\item [BBA1] Branch-and-bound algorithm in Appendix~\ref{sec:branchAndBound} using Algorithm~\ref{alg:coordinateDescent} as the convex solver.
The first QP at each node (except the root node) is solved with CPLEX dual simplex method using the parent dual feasible basis as a warm start (as mentioned in Section~\ref{sec:warmStarts}) and all other QPs are solved with CPLEX primal simplex method using the basis from the parent node QP as a warm start.
\item [BBBR] Branch-and-bound algorithm in Appendix~\ref{sec:branchAndBound}, using CPLEX barrier algorithm as the convex solver. This configuration does not use warm starts.
\item [CXBR] CPLEX branch-and-bound algorithm with barrier solver, setting the branching rule to maximum infeasibility, the node selection rule to best bound, and disabling presolve, cuts and heuristics. In this setting CPLEX branch-and-bound algorithm is as close as possible to our branch-and-bound algorithm.
\item [CXLP] CPLEX branch-and-bound algorithm with LP outer approximations, setting the branching rule to maximum infeasibility, the node selection rule to best bound, and disabling presolve, cuts and heuristics. In this setting CPLEX branch-and-bound algorithm is as close as possible to our branch-and-bound algorithm.
\item [CXLPE] CPLEX branch-and-bound algorithm with LP outer approximations, setting the branching rule to maximum infeasibility, the node selection rule to best bound, and disabling cuts and heuristics. Since presolve is activated, CPLEX uses the extended formulations described in \cite{Vielma2015}. Besides presolve, all other parameters are set as in CXLP.
\item [CXD] CPLEX default branch-and-bound algorithm with LP outer approximations.
\end{description}
In all cases the time limit is set to two hours.
Table \ref{tab:discCard200} presents the results for the discrete cardinality instances with 200 variables and Table~\ref{tab:discGrid30} for the discrete path instances with 1,740 variables ($30\times 30$ grid). Each row represents the average over five instances for each rank and density parameter configuration and each algorithm. The tables show, for varying values of $\Omega$, the solution time in seconds, the number of nodes explored in the branch-and-bound tree, the end gap after two hours as a percentage, and the number of instances solved to optimality. For each instance class we highlight in bold the algorithm with the best performance.
\input{discreteCard200.tex}
\input{discreteGrid30.tex}
\ignore{
We first give some general comments in Section~\ref{sec:discGeneralComments}, then in Section~\ref{sec:performanceBBA1} we comment on the performance of configuration BBA1, and finally in Section~\ref{sec:discWarmStarts} we study the impact of warm starts.
}
First of all, observe that the difficulty of the instances increases considerably for higher values of $\Omega$ due to the larger integrality gap. The problems corresponding to high values of the density parameter $\alpha$ are also more challenging.
\subsubsection*{Performance of CPLEX branch-and-bound}
Among the CPLEX branch-and-bound algorithms, CXD is the best choice when $\Omega\geq 2$. Configuration CXD is much more sophisticated than the other configurations, so a better performance is expected. However, note that for $\Omega=1$ configuration CXD is not necessarily the best. In particular, for the path instances (Table~\ref{tab:discGrid30}) CXLP and CXLPE are 2.3 times faster than CXD. This result suggests that for simple instances the additional features used by CXD (e.g., cutting planes and heuristics) may hurt performance.
The extended formulations result in much stronger relaxations in LP based branch-and-bound and, consequently, the number of branch-and-bound nodes required with CXLPE is only a small fraction of the number of nodes required with CXLP. However, CXLPE requires more time to solve each branch-and-bound node, due to the higher number of variables and the additional effort needed to refine the LP outer approximations. For the cardinality instances, CXLPE is definitely the better choice and is faster by orders of magnitude. For the path instances, however, CXLP is not necessarily inferior: when $\Omega=1$ CXLP is competitive with CXLPE, and when $\Omega=3$ CXLP performs better.
The barrier-based branch-and-bound CXBR, in general, performs poorly. For the cardinality instances, it outperforms CXLP but is slower than the other algorithms. For the path instances it has the worst performance, often struggling to find even a single feasible solution (resulting in infinite end gaps).
\subsubsection*{Performance of BBA1}
Note that BBA1 and BBBR are very simple and differ only by the convex node solver. BBA1 is faster than BBBR by an order of magnitude. BBA1 is also considerably faster than the simplest CPLEX branch-and-bound algorithms CXBR and CXLP.
We see that BBA1 outperforms CXLPE (which uses presolve and extended formulations) in all instances. Observe that for the cardinality instances with $\Omega=1,2$ and the path instances with $\Omega=1$, BBA1 requires half as many nodes as CXLPE (or fewer) to solve the instances to optimality (since the relaxations solved at each node are stronger), which translates into faster overall solution times. For the more difficult instances, BBA1 solves more instances to optimality, and the end gaps are smaller.
Despite the fact that BBA1 is a rudimentary branch-and-bound implementation, it is faster than default CPLEX in most of the cases. Indeed, BBA1 is the better choice in 21 of the instance classes considered, while CXD is better in only 2. Moreover, in the instances where CXD is better the difference between the algorithms is small (around 10\% difference in solution times), while in the other instances BBA1 is often faster by many factors.
We observe that CXD is comparatively better for the instances with a low factor rank ($r=100$), and BBA1 is comparatively better for the instances with a high factor rank ($r=200$).
\subsubsection*{Warm starts}
Algorithm BBA1 is faster than BBBR in part due to a faster convex solver (as observed in Section~\ref{sec:resultsContinuous}), and in part due to node warm starts. To quantify the impact of warm starts, we plot in Figure~\ref{fig:timePerNode} the \emph{time per node} (computed as solution time divided by the number of branch-and-bound nodes) for BBA1, BBBR and CXLPE, and also plot the solution time for the corresponding convex instances with solvers ALG1 and BAR\footnote{The time per node is similar for all combinations of parameters $\Omega$, $r$ and $\alpha$, and thus we plot the average over all parameters.}.
\begin{figure}[h!]
\centering
\begin{subfigure}[t]{0.5\columnwidth}
\centering
\includegraphics[width=1\textwidth]{Bar1.pdf}
\caption{Cardinality instances}
\end{subfigure}
~
\begin{subfigure}[t]{0.5\columnwidth}
\centering
\includegraphics[width=1\textwidth]{Bar2.pdf}
\caption{Path instances}
\end{subfigure}
\caption{Time per node.}
\label{fig:timePerNode}
\end{figure}
For the small cardinality instances with 200 variables, Algorithm~\ref{alg:coordinateDescent} is slightly slower than the barrier algorithm in solving the convex relaxations; however, it is 15 times faster than the barrier algorithm when used in branch-and-bound, due to the node warm starts from dual feasible bases.
For the larger path instances with 1,740 variables, Algorithm~\ref{alg:coordinateDescent} is 10 times faster than the barrier algorithm in solving the convex relaxations, and is about 20 times faster for the discrete instances. Thus node warm starts alone make the algorithm about twice as fast.
Finally, observe that the solution time per node for BBA1 is smaller than that for CXLPE: the proposed simplex-based algorithm is thus as effective as the simplex method for extended formulations in exploiting warm starts.
Moreover, it solves the nonlinear convex relaxation at each node to optimality, whereas CXLPE solves only its LP relaxation.
The improved lower bounds lead to significantly smaller search trees.
We conclude that Algorithm~\ref{alg:coordinateDescent} is indeed suitable for branch-and-bound algorithms since it benefits from node warm starts from the parent nodes, resulting in a significant improvement in solution times.
\ignore{
\section{Extensions}
\label{sec:extensions}
We discuss in this section how to extend the algorithms of Section~\ref{sec:algorithms} to SOCPs with linear objective and a single conic quadratic constraint using a Lagrangean relaxation. We have that
\begin{align*}
&\min_{x\in X }\left\{c'x:d'x+\Omega\sqrt{x'Qx}\leq b_0\right\}\\
=&\min_{x\in X,s\geq 0 }\left\{c'x:d'x+\Omega s\leq b_0, \sqrt{x'Qx}\leq s\right\}\\
=&\max_{\lambda\geq 0 }\min_{x\in X,s\geq 0 }\left\{c'x+\lambda \sqrt{x'Qx}-\lambda s:d'x+\Omega s\leq b_0\right\}\\
=&\max_{\lambda\geq 0 }\min_{x\in X,s,t\geq 0 }\left\{c'x+ \frac{\lambda}{2t}x'Qx+\frac{\lambda t}{2}-\lambda s:d'x+\Omega s\leq b_0\right\}\\
=&\max_{\lambda\geq 0}h(\lambda).
\end{align*}
The function $h$ is a concave univariate function, and the optimal $\lambda^*$ can be found using bisection search. Evaluating function $h$ for a fixed $\lambda$ requires solving a problem of the form \ensuremath{\text{PO}}, which can be done using the algorithms of Section~\ref{sec:algorithms}. Moreover, each evaluation $h(\lambda)$ can be warm started using the optimal basis from the previous evaluation.
We tested a simple version of this Lagrangean relaxation approach, using Algorithm~\ref{alg:coordinateDescent} to solve the QPs, but our results were not as good as those reported in Section~\ref{sec:computational}. In the continuous instances the algorithm was slightly worse than CPLEX barrier algorithm (between 10\% and 20\% slower); in discrete instances, using a branch-and-bound algorithm based on Lagrangean relaxations, it was twice as slow as CPLEX LP branch-and-bound with extended formulations. Nevertheless, using the Lagrangean relaxation may be useful in problems where only a lower bound is required (i.e., solving for a fixed $\lambda$ instead of searching for $\lambda^*$), or in problems where the QPs are particularly easy to solve.
}
\section{Conclusions}
\label{sec:conclusions}
We consider minimization problems with a conic quadratic objective and linear constraints, which are natural generalizations of linear programming and quadratic programming. Using the perspective function we reformulate the objective and propose simplex QP-based algorithms that solve a quadratic program at each iteration. Computational experiments indicate that the proposed algorithms are faster than interior point methods by orders of magnitude, scale better with the dimension of the problem, return higher precision solutions, and, most importantly, are amenable to warm starts. Therefore, they can be embedded in branch-and-bound algorithms quite effectively.
\section*{Acknowledgement}
This research is supported, in part,
by grant FA9550-10-1-0168 from the Office of the Assistant Secretary of Defense for Research and Engineering.
\bibliographystyle{plainnat}
Voetskoye (Воецкое) is a village in Baryshsky District of Ulyanovsk Oblast, Russia. It is part of the Leninskoye urban settlement.
Geography
The settlement is located on the Barysh River, 20 kilometres southwest of the town of Barysh, the administrative centre of the district. The distance to Ulyanovsk is 132 kilometres.
Time zone
History
People settled on the vacant lands here in the last quarter of the 17th century. The first settlers were peasants of the boyar sons Ivan Voetsky and Vasily Gryazev. A hundred years later, the village was jointly owned by State Councillor Nikolai Ivanovich Obukhov, Second Lieutenant Andrei Semyonovich Gladkov, and Collegiate Councillor Andrei Mikhailovich Ushakov.
In 1745 a stone church was built by the parishioners. It has two altars: the main (cold) one in memory of the Beheading of St. John the Forerunner and Baptist of the Lord, and a side (warm) one in the name of the Holy Great Martyr Theodore Stratelates.
In 1780, upon the creation of the Simbirsk Viceroyalty, the village of Arkhangelskoye, also known as Voetskoye, on the Barysh River, with its odnodvortsy and landlords' peasants, became part of Kanadei Uyezd.
In 1859 the village of Voetskoye, of appanage peasants, on the right side of the Penza post road, belonged to the 1st stan of Karsunsky Uyezd of Simbirsk Governorate.
Until 2005 it was the administrative centre and the only settlement of the now-abolished Voetsky selsoviet.
Population
According to statistical data, in 1913 the village had 258 households and 1,606 residents. The population in 1996 was 600.
Sights
A monument to the soldiers who died in the Great Patriotic War.
Infrastructure
The village is divided into three streets: Molodyozhnaya, Sovetskaya, and Tsentralnaya.
Notable natives and residents
The Mikheyev brothers: nine brothers who fought simultaneously for the Soviet Union in the Great Patriotic War, which is considered a unique case in world military history.
Anatoly Vasilyevich Kabanikhin (1919–2001): guitarist, music teacher, and master of arrangement for folk orchestras and guitar ensembles.
References
Literature
External links
Official website of the municipal formation "Barysh District"
Populated places in Baryshsky District
Q: doFilter() overridden method: httpRequest.getUserPrincipal() always throws a NullPointerException I created a Spring Boot application. I have used the following dependencies.
The Spring Boot version is 2.1.0.RELEASE:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
<groupId>org.keycloak</groupId>
<artifactId>keycloak-saml-servlet-filter-adapter</artifactId>
<version>4.5.0.Final</version>
</dependency>
</dependencies>
The security configuration class looks like the following:
@EnableWebSecurity
@Configuration
public class SecurityConfig extends WebSecurityConfigurerAdapter {
@Override
protected void configure(HttpSecurity http) throws Exception {
http.authorizeRequests().antMatchers("/actuator").authenticated();
http.headers().cacheControl().disable();
http.csrf().disable();
http.logout().logoutSuccessUrl("/assets/logout.html");
}
}
The filter class looks like the following:
@WebFilter(urlPatterns = "/*", initParams = {
@WebInitParam(
name = "keycloak.config.file",
value = "./config/saml.xml"
)})
public class SingleSignOnFilter extends SamlFilter {
private static final String loginGc = "/gc/";
private static final String logoutUrl = "/gc/assets/logout.html";
@Override
public void doFilter(ServletRequest request, ServletResponse response,
FilterChain chain)throws IOException, ServletException {
HttpServletRequest httpRequest = (HttpServletRequest) request;
HttpServletResponse httpResponse = (HttpServletResponse) response;
String requestPath = httpRequest.getRequestURI();
boolean needAuthentication = !requestPath.equals(logoutUrl);
boolean alreadyLoginGc = !requestPath.equals(loginGc);
if (needAuthentication) {
if (alreadyLoginGc) {
super.doFilter(request, response, chain);
} else {
httpResponse.setHeader("Cache-Control", "no-cache, no-store");
super.doFilter(request, response, chain);
}
} else {
chain.doFilter(request, response);
}
}
}
When I call httpRequest.getUserPrincipal(), it throws a NullPointerException.
And when I try to get the authenticated user, it returns anonymousUser:
Authentication auth = SecurityContextHolder.getContext()
.getAuthentication();
auth.getName();
saml.xml looks like the following:
<keycloak-saml-adapter>
<SP entityID="https://localhost/gc" sslPolicy="NONE">
<PrincipalNameMapping policy="FROM_NAME_ID"/>
<IDP entityID="************************">
<SingleSignOnService requestBinding="POST"
validateResponseSignature="true"
bindingUrl="********************************"/>
<SingleLogoutService requestBinding="REDIRECT"
responseBinding="REDIRECT"
redirectBindingUrl="***************************"/>
<Keys>
<Key signing="true">
<CertificatePem>
-----BEGIN CERTIFICATE-----
##########################
###### Dummy Data ########
##########################
-----END CERTIFICATE-----
</CertificatePem>
</Key>
</Keys>
</IDP>
</SP>
</keycloak-saml-adapter>
A: I'm not sure whether the filter implementation you are using handles the required logic for Spring Security underneath, but it is clear that the SecurityContextHolder is not populated, since there are no adapters configured to do so.
Keycloak provides a Spring Security adapter; try using it. It will populate httpRequest.getUserPrincipal() as well.
Q: Xcode debugging: how can I see where my code fails? I get no stack trace. Currently I am struggling to find out where my code fails.
Xcode sometimes gives me a stack trace, but currently it doesn't.
I just get an error message in my console like: *** -[CFString copyWithZone:]: message sent to deallocated instance 0xbe10d80. But sometimes I don't get an error message in my console at all when my app crashes. How can I figure out where the problem actually occurs? How do you guys locate your problems?
Perhaps someone knows a few environment settings that can help?
A: Go to the Debugger, then click Breakpoint. Then run your app and you can see where the error is occurring.
A: You can use Instruments (/Developer/Applications/Instruments) to help detect usage of zombie objects. Here is a link to a tutorial on using it to detect memory leaks, but it can also be used for other purposes.
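Since the question explicitly asks about environment settings: the classic one for a "message sent to deallocated instance" error is NSZombieEnabled. When set, deallocated objects are kept around as "zombies", so instead of a crash the console logs the class and address of the over-released object. In Xcode it is set under the executable's environment variables; from a terminal the equivalent would look roughly like this (MyApp is a placeholder path, not from the question):

```shell
# Illustrative only: run the app binary with zombie objects enabled.
# In the Xcode GUI, add NSZombieEnabled=YES under
# Executable > Arguments > "Variables to be set in the environment".
NSZombieEnabled=YES ./MyApp.app/Contents/MacOS/MyApp
```

Note that zombies deliberately keep freed objects alive (i.e. leak memory), so disable the variable again once the over-release is found.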
Welcome to the Stalking Resource Center
The mission of the Stalking Resource Center is to enhance the ability of professionals, organizations, and systems to effectively respond to stalking.
CJ Only
Stalking Resource Center/
Training/
Below are posted recordings of our online trainings. Please click the link and follow the instructions to view the recording.
Webinars:
BWJP Webinar Part I: Recognizing Stalking in Intimate Partner Violence Cases
Coordinated Approach to Preventing Stalking: The New York City CAPS Model
Stalking: A Qualifying Crime for a U Visa
Stalking In Later Life
Media Literacy and the Social Normalization of Stalking
FTC Response to Technology Misuse and Abuse
Overview and Considerations in Federal Stalking Cases
Technical Evidence in Stalking Prosecutions: Where to Get It and How to Get It In
VAWA Amendments to Clery: Recognizing & Responding to Stalking on Campus
The Intersection of Stalking and Sexual Assault
The Stalking and Harassment Assessment and Risk Profile
Supporting Stalking Victims Who Relocate for Personal Safety: Effective Strategies for Future Privacy & Safety
Understanding Stalking Dynamics and Implications for Transgender Individuals and Communities
Teens and Stalking Webinar
Technology Abuse Evidentiary Issues
National Stalking Awareness Month 2014 Webinar
Planning for National Stalking Awareness Month on Your Campus
Research has shown that 7.5 million adults are stalked in one year in the United States, yet stalking is a crime that is often misunderstood, minimized, or missed entirely. While stalking remains an underreported crime, when it is reported, stalking charges are rarely filed even when all the elements of the crime are present. Part I of this webinar series will explore the importance of recognizing the intersection of stalking in intimate partner violence cases and the various technologies being used to track and monitor victims.
Presented by the SRC, the New York City Mayor's Office to Combat Domestic Violence, and the New York City Police Department
Research indicates that 7.5 million adults are stalked in a one-year period and that individuals aged 18-24 have the highest rate of stalking. Yet stalking is a crime that is often misunderstood, minimized, or missed entirely, even when it intersects with other crimes such as domestic violence. Recognizing the higher risk of violence for domestic violence victims who are also being stalked, the NYC Mayor's Office to Combat Domestic Violence (OCDV), in collaboration with the New York City Police Department (NYPD) and local District Attorneys' offices, created the CAPS (Coordinated Approach to Prevent Stalking) Model.
The CAPS Model is a homicide prevention program aimed at identifying intimate partner stalking cases and providing appropriate criminal justice and social services interventions before stalking behavior escalates to physical injury, serious physical injury or fatality. The success of the program is evident: within the first year of the initiative, stalking cases identified by NYPD increased by 233 percent.
In this informative webinar for criminal justice professionals, participants will hear from the Stalking Resource Center, the national experts on victimization; the NYC Mayor's Office to Combat Domestic Violence; and a representative from the NYPD Domestic Violence Unit, which was a key part of the implementation.
Presented by the SRC and the National Latin@ Network (Casa de Esperanza)
An immigrant victim of stalking may qualify for a U visa. Unfortunately, stalking is a crime that is often misunderstood, minimized or missed entirely. For this reason the National Latin@ Network from Casa de Esperanza and the Stalking Resource Center are partnering to help advocates better represent immigrant victims of stalking.
The webinar will give an overview of the U visa, and will provide advocates with tools and knowledge on the crime of stalking. This will aid in the identification of the crime and will provide strategies to prove the "substantial physical or emotional abuse" requirement for the U visa.
Presented by the SRC and the National Clearinghouse on Abuse in Later Life (NCALL)
Presented by Andrea Quijada of the Media Literacy Project.
Stalking is a crime that is frequently minimized and misunderstood, in part because it is a behavior that is socially normalized. Media plays a critical role in shaping our understanding of stalking, and other forms of violence. Media literacy is a tool that we can all use to better deconstruct media and deconstruct the culture at large as we work towards building more effective violence response and prevention methods. Using media clips and examples, this webinar will introduce basic media literacy concepts and deconstruction questions that can be integrated into stalking, domestic violence, and sexual assault response and prevention programs, workshops, and daily conversations.
***This webinar includes an image for a product called "Forget Me Not Panties." It was brought to our attention that these are a hoax. The webinar presenter learned after the webinar that the website was created as an art project. You can read more here.
What is notable is that the website received so many requests for the panties (over a million requests within the first month or so) that they had to create a fake PR firm to handle the press. As the presenter notes, "for me, the concern is not whether or not the panties are fake. Rather, that the students who created the fake panties were so easily able to use the ideas of surveillance, stalking, and control of women and their bodies as effective marketing tools because those ideas are embedded in our culture."
Presented by Jacqueline Connor and Lisa Weintraub Schifferle, Federal Trade Commission
The FTC, a national consumer protection agency, will talk about its role in addressing online safety and how its efforts impact stalking and domestic violence victims. Learn how the FTC uses both law enforcement and consumer education to combat the types of tools that stalkers use to hack into email, track location, and even take videos. Find out about online safety tips that may be particularly useful to stalking and domestic violence victims.
Overview of and Considerations in Federal Stalking Cases
The presenters, Supervisory Special Agent (SSA) Cari Robins, SSA Sabina Sauer and Crime Analyst Kristen Solik, are all members of the Federal Bureau of Investigation's Behavioral Analysis Unit.
While there are stalking laws within all 50 states as well as a federal law, this remains a widely under-prosecuted crime. The federal statute in particular seems to be underutilized. In general the federal statute, Title 18 U.S.C. 2261A, applies to cases in which the offender and/or their course of conduct cross state lines. With the ease of interstate travel and the virtual nature of our worlds, there is an increasing amount of stalking cases with a federal nexus. This webinar will provide an overview of the federal stalking statute as well as several other applicable federal laws. The discussion will inform about the federal response to stalking cases and also address the advantages and challenges of pursuing cases through the federal system.
Presented by Elaina Roberts, Program Attorney, Stalking Resource Center and John Wilkinson, Attorney Advisor, AEquitas
More than 7.5 million people in the United States are affected by stalking every year, with some studies indicating that one in four victims report use of technology by the offender. The use of personal computers, mobile devices, and other technology in stalking activity presents challenges for the prosecutor who must connect the activity to the defendant. Prosecutors must be familiar with the sources of available evidence, how to obtain it from technology providers, and how to present it effectively to a jury. This webinar will cover the applicable rules of evidence and relevant case law associated with proving a technology-facilitated stalking case, and will provide strategies on when and how to introduce technical evidence and overcome common objections at trial.
To download a copy of the PowerPoint slides, click here.
The 2013 Violence Against Women Act (VAWA) amendments to the Clery Act require that colleges and universities address stalking in a variety of ways. While these rules are not effective until July 2015, many campuses are already working to implement these provisions and struggling with questions regarding counting stalking crimes, determining Clery geography in stalking cases, and implementing stalking awareness and prevention programming. In this free webinar, the Stalking Resource Center and the Clery Center will explore these and other issues related to recognizing and effectively responding to stalking on campus.
This webinar will address the often overlooked link between stalking and sexual assault. Stalking is a crime that is often co-perpetrated with other crimes, such as domestic violence and sexual assault. Research supports a connection between stalking and sexual assault—both pre- and post-assault. In this webinar, we will explore the nature and dynamics of stalking, focusing on its intersection with sexual assault. We will also discuss ways in which this information impacts our responses to and services for victims. Michelle M. Garcia, Director of the Stalking Resource Center, National Center for Victims of Crime will be the presenter. This session will provide useful information and strategies for a wide variety of professionals, including prosecutors and other lawyers, law enforcement, medical professionals, judges, victim advocates, journalists and communications professionals, and others who interact with and write about sexual assault victims.
This training is co-hosted by the Battered Women's Justice Project and the Stalking Resource Center. This presentation will introduce the Stalking and Harassment Assessment and Risk Profile (SHARP) and will describe the assumptions and conceptual framework. The overall goal of SHARP is to provide a research informed tool for increasing awareness of stalking by: (1) Assessing the "big picture" of stalking; (2) describing the risk profile to better understand the level of concern and dangerousness of the situation; (3) providing users with a narrative summary of responses to the assessment questions in a word document that can be used for a variety of purposes; and, (4) suggesting research-grounded safety strategies based on assessment responses for consideration. SHARP is a tool that can be used in conjunction with other risk assessments and tools in the field. SHARP can be used by victims or others on behalf of the victim.
To view the materials, click here.
Victims of stalking often relocate, sometimes multiple times, for their personal safety and privacy. Victim service providers and other responders working with these victims face unique challenges given the vast amount of data and information available online and elsewhere that stalkers may access. In this webinar, participants will learn how breaches in location, personal, or other information may occur in stalking cases and how to strategize with victims to prevent future breaches and preserve privacy post-relocation. We will also explore the potential risks and benefits for stalking victims who are considering identity change as part of their safety planning, and what factors should be explored with the victim to determine if it is a viable and safe option.
Understanding Stalking Dynamics and Implications for Transgender Individuals and Communities
The Stalking Resource Center partnered with FORGE to present this webinar. Recent national data indicates that 6.6 million people are stalked in a one year period in the United States; yet stalking is a crime that is often misunderstood, minimized or missed entirely. Rebecca Dreke, Deputy Director of the Stalking Resource Center, will provide foundational information on stalking, including common stalking dynamics, the impact on victims, and how victim service providers can better assist transgender victims and survivors of stalking. Additionally, the webinar will include a case study in which a transgender professor was stalked by a student. We will explore how their respective identities compromised the effectiveness of officials' and bystanders' responses. Webinar participants will be offered practical tools on safety planning and threat assessment as well as other examples to support them in better serving transgender individuals who have experienced stalking.
Teens and Stalking
Research indicates that 12% of adult stalking victims report being stalked before the age of 18, yet this statistic may underestimate the reality of teen stalking victimization for a variety of reasons. Although the dynamics of stalking among teens and stalking among adults are often similar—including primarily intimate partner offenders, low reporting rates, and connection to sexual and physical assault— practitioners should know how they differ to better serve the populations they work with. In this webinar we will explore what the research indicates about teen stalking victimization as well as some considerations for working with teen victims/survivors. Click here to download a PDF of the slides.
Stalkers and domestic violence perpetrators are increasingly using technology as part of their course of conduct. GPS tracking devices, spyware, Facebook tampering, and harassing text messages--technological abuse is present in many cases of stalking and intimate partner violence. This webinar will equip attorneys, advocates, and members of the judiciary to better handle the introduction of evidence of technological abuse. By gaining a greater understanding of the case law and evidentiary rules related to technological abuse, participants will be better prepared to:
Maneuver the ethical requirements of gathering and documenting technological abuse.
Get evidence of technological abuse admitted into court.
National Stalking Awareness Month 2014
January 2014 will mark the 10th anniversary of the first National Stalking Awareness Month (NSAM). This webinar discusses the history of NSAM and planning for the 2014 observance. Our speakers include Michelle Garcia, director of the Stalking Resource Center; Kevin Sweeney with the Office on Violence Against Women, U.S. Department of Justice; and special guest Debbie Riddle, whose sister, Peggy Klinke, was murdered by a stalker in January 2003. It is in Peggy's memory that we commemorate Stalking Awareness Month in January each year.
January is National Stalking Awareness Month (NSAM), a month to raise awareness about the 6.6 million people who are stalked in the United States in one year. In this webinar, the Stalking Resource Center, a program of the National Center for Victims of Crime, and the Office on Violence Against Women, U.S. Department of Justice, discuss the history of National Stalking Awareness Month, how other communities have observed NSAM in the past, and how you can plan for 2013. We provide resources, campaign posters, and take away material, such as our "31 Days of Status Updates" to post on your social networking sites during the month of January.
January is National Stalking Awareness Month. Every year, organizations and communities schedule events to raise awareness about the 6.6 million people who are stalked in the US in one year. In this webinar, hosted by CALCASA, Laura Kikuchi, Program Assistant for the Stalking Resource Center (SRC), and Hema Khan, Program Attorney of the SRC, provided information and resources to help plan for National Stalking Awareness Month on campuses, and discussed how to create an effective stalking response on campus.
Our Services – Project, Inc.
Project, Inc. has a long history of helping organizations increase manufacturing and sales capacity while reducing payroll, production, and facility costs. Your company can benefit from our solid project experience in quality hand work, fast turn times on critical jobs, and fulfillment.
We can handle jobs from the point of initial problem analysis through fulfillment – or we can handle any part of the job in between! Our long experience in hand finishing diverse tasks is invaluable to our customers. Whether your challenge is growth, cost reduction or just quick turnaround time, we have a solution. The next time a task "doesn't quite fit" your operation, try us — we'll fit it into ours!
package io.fsq.spindle.codegen.runtime.test
import io.fsq.spindle.codegen.runtime.{CodegenException, Validator}
import io.fsq.testlib.FSAssert
import org.junit.{Assert, Test, rules}
class ValidatorTest {
private val className = "someclass"
private val fieldNames = Vector("a", "b", "c")
@Test
def testValidateShardKeyAnnotationsSuccess(): Unit = {
Validator.validateShardKeyAnnotations(shardKeyAnnotations = "b:1", fieldNames, className)
}
@Test
def testValidateShardKeyAnnotationsInvalidShardKeyThrowsException(): Unit = {
val e = FSAssert.assertThrows[CodegenException](
Validator.validateShardKeyAnnotations(shardKeyAnnotations = "e:1", fieldNames, className)
)
Assert.assertEquals("Unknown field name 'e' in shard_key annotation for class someclass", e.getMessage)
}
@Test
def testValidateShardKeyAnnotationsInvalidFormatThrowsException(): Unit = {
val e = FSAssert.assertThrows[CodegenException](
Validator.validateShardKeyAnnotations(shardKeyAnnotations = "invalid", fieldNames, className)
)
Assert.assertEquals(
"Invalid shard_key specifier 'invalid' for class someclass -- format must be FIELD_NAME:SHARD_TYPE; " +
"e.g., `id:hashed`",
e.getMessage
)
}
@Test
def testValidateShardKeyAnnotationsThrowIfCompoundShardKey(): Unit = {
val e = FSAssert.assertThrows[CodegenException](
Validator.validateShardKeyAnnotations(shardKeyAnnotations = "a:1,b:1", fieldNames, className)
)
Assert.assertEquals(
"Invalid shard_key specifier: a:1,b:1 for class someclass; compound shard" +
" keys are not yet supported",
e.getMessage
)
}
}
#include "CLI11.hpp"
#include "entrypoints.hpp"
void print_license()
{
std::cout << R"(
Groho 20.06: A simulator for inter-planetary travel.
Copyright (c) 2017 - 2020 by Kaushik Ghose.
Released under the MIT License. Some rights reserved.
)";
}
int main(int argc, char* argv[])
{
print_license();
CLI::App app{ "Groho: A simulator for inter-planetary travel" };
app.require_subcommand(1);
std::string scn_file, sim_folder, kernel_file;
    bool non_interactive = false;  // initialized: CLI11 only writes the flag when it is passed
auto loop = app.add_subcommand(
"sim",
"Monitor changes in scenario and plot files,\n"
"rerun and rechart simulation continuously");
loop->add_option("simfile", scn_file, "Scenario file")->required();
loop->add_option("simfolder", sim_folder, "Simulation folder")->required();
loop->add_flag(
"--non-interactive",
non_interactive,
"Run simulation and exit, instead of looping.");
loop->callback(
[&]() { groho::simulate(scn_file, sim_folder, non_interactive); });
auto commands = app.add_subcommand(
"commands", "Describe spacecraft commands available");
commands->callback([&]() { groho::list_commands(); });
auto inspect = app.add_subcommand("inspect", "Inspect kernel file");
inspect->add_option("spk", kernel_file, "Kernel file")->required();
inspect->callback([&]() { groho::inspect(kernel_file); });
CLI11_PARSE(app, argc, argv);
return 0;
}
The Adobe Admin Console allows a system administrator to configure domains and directories, which are used for login via Federated ID, for Single Sign-On (SSO).
Once ownership of a domain is demonstrated using a DNS token and it has been linked to a Federated ID directory, users who have email addresses within the claimed domain can log in to Creative Cloud via an Identity Provider system (IdP) after corresponding accounts have been created on the relevant Adobe Admin Console.
The IdP is provisioned either as a software service that runs within the company network and is accessible from the Internet, or as a cloud service hosted by a third party; either way, it allows user login details to be verified via secure communication using the SAML protocol.
One such IdP is WSO2-Ellucian Ethos, a cloud-based service which facilitates secure identity management.
This document describes the process necessary to configure the Adobe Admin Console and the WSO2 Identity Server so that users can log in to Adobe Creative Cloud applications and associated websites via Single Sign-On.
The IdP does not have to be accessible from outside the corporate network, but if it is not, only workstations within the network (or connected through VPN) will be able to perform authentication to activate a license or sign in after deactivating their session.
A WSO2 Server is installed.
All Active Directory accounts to be associated with a Creative Cloud for Enterprise account have an email address listed within Active Directory.
Navigate to WSO2 Identity Management Console.
Save idP signing certificate (X.509) from list of Keystores.
To configure single sign-on for your directory, enter the required information in your Adobe Admin Console.
Upload the IdP certificate that you saved.
In the IdP binding list, select HTTP - Redirect.
In the User login setting list, select Email.
Enter ethos as the IdP issuer.
In the IdP login URL, enter https://ethos/<domain_name>/samlsso.
To save the SAML XML Metadata file on your computer, click Download Metadata.
Select the I understand I need to complete the configuration with my identity provider check box.
To finish configuration of your directory, click COMPLETE.
Go to the WSO2 Identity Server Management Console.
On the WSO2 server, navigate to Identity > Service Providers > Add.
In the Service Provider Name box, enter the required name.
In the Description box, enter the description of the service provider.
Select the Define Custom Claim Dialect option.
Add three Claim URIs attributes.
Add the following Service Provider Claims values.
Add the following Local Claim values.
In the Subject Claim URI list, select Email.
To save changes, click Update.
Open the saved Adobe metadata from the Admin Console.
Copy the entityID field value and keep it for later use.
Copy the Location field value and keep it for later use.
On the Register New Service Provider screen, navigate to Inbound Authentication Configuration > SAML2 Web SSO Configuration.
To edit the service provider, in the Actions column, click the corresponding Edit link.
1. In the Issuer box, enter the entityID field value copied from the Adobe metadata.
2. In the Assertion Consumer URLs box, enter the Location field value copied from the Adobe metadata, and click Add.
3. In the NameID format box, enter urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress.
4. In the Response Signing Algorithm and Response Digest Algorithm lists, ensure the selected values end with sha1.
5. Select the Enable Attribute Profile and Include Attributes in the Response Always checkboxes. Click Update.
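As a reference point, with the email NameID format configured in step 3, the subject element in the SAML response issued by the IdP should look roughly like the following (the address is a placeholder, not a value from this guide):

```xml
<!-- Illustrative SAML subject; user@example.com is a placeholder. -->
<saml2:Subject>
    <saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">
        user@example.com
    </saml2:NameID>
</saml2:Subject>
```

If login fails with a name-mapping error on the Adobe side, this is the first element to inspect, since the Email user-login setting selected earlier means the NameID must match the user's email address on the Admin Console.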
I used to write all the time, and to this date have acquired over 10 journals/diaries. It was my first homeschooled teacher, Mrs. Schultz, who instilled a deep sense of vocabulary, reading, writing, and the English language for me. I think she was one of the people who I was too young and naive to truly appreciate, and I wish I could let her know right now how thankful I am for her influence. She loved me as if I were her own granddaughter and tutored me since I was 5 years old. Even after I was okay enough to attend school, she would periodically take me out to Burger King to talk, and I could feel strangers' eyes on us as if I were her adopted Asian relative.
As I grew older, I wanted to put some distance between me and all the people who were a part of my past, because I wanted desperately to escape my rough upbringing, thinking that when I reached college my health would magically bounce back to normal and I could forget about everything that happened. I didn't feel like I had a sense of independence, but she nevertheless called me on my birthdays, and for three years even sent me a pearl to add to a pearl necklace to represent each passing year. At some point, I heard that Mr. Schultz grew very sick and passed away. Eventually, at some point in high school my mom told me that Mrs. Schultz herself was also very sick and hospitalized. Her daughter Donna wasn't too healthy herself, and I remember thinking it wasn't fair that such a wonderful family had to suffer so much. However, all the negative news only pushed me further to stay away from them, as I felt I hadn't come to terms with what was on my own plate.
A couple years later, my dad and I attended Mrs. Schultz's funeral, and I had no idea how to act. It was my first funeral, and I had not stayed in touch with her for so long that I didn't really feel the full effect of her passing. Bewildered, I showed up and her son greeted us at the doorway, telling me that I meant a lot to her, and I stuttered out, "She meant a lot to me too," not sure if my demeanor was appropriate enough for the occasion. When the speaker encouraged us to share stories at the podium, I couldn't bring myself to say anything, partly because I didn't want to handle any emotions, and partly because I am terrible at public speaking. But I'll take the time now to say that I understand now, almost 20 years later, how lucky I was to have had someone who cared so deeply for me and my education.
I'm glad I also recognize how amazing my high school counselor was for going above and beyond to help me with notes and anything else I needed, for showing me compassion and defending me against other teachers. I sent her an email the past Thanksgiving to thank her, and she was elated to hear from me.
That's one of the silver linings of my condition: I like to think I see through everyone by the way they treat the ones who are "down." And I like to hope that I am able to make a positive impact in any way I can while I'm here on planet earth.
The entire week, I had been incredibly stressed out about the invitation to my friend's rooftop birthday in NYC. This happens to me almost every time I'm invited out somewhere: the contradictory feelings swarming in of both elation to feeling included and wanting to be there for my friends as much as I could, but simultaneously feeling anxiety and uncertainty of whether I could go. I knew if I didn't go, I would be safe and rested at home, but full of disappointment as if this was proof that I couldn't go out like a normal young person. I knew if I pushed myself to go, I'd have social anxiety and probably come across some obstacles, but no risk no reward. I also knew the next day my family and I were headed into the city again to watch Les Miserables, and I always need the next day to rest: could I do it?
I asked just about every single one of my close college friends to go with me, but they were all busy or uninterested. Finally, two days before one of my high school friendquaintances said she and another friend were going via train, and I felt comfortable enough with them, knowing we used to hang out a bit and they were really sweet. So, it was decided that I'd go with them into the city, and I outlined a very specific plan as I always do when I travel, which I learned in Taipei to specify the total cost, travel time, walk time, down to a T. The trip turned out to be really enjoyable! Affinia Hotel for the pre-game portion was literally a two minute walk from Penn Station, and there were no stairs, only an escalator up from the train tracks this time. We took pictures, and I got tipsy enough to talk to new friends, and then around 9:30pm the three of us Lyfted for the first time over to Monarch Rooftop Lounge. My ride was under $10, so it was free, and our driver was ridiculously nice and it was so convenient!
At Monarch, we took some more photos and each ordered a drink. A bunch of other guests showed up, and I talked to some but not enough to really get to know them. It was really fun, although the outdoor portion wasn't as lovely as we had hoped since it wasn't technically warm enough, and half of it was enveloped in some greenhouse, and so we huddled under red heat lamps when outside and then ran indoors. I wish I was brave enough to network even more, but I still had a lot of fun. My dad was supposed to pick me up by himself to give me some space since I'd been home with my parents for eternity and I really wanted a night out alone, so I was really upset when he brought my mom along.
My insomnia wasn't any better that night, and the next day we headed into the city with me in a sour mood, having cried and barely slept, eaten, or drunk much since the night before. I was exhausted, but it got worse when I realized this theater was old school and had no elevators… I hiked up probably around 3 flights of stairs and felt extremely claustrophobic. I felt like I was going to pass out, and at intermission I couldn't take it anymore and realized the line for the bathroom was snaking all the way down to the first floor. I felt so fatigued I started crying and freaking out again, so my dad took me out across the street for some air and to eat at Junior's. I felt like I had failed, and I felt incredibly guilty for making my dad miss the second half of the show. However, my family was really patient and kind, and afterwards we just chatted and I calmed down, sipping my sugared iced tea to get rid of the lightheadedness.
On the bright side, these theater tickets were bought on Groupon so they were only $35 apiece, and the play wasn't amazing, so I wasn't that impressed. I just figured, it's Broadway, shouldn't this be the best of the best?
My last musing is that I like it sometimes when people get mad at me. Now, hear me out first. It's never fun to have anyone angry at you, and personally I get a very worked up and antsy feeling to quickly resolve the issue because I feel physically bothered when I know I messed up and hurt someone I care about. But now that I think about it, I probably have gotten mad or gotten into small fights with everyone I really cared about. Why? Because they were worth it that it affected my mood. I have high standards for friends and family, and sometimes it may seem like too much of an expectation. But one time, my friend was upset when it seemed like I wouldn't be able to make her party, and I remembered feeling kind of well, happy about it. Because it meant I mattered to her, and that I was wanted and would be noticeably missed. I think one of the best feelings in the world is to know that you matter to someone.
Also, I'm totally aware that my categories in this blog are not organized yet, I'll need to find time to sit down and properly fix all that.
PSR B1257+12A (also catalogued, following planetary nomenclature conventions, as PSR B1257+12 b), or Draugr, is an extrasolar planet orbiting the pulsar PSR B1257+12, located in the constellation Virgo at a distance of 980 light-years from Earth. It is the innermost planet, orbiting at a distance of 0.19 AU with an orbital period of about 25 days. In 1997 it was suggested that the planet was actually an artefact caused by the wind emanating from the pulsar, but this hypothesis was disproved. The planet's mass is roughly twice that of the Moon and comparable to that of Ganymede, making it the smallest planet orbiting another star discovered so far. Since it orbits at roughly half the distance at which Mercury orbits the Sun, the object probably always shows the same face to its star because of the strong tidal forces.
Despite the degenerate nature of its star, the very small distance at which the planet orbits allows it to reach relatively high temperatures, comparable to those of a European winter. However, the very strong stellar wind produced by the pulsar would hardly allow such a small body, with a correspondingly weak magnetic field, to retain an atmosphere capable of keeping water in the liquid state or of shielding the surface sufficiently from ionizing radiation. The presence of life, at least as we know it, can therefore almost certainly be ruled out.
As a simple curiosity, note that although the body receives more irradiation from its star than Mars receives from the Sun, and only 11 K less than that received by Earth, the brightness at the planet's surface (imagining it without an atmosphere, or free of clouds) would be comparable to that of a pale full moon on Earth. This is because, while in our solar system most of the energy is delivered to planetary bodies through the visible-light band, in the PSR B1257+12 system heat is instead transmitted mostly through X-rays and gamma rays, with visible light making up only a small percentage of the total irradiation.
Notes
See also
PSR B1257+12B
PSR B1257+12C
PSR B1257+12D
Other projects
External links
On PSR 1257+12 A (Extrasolar Visions) – an artist's impression of how the pulsar would appear from the surface of PSR B1257+12A.
Celestial bodies discovered in 1994
Extrasolar planets in the constellation of Virgo
Terrestrial planets
Extrasolar planets discovered with the pulsar timing method
\section{Background \& Summary}
\subsection{Scientific Context}
\begin{figure}[bh!]
\centering
\includegraphics[]{walnutartefacts.pdf}
\caption{Vertical slice through an FDK reconstruction of a CBCT scan of a walnut. The red dot indicates the vertical position of the source orbit and the yellow arrows point at the high cone angle artefacts.}
\label{fig.cbartefactexample}
\end{figure}
X-ray computed tomography (CT) is a widely used projection-based imaging modality with a broad range of clinical, scientific and industrial applications. In many of those, CT scanners use a particular projection geometry called circular \textit{cone-beam} (\textit{CB}). This scanning geometry typically leads to a distinct type of artefact in the image regions with a high cone angle, cf. Figure~\ref{fig.cbartefactexample}. While several reconstruction or correction methods have been proposed to reduce high cone angle artefacts~\cite{hsieh2000,dennerlein2008,zhang2016}, they remain a crucial drawback of CBCT scanners compared to other scanner types, which in turn have disadvantages such as higher radiation dose or cost~\cite{KoEiKiShWo17}.
In the scientific field, there is often a clear division between computational imaging groups with a background in mathematics and computer science, which focus on enhancing CT methodology, on the one side, and experimental imaging groups using CT as a tool to conduct their scientific studies on the other. The latter typically use commercial CT solutions coming with proprietary software which does not give full access to the raw projection data or the details of the experimental acquisition. As a result, many mathematical and computational studies rely on artificial data simulated with varying degrees of realism. This lack of suitable experimental data is a significant hurdle for the translation of innovative research into applications.
Many important, recent CT innovations introduce machine learning techniques into the tomographic image reconstruction process~\cite{wang2018,ravishankar2019}, in particular deep neural networks (\textit{deep learning}). For these approaches, realistic experimental data is not only needed for evaluation but, more crucially, for constructing the method itself. Namely, the network parameters are optimized based on \textit{training data} which consists of a large number of representative pairs of input data with the desired ideal output of the network (\textit{ground truth}). While many large, open benchmark data collections meeting these criteria exist for standard applications of deep learning (e.g., MNIST \cite{MNIST} for the classification of handwritten digits), there are so far very few suitable projection datasets for deep learning in CT.
\subsection{Previous Works \& Limitations}
Several open fan beam (2D) and cone beam (3D) X-ray CT datasets acquired by a laboratory set-up or with synchrotron parallel X-ray sources exist~\cite{coban2015,jorgensen2017,singh2018,decarlo2018,walnuthelsinki}. Suitable clinical datasets are more difficult to acquire and distribute openly. The ultimate quality measure for clinical images is their diagnostic value, which needs to be assessed by radiologists. Therefore, data is often only distributed as part of an image reconstruction challenge. A prominent example of this is the Mayo Clinic Low Dose CT challenge~\cite{lowdosechallenge} consisting of 3D helical CT abdominal scans of ten cancer patients. While these datasets are extremely useful to evaluate reconstruction algorithms on a wide range of different objects and acquisition conditions, they are not suitable for machine learning as they typically contain only a single or very few scanned objects or have not been designed such that the reconstruction quality can easily be assessed in an automated way with respect to a high-quality ground truth reconstruction.
\subsection{Motivation \& Summary}
To fill this gap, we acquired a carefully designed CBCT data collection suitable for developing machine learning approaches: 42 walnuts (this choice will be discussed in the next section) were scanned with a special laboratory X-ray CBCT scanner. For each sample, CB projections were acquired on 3 different circular orbit heights. This creates different cone angles and resulting artefact patterns, and allows for an artefact-free, high-quality ground truth to be computed from the combined data. We provide reconstructed volumes and an open software implementation of the complete image reconstruction pipeline along with the raw projection data. Note that while 42 samples may seem few compared to the training data sizes used in other deep learning applications, each sample here is a 3D object. Extracting 2D slices from these high-resolution volumes composed of $501^3$ voxels gives enough data for training 2D networks that are then used to process volumes slice-by-slice, which is currently the most common approach in 3D applications~\cite{PeBaSe18}. While this dataset is designed to benchmark machine-learning-based correction techniques for CB artefacts, it can also be used for algorithm development and evaluation in other tomography applications, such as image reconstruction from limited-angle or sparse-angle (low-dose) scanning, super-resolution, or image analysis tasks such as semantic segmentation.
\section{Methods} \label{sec:Methods}
\subsection{Sample Collection}
A data set suitable for deep learning with convolutional layers (\textit{convolutional neural networks, CNNs}) needs to be collected in a particular way. During training, the network needs to learn to recognize the common spatial features, and their natural variations, of the class of objects to be imaged. For this, data acquired from a sufficiently large number of representative samples is needed. Having too few samples to train on can lead to over-fitting and reduce the network's ability to generalize to unseen data. Partly inspired by~\cite{walnuthelsinki}, we decided to scan 42 walnuts: similar to objects scanned in (pre-)clinical imaging, they exhibit natural inter-population variability, which is advantageous compared to manufactured objects like the phantoms used to calibrate scanners. They consist of a hard shell, a softer inside, air-filled cavities and a variety of large-to-fine-scale details, which makes them a good proxy for the human head. In addition, their size ($\approx 3~\mathrm{cm}$ height) is suitable for our experimental set-up.
\subsection{X-Ray Tomography Scanner}
The scans were performed using a custom-built, highly flexible X-ray CT scanner, the FleX-ray scanner, developed by XRE nv\footnote{\href{https://xre.be/}{https://xre.be/}} and located in the FleX-ray Lab at the Centrum Wiskunde \& Informatica (CWI) in Amsterdam, Netherlands~\cite{flexray}. The general purpose of the FleX-ray Lab is to conduct proof-of-concept experiments directly accessible to researchers in the field of mathematics and computer science.
The scanner consists of a cone-beam microfocus X-ray point source (limited to 90 kV and 90 W) that projects polychromatic X-rays onto a 14-bit flat panel detector (Dexela 1512NDT) with $1536\times 1944$ pixels of $74.8\times 74.8~\mathrm{\mu m}^2$ each, and a rotation stage in-between, upon which the sample is mounted. All three components are mounted on translation stages which allow them to move independently from one another. A schematic view of the set-up with the description of possible movements is shown in Figure~\ref{fig.flexray}.
\begin{figure}
\centering
\includegraphics[]{flexray.pdf}
\caption{ FleX-ray Lab, the X-ray cone-beam tomography set-up used for the data acquisition~\cite{flexray}. The arrows indicate the degrees of freedom.}
\label{fig.flexray}
\end{figure}
\subsection{Projection Geometry and Acquisition Parameters}
Our aim was to create a data collection suitable for \textit{supervised learning}. In supervised learning, the training data consists of pairs of input data with the desired ideal output of the network (the ground truth). A distance function (\textit{training loss}) between the ground truth and the current output of the network is used to drive the optimization of the network's parameters. In our case, the input to the network may be the artefact-ridden reconstruction of a sample computed from a single-orbit CBCT data set, and the ground truth could be the corresponding high-quality, artefact-free reconstruction. We thus needed to acquire projection data from which both of these reconstructions can be computed. To obtain severe high cone angle artefacts, we needed to maximize the vertical cone-beam angle by moving the sample as close as possible to the source, and to choose an appropriate detector-to-object distance to maximize magnification while keeping the sample in the field of view at all times. Then, we varied the source height to collect projections from 3 circular orbits, cf. Figure~\ref{fig.trajectories} (the detector height needed to be adjusted accordingly in order to fit the entire sample in every projection). In the following section, we will see that while the reconstructions from each orbit alone exhibit different artefact patterns, combining the data from all orbits gives a high-quality reconstruction free of high cone angle artefacts.
Each walnut was embedded in a foam mount (cf. Figure~\ref{fig.trajectories}, bottom row). This foam is almost transparent to the X-ray beam used in our experiment. For each orbit, 1201 projections were taken during a continuous, full rotation of the sample. The first and last projections were taken at the same position, leading to an angular increment of $0.3^\circ$. The exposure time for each projection was $80~\mathrm{ms}$ and the acquired data was binned on the fly by 2-by-2 pixel windows, i.e. each raw projection was of size $768\times972$ pixels. Each binned detector pixel is sized $149.6\times 149.6~\mu\mathrm{m}^2$ for a total detection field of view of $114.89\times 145.41~\mathrm{mm}^2$. During the experiment, the source voltage and power were set to $40~\mathrm{kV}$ and $12~\mathrm{W}$, respectively. These values had been adjusted to ensure maximum contrast in the projection domain while avoiding detector saturation. Table~\ref{tab.scannerconfig} summarizes the acquisition parameters used.
Before every orbital scan, the source was turned off to record a projection of the detector offset count, the so-called \textit{dark-field} image. After switching the source on again, a projection was recorded without the sample in the field of view, the so-called \textit{flat-field} image showing the beam profile. A second flat-field was recorded after the orbital scan to correct for shadowing effects. Flat-field and dark-field images can be used to pre-process the raw photon count data for the image reconstruction as described in the next section. Examples of the projections collected for each sample are shown in Figure~\ref{fig.projex}.
\begin{table}
\centering
\begin{tabular}{ll}
\hline
Tube voltage & $40~\mathrm{kV}$ \\
Tube power & $12~\mathrm{W}$ \\
Exposure time & $80~\mathrm{ms}$ \\
Number of averages & 1\\
Hardware binning & $2\times 2$ pixels\\
Effective detector pixel size & $149.6~\mathrm{\mu m}$\\
Detector rows & 972\\
Detector columns & 768\\
Source to object distance & $66~\mathrm{mm}$\\
Source to detector distance & $190~\mathrm{mm}$ \\
Magnification & $3.016$\\
Number of projections per orbit & $1201$\\
Angular increment & $0.3^\circ$\\
\hline
\end{tabular}
\caption{\label{tab.scannerconfig}Summary of the acquisition parameters used.}
\end{table}
\begin{figure}
\centering
\includegraphics[height=10cm]{src_pos} \\
\vspace{0.3cm}
\includegraphics[height=3.9cm]{src_v1}
\includegraphics[height=3.9cm]{src_v2}
\includegraphics[height=3.9cm]{src_v3}
\caption{Scanning geometry and trajectories for each sample. Top row: Schematic view from the side. Three full circular orbits are recorded at 3 distinct source and detector heights. The 3 squares on the left denote the source positions. Bottom row: Photographs of actual realization.}
\label{fig.trajectories}
\end{figure}
\newcommand{0.27}{0.27}
\begin{figure}
\centering
\begin{subfigure}[b]{1\textwidth}
\begin{subfigure}[h]{0.05\textwidth}
\caption*{}
\end{subfigure}
\begin{subfigure}[h]{0.27\textwidth}
\subcaption*{high source position}
\end{subfigure}
\begin{subfigure}[h]{0.27\textwidth}
\subcaption*{mid. source position}
\end{subfigure}
\begin{subfigure}[h]{0.27\textwidth}
\subcaption*{low source position}
\end{subfigure}
\end{subfigure}
\begin{subfigure}[b]{1\textwidth}
\begin{subfigure}[c]{0.05\textwidth}
\caption*{\rotatebox{90}{flat-field}}
\end{subfigure}
\begin{subfigure}[c]{0.27\textwidth}
\includegraphics[angle=90,width=\textwidth]{flatv1}
\subcaption*{[5899,13342]}
\end{subfigure}
\begin{subfigure}[c]{0.27\textwidth}
\includegraphics[angle=90,width=\textwidth]{flatv2}
\subcaption*{[7983,13314]}
\end{subfigure}
\begin{subfigure}[c]{0.27\textwidth}
\includegraphics[angle=90,width=\textwidth]{flatv3}
\subcaption*{[6281,12970]}
\end{subfigure}
\end{subfigure}
\begin{subfigure}[b]{\textwidth}
\begin{subfigure}[c]{0.05\textwidth}
\caption*{\rotatebox{90}{raw projections}}
\end{subfigure}
\begin{subfigure}[c]{0.27\textwidth}
\includegraphics[angle=90,width=\textwidth]{proj388v1}
\subcaption*{[3975,12734]}
\end{subfigure}
\begin{subfigure}[c]{0.27\textwidth}
\includegraphics[angle=90,width=\textwidth]{proj388v2}
\subcaption*{[4918,12077]}
\end{subfigure}
\begin{subfigure}[c]{0.27\textwidth}
\includegraphics[angle=90,width=\textwidth]{proj388v3}
\subcaption*{[4544,12340]}
\end{subfigure}
\end{subfigure}
\begin{subfigure}[b]{\textwidth}
\begin{subfigure}[c]{0.05\textwidth}
\caption*{\rotatebox{90}{dark-field}}
\end{subfigure}
\begin{subfigure}[c]{0.27\textwidth}
\includegraphics[angle=90,width=\textwidth]{darkv1}
\subcaption*{[304,410]}
\end{subfigure}
\begin{subfigure}[c]{0.27\textwidth}
\includegraphics[angle=90,width=\textwidth]{darkv2}
\subcaption*{[330,460]}
\end{subfigure}
\begin{subfigure}[c]{0.27\textwidth}
\includegraphics[angle=90,width=\textwidth]{darkv3}
\subcaption*{[345,467]}
\end{subfigure}
\end{subfigure}
\begin{subfigure}[b]{\textwidth}
\begin{subfigure}[c]{0.05\textwidth}
\caption*{\rotatebox{90}{pre-processed projections}}
\end{subfigure}
\begin{subfigure}[c]{0.27\textwidth}
\includegraphics[angle=90,width=\textwidth]{projlogv1}
\subcaption*{[0,1]}
\end{subfigure}
\begin{subfigure}[c]{0.27\textwidth}
\includegraphics[angle=90,width=\textwidth]{projlogv2}
\subcaption*{[0,1]}
\end{subfigure}
\begin{subfigure}[c]{0.27\textwidth}
\includegraphics[angle=90,width=\textwidth]{projlogv3}
\subcaption*{[0,1]}
\end{subfigure}
\end{subfigure}
\caption{Examples of the collected projections. From left to right, the position of the source varies. The dynamic range is indicated below each image.}
\label{fig.projex}
\end{figure}
\subsection{Reconstructed Volumes}
Each projection image $P$ consists of raw photon counts per detector pixel that are distorted by offset counts (``dark currents'') and pixel-dependent sensitivities. Using the corresponding recorded dark-field image $D$ and flat-field image $F$, $P$ can be corrected and converted into a beam intensity loss image $I$ following the Beer-Lambert law as
\begin{equation}
I = - \log \left( \frac{P - D}{F - D} \right) \qquad .
\end{equation}
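As an illustration, this pre-processing step can be sketched in a few lines of NumPy; the clipping against non-positive values is our own safeguard against noisy pixels, not part of the published pipeline:

```python
import numpy as np

def beam_intensity_loss(P, D, F, eps=1e-8):
    """Flat/dark-field correction: convert raw photon counts P into a
    Beer-Lambert intensity loss image using the dark-field D and the
    flat-field F. The clipping guards against noisy pixels where P - D
    or F - D would otherwise be non-positive."""
    num = np.clip(P.astype(np.float64) - D, eps, None)
    den = np.clip(F.astype(np.float64) - D, eps, None)
    return -np.log(num / den)
```

A full beam ($P = F$) maps to zero intensity loss, while counts attenuated by a factor $e^{-1}$ map to an intensity loss of exactly one.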
For each sample and each of the three source positions, a reconstruction was computed using the FDK algorithm~\cite{feldkamp1984} implemented in the ASTRA toolbox~\cite{vanaarle2016}. Then, the data from all source positions was combined to compute a high-quality reconstruction. This was done by solving a non-negativity constrained least-squares problem using 50 iterations of accelerated gradient descent~\cite{chambolle2016}. The corresponding forward and backward projection operators were implemented using the CUDA kernels in the ASTRA toolbox. In both cases, we chose a volume of $501^3$ voxels of size $(100~\mathrm{\mu m})^3$. The computation of one FDK reconstruction with the data from one orbit took about $24~\mathrm{s}$ on an NVIDIA GeForce GTX 1070, while the iterative reconstruction of the complete data took about $56~\mathrm{min}$. An example of the reconstructed volumes is shown in Figure~\ref{fig.reconall}: In the FDK reconstructions from single-orbit data, the image regions with low beam incident angles are reconstructed well while strong artefacts can be seen in regions with high beam angle. They are caused by a combination of two factors: First, the circular orbit associated with a cone-shaped beam does not fulfill Tuy's condition~\cite{tuy1983}, resulting in missing data in the measurement domain located around the rotation axis. Second, the FDK algorithm approximates the incoming beam by a collection of tilted fan-beams for each row of the flat detector. In contrast, the iterative reconstruction from the combined data is both sharp and artefact-free and can therefore be regarded as a ground truth reconstruction.
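The combined reconstruction step can be illustrated on a generic least-squares problem; in this sketch a small dense matrix stands in for the ASTRA projection operators, and the accelerated scheme follows a standard Nesterov/FISTA-style recursion with a non-negativity projection. This is our illustrative reading of the approach, not the exact published implementation:

```python
import numpy as np

def nonneg_least_squares(A, b, iters=50):
    """Minimise 0.5 * ||A x - b||^2 subject to x >= 0 with accelerated
    (Nesterov-type) projected gradient descent. A stands in for the
    forward projection operator, b for the pre-processed data."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = np.maximum(y - grad / L, 0.0)      # gradient step + projection
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

For instance, with $A$ the identity, the solution is simply the data with its negative entries clipped to zero.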
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics{pos1.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[]{pos2.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[]{pos3.pdf}
\end{subfigure}
\\
\vspace{0.08cm}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[]{combined_positions.pdf}
\end{subfigure}
\caption{Vertical slice through reconstructed volumes from a single sample. Red dots indicate the source height for the circular orbit used for the reconstruction. Top row: FDK reconstruction from top, middle, and low source position. Yellow arrows point at the high cone angle artefacts. Bottom row: Iterative reconstruction from combined measurements.}
\label{fig.reconall}
\end{figure}
\subsection{Code Availability}
Python and MATLAB scripts for loading, pre-processing and reconstructing the projection data in the way described above are published on GitHub\footnote{\href{https://github.com/cicwi/WalnutReconstructionCodes}{https://github.com/cicwi/WalnutReconstructionCodes}}. They make use of the ASTRA toolbox, which is openly available on \url{www.astra-toolbox.com} or accessible as a conda package\footnote{use \texttt{conda install -c astra-toolbox/label/dev astra-toolbox} to install the development version}. ASTRA is currently only fully supported for Windows and Linux\footnote{Installing it on Mac OS is possible but very involved and version-dependent.}. To obtain a comparable scaling of the image intensities between FDK and iterative reconstructions, a development version of the ASTRA toolbox more recent than 1.9.0dev is required. For each dataset, a text file containing information about motor positions (source 3D position, detector position and detector orientation) is provided and used by the aforementioned Python/MATLAB scripts to set up the reconstruction geometry. All reference reconstructions provided have been computed with the Python scripts. Furthermore, while the scripts allow sub-sampling of the projections and choosing a different image resolution, the reference reconstructions were computed with all projections and within a volume of $501^3$ voxels of size $(100~\mathrm{\mu m})^3$ as mentioned above.
\section{Data Records}
The complete projection data for a single walnut and the corresponding reference reconstructions are shared as a single ZIP archive (ca. $6~\mathrm{GB}$ per file). The $42$ resulting ZIP files (named \verb$Walnut1.zip - Walnut42.zip$, ca. $254.2~\mathrm{GB}$ in total) were uploaded to zenodo\footnote{\href{https://zenodo.org}{https://zenodo.org}} and had to be split up into several bundles with separate DOIs: samples 1-8 \cite{dataCit1}, samples 9-16 \cite{dataCit2}, samples 17-24 \cite{dataCit3}, samples 25-32 \cite{dataCit4}, samples 33-37 \cite{dataCit5} and samples 38-42 \cite{dataCit6}. Note, however, that each ZIP file can be downloaded separately via zenodo's web interface. \\
The ZIP file for the $i^{\mathrm{th}}$ sample, \verb$Walnut<i>.zip$, contains a folder \verb$Walnut<i>/$ with the sub-folders \verb$Projections/$ and \verb$Reconstructions/$:
\begin{itemize}
\item \verb!Projections/tubeV<j>/! contains the measured projection data with the source at position \verb!j!, where \verb!j=1/2/3! corresponds to the high/middle/low source position (cf. Figure~\ref{fig.trajectories}). Each of these folders contains the files:
\begin{itemize}
\item \verb!di000000.tif! is a TIFF file containing the dark-field measurement (cf. Figure~\ref{fig.projex}).
\item \verb!io000000.tif! and \verb!io000001.tif! are TIFF files containing the flat-field measurements before and after the orbit was scanned (cf. Figure~\ref{fig.projex}).
\item \verb!scan_<k>.tif! is a TIFF file containing the projection measurement at angle \verb!k! (cf. Figure~\ref{fig.projex}).
\item \verb!scan_geom_original.geom! and \verb!scan_geom_corrected.geom! are text files describing the acquisition geometry of each angular projection. Their format and usage is explained in more detail in the following sections.
\item \verb!data settings XRE.txt! and \verb!scan settings.txt! are text files automatically generated by the FleX-ray scanning software containing scan settings such as motor positions, source power or camera exposure time. We included them for completeness.
\item \verb!script_executed.txt! is a text file automatically generated by the FleX-ray scanning software containing a copy of the script executed by the scanner. We included it for completeness.
\end{itemize}
\item \verb$Reconstructions$ contains the reference reconstruction as described above, stored as TIFF files each containing a single $x$-slice of the volume:
\begin{itemize}
\item \verb!fdk_pos<j>_<k>! contains the \verb!k!$^{th}$ $x$-slice of the FDK reconstruction computed from the projection data acquired at source position \verb!j! (cf. Figure~\ref{fig.reconall}).
\item \verb!full_AGD_50_<k>! contains the \verb!k!$^{th}$ $x$-slice of the ground truth reconstruction computed by $50$ iterations of accelerated gradient descent (cf. Figure~\ref{fig.reconall}).
\end{itemize}{}
\end{itemize}
\section{Technical Validation}
The FleX-ray scanner is subject to regular maintenance and calibration. Furthermore, a visual inspection of all projections for each sample was carried out to ensure that the collected data does not suffer from over-saturation and that the sample was always in the field of view. The reconstructed volumes were inspected to ensure that the correction of geometric distortions, such as in-plane rotation and tilt of the detector, was successful.
For the iterative reconstruction from the combined data (ground truth reconstruction), the registration of the scanning geometries from the single orbits had to be corrected manually due to mechanical inaccuracies in the motor positions reported by the scanner. For this, three volumes corresponding to the three orbits were reconstructed first and then manually co-registered using rigid transformations. The corresponding corrected geometry description text files that are used in the combined reconstruction are provided (\verb!scan_geom_corrected.geom!). Samples for which this procedure did not succeed were discarded. For completeness, the original geometry description text files as deduced from the reported motor positions are also provided (\verb!scan_geom_original.geom!).
\section{Usage Notes}
\subsection{Projection Data}
The projection data for each sample is shared as a collection of 16-bit unsigned integer TIFF files containing the raw photon counts per detector pixel. They can be interpreted and manipulated by most common image visualization software such as ImageJ~\cite{imagej} or scientific computing languages such as MATLAB or Python, e.g., through the matplotlib module for the latter. In order to be used by most tomographic reconstruction algorithms, they need to be pre-processed as described above and exemplified in the provided scripts. Each row of the geometry description files (\verb!scan_geom_*.geom!) describes the geometry of one of the acquired projections by 12 floating point numbers: source $x$, $y$ and $z$ position, detector center $x$, $y$ and $z$ position, the detector 3D basis vector from pixel $(0,0)$ to pixel $(1,0)$, and the detector 3D basis vector from pixel $(0,0)$ to pixel $(0,1)$. This parametrization is illustrated in Figure~\ref{fig.geom}.
\begin{figure}
\centering
\includegraphics[]{vector_geometry.pdf}
\caption{\label{fig.geom}Parametrization of the cone-beam geometry: Each projection is described by $(s_x, s_y, s_z, d_x, d_y, d_z, u_x, u_y, u_z, v_x, v_y, v_z)$}
\end{figure}
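As a hypothetical sketch of how such a row could be split into the four vectors $(s, d, u, v)$, assuming whitespace-separated floats (the actual field separator in the \verb!.geom! files should be checked against the provided scripts):

```python
import numpy as np

def parse_geom(path):
    """Read a scan_geom_*.geom file: one row per projection containing
    12 floats (source, detector centre, detector basis vectors u and v).
    Assumes whitespace-separated values; adjust if the files differ."""
    vecs = np.atleast_2d(np.loadtxt(path))
    assert vecs.shape[1] == 12, "expected 12 numbers per projection"
    return {
        "source":     vecs[:, 0:3],   # (s_x, s_y, s_z)
        "det_center": vecs[:, 3:6],   # (d_x, d_y, d_z)
        "det_u":      vecs[:, 6:9],   # pixel (0,0) -> (1,0) basis vector
        "det_v":      vecs[:, 9:12],  # pixel (0,0) -> (0,1) basis vector
    }
```

The resulting arrays can be passed, for example, to a vector-based cone-beam geometry of a reconstruction toolbox.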
\subsection{Reconstructed Volumes}
In principle, the four reconstructions described in the previous sections (cf. Figure~\ref{fig.reconall}) can be computed from the projection data with the scripts provided. Depending on the available computational resources this may, however, require a lot of computing time. For this reason, we share the reconstructions, too. They can also be used as a comparison to test novel reconstruction algorithms, or as an image collection for image analysis tasks. Each volume is released as a collection of 32 bit floating point TIFF files, where every single file is one axial slice through the volume as described above. As for the projection data, open source software is available for visualization and manipulation of such files.
\subsection{Further Usage}
The reconstruction scripts can easily be modified to generate different kinds of artefacts and to tackle different problems related to tomographic reconstruction. To create limited-angle or sparse-angle (low-dose) problems, one can simply load subsets of the projection data. To mimic a super-resolution experiment, the projection data can be artificially binned into larger pixels. In every case, the iterative reconstruction from the full data set can be used as a ground truth.
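For instance, a sparse-angle problem could be set up by keeping only every $n$-th projection; this minimal sketch uses the acquisition parameters stated above (1201 projections per orbit, with coinciding first and last frames):

```python
import numpy as np

N_PROJ = 1201                      # projections per orbit (first == last)
unique = np.arange(N_PROJ - 1)     # 1200 unique angles, 0.3 degrees apart
sparse = unique[::10]              # keep every 10th -> 3.0 degree spacing

# Sanity checks on the angular sampling.
assert len(sparse) == 120
assert np.isclose(360.0 / (N_PROJ - 1), 0.3)
```

The resulting index list would then select the corresponding \verb!scan_<k>.tif! files and geometry rows before reconstruction.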
\section*{Acknowledgements}
This work was supported by the Netherlands Organisation for Scientific Research (NWO 613.009.106, 639.073.506).
The authors would like to thank Alexander Kostenko for his help in using the FleX-ray Lab and sample preparations, and Nicola Vigan\`{o} for his help with the image registration.
\section*{Author Contributions}
HDS, FL, MvE and KJB conceptualized the study and designed the experiment.
HDS, GC and SBC set up the experiment and performed the data acquisition.
HDS performed the data processing, inspection and geometry correction.
HDS and FL wrote the reconstruction scripts and the main parts of the manuscript.
All authors contributed to the discussion and finalization of the manuscript and approved it.
\section*{Competing Financial Interests}
The authors declare no competing financial interests.
\section{Impossibility of Exact Shifting within Positive Fields}
\label{sec:shifting_lower_bound}
In this section, we present an example showing that, within a positive field,
we cannot shift positive requests down to obtain $\alpha$ requests in every
node, as we did in the case of negative requests
(cf.~\lref[Corollary]{cor:crucial_lemma_neg}). In our construction, the tree
$T$~consists of a root $r$ and
two distinct subtrees $T_1$ and $T_2$, each of size $s$ and containing $\ell$
leaves.
\balance
Suppose that, at the beginning, \textsc{TC}\xspace has the entire tree~$T$ in its cache and
the following ordered events happen (cf.~\lref[Figure]{fig:trbl_exmpl}).
\begin{enumerate}
\item \textsc{TC}\xspace evicts $T_1 \cup \{ r \}$ from the cache.
\item $(s+1) \cdot \alpha - \ell$ requests appear one by one at $r$. The number of
requests is too small to trigger a fetch of any subtree of $T_1 \cup \{ r \}$.
\item \textsc{TC}\xspace evicts $T_2$ from the cache.
\item $s \cdot \alpha$ requests appear one by one at the root of $T_1$. This
time, the number of requests is too small to trigger a fetch of any
subtree of $T$.
\item $\ell$ requests appear one by one at $r$. After the last one appears,
{\textsc{TC}\xspace} fetches the entire $T$ to the cache.
\end{enumerate}
The evictions happen because of some feasible sequence of negative requests that
is irrelevant from our perspective.
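As a quick sanity check of the request counts in this construction (our own informal reading, assuming a fetch of a subtree with $m$ nodes only becomes worthwhile once $m \cdot \alpha$ requests have accumulated):

```python
# Hypothetical parameters; the construction works for any alpha >= 1,
# s >= 1 and 0 < ell <= s.
alpha, s, ell = 5, 10, 4

stage2 = (s + 1) * alpha - ell    # requests at r while T_2 is still cached
stage5 = ell                      # final requests at r

# Before stage 5, the accumulated requests at r stay below the cost of
# fetching T_1 u {r}, which contains s + 1 nodes:
assert stage2 < (s + 1) * alpha
# After the last ell requests, the threshold is met exactly:
assert stage2 + stage5 == (s + 1) * alpha
```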
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth,keepaspectratio]{images/example}
\caption{A troublesome example of a positive field. Numbers in circles describe
the chronology of the events.}
\label{fig:trbl_exmpl}
\end{figure}
Now, observe that when requests appear at the root in the second stage of our
construction, $T_2$ is still in the cache (i.e., does not belong to the field
yet). Thus, all the requests, except for the last $\ell$ ones can be shifted
down only to nodes from $T_1$. Hence, for large $\alpha$ and~$s$, shifting can
deliver $\Omega(\alpha)$ requests only to half of the nodes.
\section{Introduction}
In the classic online paging problem, items of some universe are requested by
a~processing entity (e.g., blocks of RAM are requested by the processor). To
speed up the access, computers use a faster memory, called
\emph{cache}, capable of accommodating $k$ such items. Upon a~request to a
non-cached item, the algorithm has to fetch it into the cache, paying a fixed
cost, while a request to a cached item is free. If the cache is full, the
algorithm has to free some space by evicting an arbitrary subset of items from
the cache.
The paging problem is inherently online: the algorithm has to decide
what to evict from the cache without knowledge of future requests; its
cost is compared to the cost of an optimal \emph{offline} solution and the
worst-case ratio of these two amounts is called \emph{competitive ratio}. The first
analysis of this basic problem in an online model was given over three
decades ago by Sleator and Tarjan~\cite{competitive-analysis}. The problem was later
considered in a~variety of flavors. In particular, some papers considered a
\emph{bypassing model}~\cite{caching-rejection-penalties,paging-irani}, where
item fetching is optional: the requested item can be served without being in
the cache, for another fixed cost (usually being at most the cost of item
fetching).
In this paper, we introduce a natural extension of this fundamental problem, where
items have inter-de\-pen\-den\-cies. More precisely, we assume that the universe is
an arbitrary (not necessarily binary) rooted tree $T$ and the requested items
are its nodes. For any tree node $v$, $T(v) \subseteq T$ is a subtree rooted
at $v$ containing $v$ and all its descendants. We require the following
property: if a~node $v$ is in the cache, then all nodes of $T(v)$ are also
cached. In other words, we require that \emph{the cache is a~subforest of
$T$}, i.e., a union of disjoint subtrees of~$T$. We call this problem
\emph{online tree caching}.
Furthermore, we assume a bypassing model and distinguish between two types of
requests: a request can be either \emph{positive} or \emph{negative}. The
positive requests correspond to ``normal'' requests known from caching
problems: we pay~$1$ if the node is not cached; for a negative request, we pay
$1$ if the corresponding request is cached. After serving the request, we may
reorganize our cache arbitrarily, but the resulting cache has to still be a
subforest of $T$. We pay $\alpha$ for fetching or evicting any single node,
where $\alpha \geq 1$ is an integer and a~parameter of the problem. Our goal
is to minimize the overall cost of maintaining the cache and serving the
requests.
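The two cost components above (serving with bypassing, and per-node cache reorganization) can be sketched in a few lines of Python. The tree, node names, and $\alpha$ below are our own toy choices, purely illustrative:

```python
# Toy instance of the tree caching cost model. `children` encodes a
# hypothetical rooted tree with root 'r'; alpha is the per-node
# fetch/eviction cost.
children = {'r': ['a', 'b'], 'a': ['c'], 'b': [], 'c': []}
alpha = 2

def is_subforest(cache):
    # the cache invariant: if v is cached, its whole subtree T(v) is cached
    # (checking direct children suffices, by induction down the tree)
    return all(set(children[v]) <= cache for v in cache)

def serving_cost(cache, node, positive):
    # positive request: pay 1 iff the node is NOT cached (bypassing);
    # negative request: pay 1 iff the node IS cached
    return 1 if positive != (node in cache) else 0
```

For example, `{'a', 'c'}` is a legal cache (the subtree rooted at `a`), while `{'a'}` alone is not, since the descendant `c` is missing.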
One interesting application for our model arises in the context of modern IP
routers which need to store a rapidly increasing number of forwarding
rules~\cite{bgp-routeviews,steve-myth}. In \lref[Section]{sec:motivation}, we
give a glimpse of this application, discussing how tree caching algorithms can
be applied in existing systems to effectively reduce the memory requirements
on IP routers.
\subsection{Our Contributions and Paper Organization}
We initiate the study of a natural new caching-with-bypassing problem that
accounts for tree dependencies among items. The problem finds
immediate applications, e.g., in IP routing and software-defined networking
(see \lref[Section]{sec:motivation}).
In particular, we consider the online tree caching problem within the resource
augmentation paradigm: we assume that cache sizes of the online algorithm
($k_\textnormal{ONL}$) and the optimal offline algorithm ($k_\textnormal{OPT}$) may differ. We assume
$k_\textnormal{ONL} \geq k_\textnormal{OPT}$ and let $R = k_\textnormal{ONL}/(k_\textnormal{ONL}-k_\textnormal{OPT}+1)$.
In \lref[Section]{sec:algo}, we present an elegant deterministic online
algorithm~\textsc{TC}\xspace for this problem. While our algorithm is simple, its analysis
presented in \lref[Section]{sec:analysis} requires several non-trivial
insights into the problem. In particular, we rigorously prove that \textsc{TC}\xspace is
$O(h(T) \cdot R)$-competitive, where $h(T)$ is the height of tree~$T$. That
is, we show that there exists a constant~$\beta$, such that $\textsc{TC}\xspace(I) \leq
O(h(T) \cdot R) \cdot \textsc{Opt}\xspace(I) + \beta$ for any input $I$. Note that this
result is optimal up to the factor~$O(h(T))$: in
\lref[Appendix]{sec:lower-bound-on-the-problem}, we show that the lower
bound~$R$ for the paging problem~\cite{competitive-analysis} implies an
$\Omega(R)$ lower bound for our problem for any $\alpha \geq 1$. Finally, in
\lref[Section]{sec:implementing_counters}, we show that \textsc{TC}\xspace can be
implemented efficiently.
\subsection{Related Work on Caching}
Our formal model is a novel variant of competitive paging, a~classic online
problem. In the framework of the competitive analysis, the paging problem was
first analyzed by Sleator and Tarjan~\cite{competitive-analysis}, who showed
that algorithms \textsc{Least-Recently-Used}, \textsc{First-In-First-Out} and
\textsc{Flush-When-Full} are $k_\textnormal{ONL} / (k_\textnormal{ONL} - k_\textnormal{OPT} + 1)$-competitive
and no deterministic algorithm can beat this ratio. In the non-augmented case
when $k_\textnormal{ONL} = k_\textnormal{OPT} = k$, the competitive ratio is simply $k$.
The simple paging problem was later generalized to allow different fetching
costs (weighted paging)~\cite{double-coverage,young-paging-greedy-dual} and
additionally different item sizes (file caching)~\cite{young-paging-landlord},
with the same competitive ratio. Asymptotically the same results can be achieved
when bypassing is allowed (see \cite{caching-rejection-penalties,paging-irani}
and references therein). With randomization, the competitive ratio can be
reduced to $O(\log k)$ even for file caching~\cite{generalized-caching-optimal}.
The lower bound for randomized algorithms is $H_k =
\Theta(\log k)$~\cite{paging-mark} and is matched by known paging
algorithms~\cite{paging-optimal-easy,paging-optimal-difficult}.
To the best of our knowledge, the variant of caching, where fetching items to
the cache is not allowed unless some other items are cached (e.g., because of
tree dependencies) was
not considered previously in the framework of competitive analysis. Note that
there is a seemingly related problem called restricted
caching~\cite{restricted-caching} (there are also its variants called matroid
caching~\cite{matroid-caching} or companion caching~\cite{companion-caching}).
Despite naming similarities, the restricted caching model is completely
different from ours: there the restriction is that each item can be placed only in
a~restricted set of cache locations.
\section{Application: Minimizing Forwarding Tables in Routers}
\label{sec:motivation}
Dependencies among to-be-cached items arise in numerous settings and are a
natural refinement of many caching problems. To give a concrete example, one
important application for our tree-based dependency model arises in the context
of IP routers. In particular, the online tree caching problem we introduce in
this paper is motivated by router memory constraints in IP-based networks. The
material presented in this section serves for motivation, and is not necessary
for understanding the remainder of the paper.
Nowadays, routers have to store an~enormous number of forwarding rules: the
number of rules has doubled in the last six years~\cite{bgp-routeviews} and
the superlinear growth is likely to be sustained~\cite{steve-myth}. This
entails large costs for Internet Service Providers: fast router memory
(usually Ternary Content Addressable Memory (TCAM)) is expensive and
power-hungry~\cite{tcam-expensive}. Many routers currently operate at
(or beyond) the edge of their memory capacities. A~solution, which could delay
the need for expensive or impossible memory upgrades in routers, is to store
only a subset of rules in the actual router and store all rules on a~secondary
device (for example a commodity server with a large but slow
memory)~\cite{cacheflow,route-caching-flat,prefix-caching,fib-caching-non-overlapping,fibium-zipf}.
This solution is particularly attractive with the advent of Soft\-ware-Defined
Network (SDN) technology, which makes it possible to manage the expensive memory using a
software controller~\cite{cacheflow,fibium-zipf}. In particular, our
theoretical model can describe real-world architectures
like~\cite{cacheflow,fibium-zipf},
that is, our model formalizes the underlying operational problems of such
architectures. Our algorithm can hence be applied in this context to prolong
the lifetime of IP routers.
\paragraph{Setup, positive requests, fetches and evictions.}
The setup (see~\cite{fibium-zipf} for a more technical discussion) depicted in
\lref[Figure]{fig:motivation} consists of two entities: the actual router
(e.g., an OpenFlow switch) which caches only a~subset of all forwarding rules,
and the (SDN) controller, which keeps all rules in its less expensive and
slower memory. During runtime, packets arrive at the router, and if an
appropriate forwarding rule is found within the rules cached by the router,
then the packet is forwarded accordingly, and the associated cost is zero.
Otherwise, the packet has to be forwarded to the controller (where
an~appropriate forwarding rule exists); this indirection costs~$1$. Hence, the
rules correspond to cacheable items and accesses to rules are modeled by
positive requests to the corresponding items. At some chosen points in time,
the caching algorithm run at the controller may decide to remove or add rules
to the cache. Any such change entails a~fixed cost $\alpha$.\footnote{This
cost corresponds to the transmission of a message from the controller to the
router as well as the update of internal data structures of the router. Such
an update of proprietary and vendor-dependent structures can be quite
costly~\cite{tcam-expensive-updates}, but the empirical studies show it to be
independent of the rule being updated~\cite{fib-updates}.}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth,keepaspectratio]{images/router}
\caption{The router (\emph{right}) caches only a subset of all rules, and
rules that are not cached are answered by the controller (\emph{left}) that
keeps the whole tree of rules. Updates to the rules are passed by the
controller to the router.}
\label{fig:motivation}
\end{figure}
\paragraph{Tree dependencies.}
Note that the technical feasibility of this solution heavily depends on the
rule dependencies. In the most ubiquitous scenario, the rules are prefixes of
IP addresses (they are bit strings). Whenever a packet arrives, the router
follows a longest matching prefix (LMP) scheme: it searches for the rule that
is a~prefix of the destination IP of the packet and among matching rules it
chooses the longest one. In other words, if the prefixes corresponding to
rules are stored in the tree\footnote{We do not have to assume that they are
actually stored in a real tree; this tree is implicit in the LMP scheme.},
then the tree is traversed from the root downwards, and the last found rule is
used. This explains why we require the cached nodes to form a subforest:
leaving a less specific rule on the router while evicting a more specific one
(i.e., keeping a~tree node in cache while evicting its descendant) will result
in a~situation where packets will be forwarded according to the less specific
rule, and hence potentially exit through the wrong port. The LMP scheme also
ensures that the described approach is implementable: one could simply add
an~artificial rule at the tree root in the router (matching an empty prefix).
This ensures that when no actual matching rule is found in the router (in the
cache), the packet will be forwarded according to this artificial rule to the
controller that stores all the rules and can handle all packets appropriately.
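The misrouting danger described above is easy to reproduce in a toy LMP lookup. The prefixes, port names, and table contents below are invented for illustration:

```python
# Full rule table at the controller; '' is the artificial root rule that
# escalates unmatched packets to the controller.
rules = {'': 'controller', '10': 'port1', '101': 'port2'}

def lmp(addr, table):
    # longest-matching-prefix lookup over bit-string prefixes
    return table[max((p for p in table if addr.startswith(p)), key=len)]

# A cache violating the subforest rule: '10' kept while its more specific
# descendant '101' was evicted.
bad_cache = {'': 'controller', '10': 'port1'}
```

With the full table, a packet for `1010` exits via `port2`; with `bad_cache` it silently matches `10` and exits via `port1` instead of being escalated to the controller: exactly the failure the subforest invariant rules out.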
So far, the papers on IP rule caching avoided dependencies either by assuming
that rules do not overlap (the tree has a single level)~\cite{route-caching-flat}
or by preprocessing the tree, so that the rules become
non-overlapping~\cite{prefix-caching,fib-caching-non-overlapping}.
Unfortunately, this could lead to a large inflation of the routing table. A
notable exception is a recent solution called CacheFlow~\cite{cacheflow}. The
CacheFlow model supports dependencies even in the form of directed acyclic
graphs. However, CacheFlow was evaluated only experimentally, and no
worst-case guarantees were given on the overall cost of caching. Our work
provides theoretical foundations for respecting tree dependencies.
\paragraph{Negative requests.}
Additionally, a rule may need to be updated. For example, due to a~change
communicated by a dynamic routing protocol (e.g., BGP) the action defined by
a~rule has to be modified. In either case, we have to update the rules at the
controller: we assume that this cost is zero. (This cost is unavoidable for
any algorithm, so such an assumption makes our problem only more difficult.)
Furthermore, if the rule is also stored at the router, then we have to pay a~fixed
cost of $\alpha$ for updating the router (see the remark for the cost of
fetches and evictions). Such penalties can be easily simulated in our model:
we issue a~sequence of $\alpha$ negative requests to the updated node. It is
straightforward to show that the costs in these two models can differ by a
factor of at most $2$. For a~formal argument, see
\lref[Appendix]{sec:bisimulation}.
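A back-of-envelope check of the factor-2 claim, with illustrative numbers and our own simplified accounting (not the formal argument of the appendix):

```python
alpha = 4
# Direct model: updating a cached rule costs alpha outright.
direct = alpha
# Simulated model: alpha negative requests to the cached rule. An algorithm
# either serves all of them (paying alpha), or serves i of them and then
# evicts the rule (paying i plus alpha for the eviction).
simulated = [alpha] + [i + alpha for i in range(alpha)]
assert min(simulated) == direct        # never cheaper than the direct cost
assert max(simulated) <= 2 * direct    # never more than twice as expensive
```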
\paragraph{Implementability.}
Note that the whole input (fed to a tree caching algorithm) is created at the
controller: positive requests are caused by cache misses (which redirect
packet to the controller) and batches of $\alpha$ negative requests are caused
by updates sent to the dynamic routing algorithm run at the controller.
Therefore, the whole tree caching algorithm can be implemented in software
in the controller only. Furthermore, our algorithm is a simple counter-based
scheme, which can be implemented efficiently and also fine-tuned for speed,
see \lref[Section]{sec:implementing_counters}.
\paragraph{Other work on forwarding table minimization.}
Other approaches for minimizing the number of stored rules were mostly based
on \emph{rules compression (aggregation)}, where the set of rules was replaced
by another equivalent and smaller set. Optimal aggregation of a fixed routing
table can be achieved by dynamic
programming~\cite{ortc,fib-compression-two-dimensional}, but the main
challenge lies in balancing the achieved compression and the amount of changes
to the routing table in the presence of \emph{updates} to this table. While
many practical heuristics have been devised by the networking community for
this problem~\cite{mms,fib-compression-fifa,fib-compression-globecom10,fib-compression-infocom13,fib-sigcomm,fib-compression-smalta,fib-compression-infocom10},
worst-case analyses were presented only for some restricted
scenarios~\cite{fib-icdcs,fib-sirocco}. Combining rules compression and rules
caching is so far an unexplored area.
\section{Preliminaries}\label{sec:preliminaries}
We denote the height of $T$ by $h(T)$. For any node $v$, $T(v)$ denotes the
subtree of $T$ rooted at $v$ (containing~$v$ and all its descendants). A
\emph{tree cap} rooted at $v$ is ``an~upper part'' of $T(v)$, i.e., it
contains $v$ and if it contains node~$u$, then it also contains all nodes on
the path from $u$ to $v$. If $A \subseteq B$ are both tree caps rooted at $v$,
then we say that $A$ is a tree cap of $B$.
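The tree-cap condition amounts to closure under taking parents, which a short check makes concrete (the subtree and node names below are our own toy example):

```python
# parent pointers inside a hypothetical subtree T(v) rooted at 'v'
parent = {'v': None, 'a': 'v', 'b': 'v', 'c': 'a'}

def is_tree_cap(A, root='v'):
    # A must contain the root, and every other member's parent must be in A;
    # by induction this puts the whole path from any member up to the root in A
    return root in A and all(u == root or parent[u] in A for u in A)
```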
We assume discrete time slotted into rounds, with round $t \geq 1$
corresponding to time interval $(t-1,t)$. In round $t$, the algorithm is given
one (positive or negative) request to exactly one tree node and has to process
it, i.e., pay associated costs (if any). Right after round~$t$, at time $t$,
the algorithm may arbitrarily reorganize its cache, (i) ensuring that the
resulting cache is a subforest of $T$ (i.e., if the cache contains node $v$,
then it contains the entire~$T(v)$) and (ii)~preserving the cache capacity
constraint. An algorithm pays $\alpha$ for a~single node fetch or eviction. We
denote the contents of the cache at round $t$ by $C_t$. (As the cache changes
contents only between rounds, $C_t$ is well defined.) We assume that $\alpha$
is an even integer (this assumption may change costs at most by a constant
factor). We assume that the algorithm starts with the empty cache.
We call a non-empty set $X$ a \emph{valid positive changeset} for cache $C$ if
$X \cap C = \emptyset$ and $C \cup X$ is a subforest of~$T$, and a~\emph{valid
negative changeset} if $X \subseteq C$ and $C \setminus X$ is a subforest of
$T$. We call $X$ a~\emph{valid changeset} if it is either a valid positive or
a valid negative changeset. Note that the union of valid positive (negative) changesets is
also a valid positive (negative) changeset. We say that the algorithm applies
changeset~$X$, if it fetches all nodes from~$X$ (for a positive changeset) and
evicts all nodes from $X$ (for a negative one). Note that not all valid
changesets may be applied as the algorithm is also limited by its cache capacity
($k_\textnormal{ONL}$ for an online algorithm and $k_\textnormal{OPT}$ for the optimal offline one).
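The two validity conditions translate directly into code; the toy tree and names below are ours:

```python
children = {'r': ['a'], 'a': ['b'], 'b': []}

def is_subforest(S):
    # every cached node must have all its children cached
    return all(set(children[v]) <= S for v in S)

def valid_positive(X, C):
    # non-empty, disjoint from the cache, and fetching it keeps a subforest
    return bool(X) and not (X & C) and is_subforest(C | X)

def valid_negative(X, C):
    # non-empty, inside the cache, and evicting it keeps a subforest
    return bool(X) and X <= C and is_subforest(C - X)
```

For the cache `C = {'a', 'b'}`, fetching `{'r'}` and evicting `{'a'}` are both valid, but evicting `{'b'}` alone is not: it would leave `a` cached without its child.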
\section{Algorithm}\label{sec:algo}
The algorithm \textsc{Tree Caching} (\textsc{TC}\xspace) presented in the following is
a simple scheme that follows a \emph{rent-or-buy paradigm}: it fetches (or evicts)
a changeset $X$ if the cost associated with requests at $X$ reaches the cost of
such fetch or eviction.
More concretely, \textsc{TC}\xspace operates in multiple phases. The first phase starts at time $0$.
\textsc{TC}\xspace starts each phase with the empty cache and proceeds as follows. Within a
phase, every node keeps a counter, which is initially zero. If \textsc{TC}\xspace
pays~$1$ for serving the request at round~$t$, it increments the counter of
the requested node. Whenever a node
is fetched or evicted from the cache, its counter is reset to zero. Note that
this implies that the counter of $v$ is equal to the number of negative
(positive) requests to $v$ since its last fetching to the cache (eviction from
the cache). For a~set $A \subseteq T$, we denote the sum of all counters in
$A$ at time $t$ by $\textrm{cnt}_t(A)$. At time~$t$, \textsc{TC}\xspace verifies whether
there exists a valid changeset $X$, such that
\begin{itemize}
\item \emph{(saturation property)} $\textrm{cnt}_t(X) \geq |X| \cdot \alpha$ and
\item \emph{(maximality property)} $\textrm{cnt}_t(Y) < |Y| \cdot \alpha$ for any valid
changeset $Y \supsetneq X$.
\end{itemize}
In this case, the algorithm modifies its cache applying~$X$.
If, at time $t$, \textsc{TC}\xspace is supposed to fetch some set $X$, but by doing so it
would exceed the cache capacity $k_\textnormal{ONL}$, it evicts all nodes from the cache
instead, and starts a~new phase at time~$t$. Such a \emph{final eviction}
might not be present in the last phase, in which case we call it
\emph{unfinished}.
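The counter scheme can be prototyped with a brute-force search over changesets. This is a sketch under strong simplifications: the enumeration is exponential (toy trees only), and we stand in for the maximality condition by picking a largest saturated changeset, which suffices in this tiny setting but is not the paper's exact rule:

```python
from itertools import combinations

class TCSketch:
    def __init__(self, children, alpha, k):
        self.children, self.alpha, self.k = children, alpha, k
        self.cache, self.cost = set(), 0
        self.cnt = {v: 0 for v in children}

    def _subforest(self, S):
        return all(set(self.children[v]) <= S for v in S)

    def _saturated(self):
        # largest saturated valid changeset (simplified maximality)
        best = None
        for pool, sign in ((set(self.children) - self.cache, '+'),
                           (self.cache, '-')):
            for r in range(1, len(pool) + 1):
                for X in map(set, combinations(pool, r)):
                    new = self.cache | X if sign == '+' else self.cache - X
                    if self._subforest(new) and \
                       sum(self.cnt[u] for u in X) >= len(X) * self.alpha:
                        if best is None or len(X) > len(best[1]):
                            best = (sign, X)
        return best

    def request(self, v, positive=True):
        if positive != (v in self.cache):    # pay 1 on miss / cached negative
            self.cost += 1
            self.cnt[v] += 1
        hit = self._saturated()
        if hit:
            sign, X = hit
            if sign == '+' and len(self.cache) + len(X) > self.k:
                # final eviction: empty the cache and start a new phase
                self.cost += len(self.cache) * self.alpha
                self.cache = set()
                self.cnt = {u: 0 for u in self.cnt}
            else:
                self.cache = self.cache | X if sign == '+' else self.cache - X
                self.cost += len(X) * self.alpha
                for u in X:
                    self.cnt[u] = 0
```

For example, with $\alpha = 2$, two positive requests to a leaf `a` saturate `{a}` and trigger its fetch; two subsequent negative requests to `a` then trigger its eviction.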
In \lref[Lemma]{lem:no_over-requested_changesets} (below), we show that at any
time, all valid changesets satisfying both properties of \textsc{TC}\xspace are either all
positive or all negative. Furthermore, right after the algorithm applies a
changeset, no valid changeset satisfies the saturation property.
\section{Analysis of TC}
\label{sec:analysis}
Throughout the paper, we fix an input $I$, its partition into phases, and
analyze both \textsc{TC}\xspace and \textsc{Opt}\xspace on a~single fixed phase $P$. We denote the times at
which $P$ starts and ends by $\textrm{begin}(P)$ and $\textrm{end}(P)$, respectively, i.e., rounds in
$P$ are numbered from $\textrm{begin}(P)+1$ to $\textrm{end}(P)$. A proof of the following technical
lemma follows by induction and is presented in
\lref[Appendix]{sec:proof_of_lemma_1}.
\begin{lemma}
\label{lem:no_over-requested_changesets}
Fix any time $t > \textrm{begin}(P)$. For any valid changeset $X$ for $C_t$, it holds that
$\textrm{cnt}_t(X) \leq |X| \cdot \alpha$. If a~changeset $X$ is applied at time $t$,
the following properties hold:
\begin{enumerate}
\item $X$ contains the node requested at round $t$,
\label{lemit:1}
\item $\textrm{cnt}_t(X) = |X| \cdot \alpha$,
\label{lemit:2}
\item $\textrm{cnt}_t(Y) < |Y| \cdot \alpha$ for any valid changeset $Y$ for~$C_{t+1}$
(note that $C_{t+1}$ is the cache state right after application of $X$),
\label{lemit:3}
\item $X$ is a tree cap of a tree from $C_{t+1}$ if
$X$ is positive and it is a~tree cap of a tree from $C_t$ if $X$ is
negative.
\label{lemit:4}
\end{enumerate}
\end{lemma}
In the following, we assume that no positive requests are given to nodes
inside the cache and no negative ones to nodes outside of it. (This does not
change the behavior of \textsc{TC}\xspace and can only decrease the cost of \textsc{Opt}\xspace.)
For the sake of analysis, we assume that at time $\textrm{end}(P)$, \textsc{TC}\xspace actually
performs a cache fetch (exceeding the cache size limit) and then, at the same
time instant, empties the cache. This replacement only increases the cost of
\textsc{TC}\xspace. Let $k_P$ denote the number of nodes in the cache of $\textsc{TC}\xspace$ at $\textrm{end}(P)$.
In a finished phase, we measure it after the artificial fetch, but right
before the final eviction, and thus $k_P \geq k_\textnormal{ONL} + 1$; in an unfinished
phase $k_P \leq k_\textnormal{ONL}$.
The crucial part of our analysis that culminates in
\lref[Section]{sec:shifting} is the technique of shifting requests. Namely, we
modify the input sequence by shifting requests up or down the tree, so that
the resulting input sequence (i) is not harder for \textsc{Opt}\xspace and (ii) is more
structured: we may lower bound the cost of \textsc{Opt}\xspace on each node separately and
relate it to the cost of \textsc{TC}\xspace.
\subsection{Event Space and Fields}
\label{sec:event}
In our analysis, we look at a two-dimensional, discrete, spatial-temporal
space, called the \emph{event space}. The first dimension is indexed by tree
nodes, whose order is an~arbitrary extension of the partial order given by the
tree. That is, the parent of a node $v$ is always ``above''~$v$. The second
dimension is indexed by round numbers of phase~$P$. The space elements are
called \emph{slots}. Some slots are occupied by requests: a~request at node
$v$ given at round $t$ occupies slot $(v,t)$. From now on, we will identify
$P$ with a set of requests occupying some slots in the event space.
We partition slots of the whole event space into disjoint parts, called
\emph{fields}, and we show how this partition is related to the costs of \textsc{TC}\xspace
and \textsc{Opt}\xspace. For any node~$v$ and time $t$, $\textrm{last}_v(t)$ denotes the last time
strictly before~$t$, when node $v$ changed state from cached to non-cached or
vice versa; $\textrm{last}_v(t) = \textrm{begin}(P)$ if $v$ did not change its state before $t$ in
phase $P$. For a~changeset~$X_t$ applied by
\textsc{TC}\xspace at time $t$, we define the field $F^t$ as
\[
F^t = \left\{\ (v,r) : v \in X_t \, \wedge\, \textrm{last}_v(t)+1 \leq r \leq t\ \right\}.
\]
That is, field $F^t$ contains all the requests that eventually trigger the
application of $X_t$ at time $t$. We say that $F^t$ ends at $t$. We call field
$F^t$ \emph{positive} (\emph{negative}) if $X_t$ is a positive (negative)
changeset. An~example of a~partitioning into fields is given in
\lref[Figure]{fig:fields}. We define $\textrm{req}(F^t)$ as the number of requests
belonging to slots of~$F^t$ and let $\textrm{size}(F^t)$ be the number of involved
nodes (note that $\textrm{size}(F^t) = |X_t|$). The observation below follows
immediately by \lref[Lemma]{lem:no_over-requested_changesets}.
\begin{figure}[t]
\centering
\includegraphics[width=0.99\columnwidth,keepaspectratio]{images/fields_horizontal}
\caption{Partitioning of a single phase into fields for a line (a tree with
no branches). The thick line represents cache contents. Possible final eviction
at $\textrm{end}(P)$ is not depicted. $F^{t_1}$ is a~negative field and $F^{t_2}$ is a
positive one. In the particular depicted example, nodes are ordered from the
leaf (bottom) to the root (top of the picture). We emphasize that for a
general, branched tree, some notions (in particular fields) no longer have
nice geometric interpretations.}
\label{fig:fields}
\end{figure}
\begin{observation}
\label{obs:field_requests}
For any field $F$, $\textrm{req}(F) = \textrm{size}(F) \cdot \alpha$. All these requests are
positive (negative) if $F$ is positive (negative).
\end{observation}
Finally, we call the rest of the event space defined by phase $P$
\emph{open field} and denote it by $F^\infty$. The set of all fields except
$F^\infty$ is denoted by $\mathcal{F}$. Let $\textrm{size}(\mathcal{F}) = \sum_{F \in \mathcal{F}} \textrm{size}(F)$.
\begin{lemma}
\label{lem:alg_cost}
For any phase $P$ partitioned into a set of fields $\mathcal{F} \cup \{ F^\infty \}$,
it holds that $\textsc{TC}\xspace(P) \leq 2 \alpha \cdot \textrm{size}(\mathcal{F}) + \textrm{req}(F^\infty) + k_P
\cdot \alpha$.
\end{lemma}
\begin{proof}
By \lref[Observation]{obs:field_requests}, the cost associated with serving
the requests from all fields from $\mathcal{F}$ is $\sum_{F \in \mathcal{F}} \alpha
\cdot \textrm{size}(F) = \alpha \cdot \textrm{size}(\mathcal{F})$. The cost of the cache reorganization
at the fields' ends is exactly the same. The term $\textrm{req}(F^\infty)$ represents
the cost of serving the requests from $F^\infty$ and $k_P \cdot \alpha$
upper-bounds the cost of the final eviction (not present in an unfinished
phase).
\end{proof}
\subsection{Shifting Requests}\label{sec:shifting}
The actual challenge in the proof is to relate the structure of the fields to
the cost of {\textsc{Opt}\xspace}. The rationale behind our construction is based on the
following thought experiment. Assume that the phase is unfinished (for
example, when the cache is so large that the whole input corresponds to a
single phase). Recall that the number of requests in each field $F \in \mathcal{F}$ is
equal to $\textrm{size}(F) \cdot \alpha$. Assume that these requests are evenly
distributed among the nodes of $F$ (each node from $F$ receives $\alpha$
requests in the slots of $F$). Then, the history of any node $v$ is
alternating between periods spent in positive fields and periods spent in
negative fields. By our even distribution assumption, each such period
contains exactly $\alpha$ requests. Hence, for any two consecutive periods of
a~single node, \textsc{Opt}\xspace has to pay at least $\alpha$ (either $\alpha$ for positive
requests or $\alpha$ for negative ones, or $\alpha$ for changing the
cached/non-cached state of $v$). Essentially, this shows that $\textsc{Opt}\xspace$ has to
pay an amount that can be easily related to $\alpha \cdot
\textrm{size}(\mathcal{F})$.
Unfortunately, the requests may not be evenly distributed among the nodes. To
alleviate this problem, we will modify the requests in phase $P$, so that the
newly created phase $P'$ is not harder for $\textsc{Opt}\xspace$ and will ``almost'' have the
even distribution property. In this construction, the time frame of $P$ and
its fields are fixed.
\subsubsection{Legal Shifts}
We say that a request placed originally (in phase $P$) at slot $(v,t)$ is
\emph{legally shifted} if its new slot is $(m(v), t)$, where (i) for a
positive request, $m(v)$ is either equal to~$v$ or is one of its descendants
and (ii) for a negative request, $m(v)$ is either equal to $v$ or is one of
its ancestors. For any fixed sequence of fetches and evictions within phase
$P$, the associated cost may only decrease when these actions are replayed on
the modified requests.
\begin{observation}
\label{obs:pprim_easier_than_p}
If $P'$ is created from $P$ by legally shifting the requests, then $\textsc{Opt}\xspace(P')
\leq \textsc{Opt}\xspace(P)$.
\end{observation}
The main difficulty, however, lies in keeping the legally shifted requests within
the field they originally belonged to. For example, a negative request from
$F$ shifted at round $t$ from node~$u$ to its parent may fall out of $F$ as
the parent may still be outside the cache at round~$t$. In effect, a careless
shifting of requests may lead to a situation where, for a single node~$v$,
requests do not create interleaved periods of positive and negative requests,
and hence we cannot argue that $\textsc{Opt}\xspace(P')$ is sufficiently large.
In the following subsections, we show that it is possible to legally shift the
requests of any field $F \in \mathcal{F}$ (i.e., shift positive requests down and negative
requests up), so that they remain within $F$, and they will
be either exactly or approximately evenly distributed among nodes of $F$.
This will create $P'$ with appropriately large cost for \textsc{Opt}\xspace.
\subsubsection{Notation}
We start with some general definitions and remarks. For any field $F$ and set
of nodes~$A$, let $F \cap A = \{ (v,t) \in F : v \in A \}$. Analogously, if
$L$ is a set of rounds, then let $F \cap L = \{ (v,t) \in F : t \in L \}$. For
any field $F^t$ and time $\tau$, we define
\[
F^t_{\leq \tau} = F^t \cap \left\{ t' : t' \leq \tau \right\}.
\]
It is convenient to think that $F^t$ evolves with time and $F^t_{\leq \tau}$
is the snapshot of $F^t$ at time~$\tau$. Note that $F^t$ may have some nodes
not included in $F^t_{\leq \tau}$. These objects are depicted in
\lref[Figure]{fig:fields}.
We may extend the notions of $\textrm{req}$ and $\textrm{size}$ to arbitrary subsets of fields
in a natural way.
For any subset $S \subseteq F$, we call it \emph{over-requested} if
$\textrm{req}(S) > \textrm{size}(S) \cdot \alpha$.
\begin{lemma}
\label{lem:not_over-requested}
Fix any field $F^t$, the corresponding changeset $X_t$, and any time $\tau$.
\begin{enumerate}
\item If $F^t$ is negative, then for any tree cap $D$ of $X_t$, the set
$F^t_{\leq \tau} \cap D$ is not over-requested.
\item If $F^t$ is positive, then for any subtree $T' \subseteq T$, the set
$F^t_{\leq \tau} \cap T'$ is not over-requested.
\end{enumerate}
\end{lemma}
\begin{proof}
As the nodes from $F^t_{\leq \tau} \cap D$ form a valid changeset at time~$\tau$,
\lref[Lemma]{lem:no_over-requested_changesets} implies $\textrm{req}(F^t_{\leq
\tau} \cap D) = \textrm{cnt}_\tau(F^t_{\leq \tau} \cap D) \leq |F^t_{\leq \tau} \cap
D| \cdot \alpha$.
The proof of the second property is identical: As $F^t_{\leq \tau} \cap T'$ is
also a valid changeset at time $\tau$, by
\lref[Lemma]{lem:no_over-requested_changesets}, $\textrm{req}(F^t_{\leq \tau}
\cap T') = \textrm{cnt}_\tau(F^t_{\leq \tau} \cap T')
\leq |F^t_{\leq \tau} \cap T'| \cdot \alpha$.
\end{proof}
By \lref[Lemma]{lem:not_over-requested} applied at $\tau = t$ and
\lref[Observation]{obs:field_requests}, we deduce the following corollary.
\begin{corollary}
\label{cor:density}
Fix any field $F^t$, the corresponding changeset $X_t$ and any tree
cap $D$ of $X_t$.
\begin{enumerate}
\item If $F^t$ is positive, then $\textrm{req}(F^t \cap D) \geq \alpha \cdot |D|$.
\item If $F^t$ is negative, then $\textrm{req}(F^t \cap (X_t \setminus D)) \geq
\alpha \cdot |X_t \setminus D|$.
\end{enumerate}
\end{corollary}
Informally speaking, the corollary above states that the average amount of
requests in a positive field is \emph{at least as large at the top of the
field as at its bottom}. For a negative field this relation is reversed.
\subsubsection{Shifting Negative Requests Up}
\label{sec:negative_shifting}
Fix a valid negative changeset $X_t$ applied at time~$t$ and the
corresponding field~$F^t$. We call a~tree cap \mbox{$Y \subseteq X_t$} \emph{proper} if
\begin{enumerate}
\item $\textrm{req}(F^t \cap Y) = |Y| \cdot \alpha$ and
\item $F^t_{\leq \tau} \cap D$ is not over-requested for any tree cap $D \subseteq Y$ and any time
$\tau \leq t$.
\end{enumerate}
The first property of \lref[Lemma]{lem:not_over-requested} states that before
we shift the requests of $F^t$, the set $X_t$ is proper. We start with $Y =
X_t$, and proceed in a bottom-up fashion, inductively using the lemma below.
We take care of a~single node of $Y$ at a time and ensure that after the shift
the number of requests at this node is exactly $\alpha$ and the remaining part
of $Y$ remains proper.
\begin{lemma}
\label{lem:shift_up_and_stay_proper}
Given a negative field $F^t$, the corresponding changeset~$X_t$ and
a proper tree cap $Y \subseteq X_t$, it is possible to choose a leaf $v$
and legally shift some requests inside $Y$,
so that, as a result, $\textrm{req}(\{v\}) = \alpha$ and $Y \setminus \{v\}$ is proper.
\end{lemma}
\begin{proof}
As $\textrm{req}(F^t \cap Y) = |Y| \cdot \alpha$, \lref[Corollary]{cor:density}
implies that any leaf of $Y$ was requested at least $\alpha$ times
inside~$F^t$. We pick an arbitrary leaf $v$, and let $r \geq \alpha$ be the
number of requests to $v$ in $F^t$.
We look at all the requests to $v$ in $F^t$ ordered by their round. Let $s$ be
the round when the $(\alpha+1)$-th of them arrives. We will now show that at round
$s$, \textsc{TC}\xspace already has $p(v)$, the parent of $v$, in its cache. If it did not, $\{v\}$ would be a
tree cap of $F^t_{\leq s}$, and by the first property of
\lref[Lemma]{lem:not_over-requested}, it would contain at most $\alpha$
requests, which is a~contradiction. Hence, if we shift the chronologically
last $r - \alpha$ requests from $v$ to $p(v)$, these requests stay within
$F^t$.
It remains to show that $Y \setminus \{v\}$ is proper after such a shift. We
choose any tree cap $D \subseteq Y$ and any time \mbox{$\tau \leq t$}. If $D$
does not contain $p(v)$ or $\tau < s$, then the number of requests in
$F^t_{\leq \tau} \cap D$ was not changed by the shift, and hence $F^t_{\leq
\tau} \cap D$ is not over-requested. Otherwise, $D \cup \{v\}$ was a tree cap
in $Y$ and by the lemma assumption, $F^t_{\leq \tau} \cap (D \cup \{v\})$ was
not over-requested. As $F^t_{\leq \tau} \cap D$ has now exactly $\alpha$ less
requests than $F^t_{\leq \tau} \cap (D \cup \{v\})$ had, it is not
over-requested, either.
\end{proof}
\begin{corollary}
\label{cor:crucial_lemma_neg}
For any negative field $F^t$, it is possible to legally shift its requests up,
so that they remain within $F^t$ and after the modification each node is
requested exactly $\alpha$ times.
\end{corollary}
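The bottom-up procedure behind this corollary can be illustrated with a short sketch (our illustration under the lemma's preconditions, not code from the paper; the names \texttt{shift\_negative\_up}, \texttt{parent}, and \texttt{requests} are ours): leaves of the remaining tree cap are processed one at a time, each keeps its earliest $\alpha$ requests, and the chronologically last surplus requests are shifted to its parent.

```python
def shift_negative_up(parent, requests, alpha):
    """Bottom-up sketch of the corollary: repeatedly pick a leaf of the
    remaining tree cap, keep its earliest `alpha` requests there, and shift
    the chronologically last surplus requests to its parent.

    `parent` maps node -> parent (the cap's root maps to None); `requests`
    maps node -> list of request rounds (mutated in place).  Assumes the
    proper-ness preconditions: every leaf holds at least `alpha` requests
    and the total request count equals alpha times the number of nodes.
    Legality of the shifts (the parent being cached at those rounds) is
    argued in the proof and not modeled here."""
    remaining = set(parent)
    final = {}
    while remaining:
        # a leaf of the remaining cap: no remaining node points to it
        parents = {parent[v] for v in remaining}
        leaf = next(v for v in remaining if v not in parents)
        reqs = sorted(requests[leaf])
        final[leaf] = reqs[:alpha]          # exactly alpha requests stay
        surplus = reqs[alpha:]              # the last r - alpha move up
        if parent[leaf] is not None:
            requests[parent[leaf]] = sorted(requests[parent[leaf]] + surplus)
        remaining.remove(leaf)
    return final
```

On a cap whose total request count is exactly $\alpha$ times its size, every node ends up with exactly $\alpha$ requests, as the corollary states.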
\subsubsection{Shifting Positive Requests Down}
\label{sec:positive_shifting}
We will now focus on the problem of shifting the positive requests down in a
single positive field $F^t$, corresponding to a single fetch of \textsc{TC}\xspace at
time $t$. Our goal is to devise a shifting strategy that will result in at
least $\Omega(\textrm{size}(F^t)/h(T))$ nodes having $\alpha/2$ requests each. While
this result may be suboptimal, deriving a shifting strategy for a~positive
field that would have the same equal distribution guarantee as the one
provided by \lref[Corollary]{cor:crucial_lemma_neg} is not possible
(the details are presented in the full version of the paper).
First, we prove that from any node $v$ in the field, we can shift down a
constant fraction of its requests within the field, distributing them to
different nodes.
\begin{lemma}
\label{lem:downshift}
Let $F^t$ be a positive field and let $X_t$ be the corresponding changeset
fetched to the cache at time~$t$. Fix any node $v \in X_t$ that has been
requested at least $c \cdot (\alpha / 2)$ times in~$F^t$, where $c$ is an
integer. It is possible to shift down its requests to the nodes of $T(v) \cap
X_t$, so that these requests remain inside $F^t$ and $\lceil c / 2 \rceil$
nodes of $T(v)$ get $\alpha / 2$ requests each.
\end{lemma}
\begin{proof}
We order the nodes $u_1, u_2, \ldots u_{|T(v) \cap X_t|}$ of $T(v) \cap X_t$,
so that $\textrm{last}_{u_i}(t) \leq \textrm{last}_{u_{i+1}}(t)$ for all $i$. In case of a
tie, we place nodes that are closer to $v$ first. Note that this linear
ordering is an extension of the partial order defined by the tree: the parent
of a~node cannot be evicted later than the node itself (otherwise the cache
would cease to be a subforest of $T$). In particular, it holds that $u_1 = v$.
We number $c \cdot (\alpha / 2)$ requests to $v$ chronologically, starting
from $1$. For any $j \in \{1, \ldots, \lceil c/2 \rceil \}$ we look at round
$\tau_j$ with the $((j-1) \cdot \alpha + 1)$-th request to $v$. When this
request arrives, node $u_j$ is already present in the cache. Otherwise, we
would have at least \mbox{$j \cdot \alpha + 1$} requests in $F^t_{\leq
{\tau_j}} \cap \{u_1, \ldots, u_j\}$ (already in $F^t_{\leq {\tau_j}}
\cap \{u_1\}$ alone), which would make it over-requested, and thus contradict
the second property of \lref[Lemma]{lem:not_over-requested}. Hence, we may
take requests numbered from $(j-1) \cdot \alpha + 1$ to $(j-1) \cdot \alpha +
\alpha/2$, shift them down from $v$ to $u_j$, and after such modification
these requests are still inside $F^t$. Note that for $j = 1$ requests are not
really shifted, as $u_1$ is $v$ itself. We perform such shift for any $j \in
\{1, \ldots, \lceil c/2 \rceil \}$, which yields the lemma.
\end{proof}
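The index bookkeeping of this proof can be made concrete with a small sketch (an illustration under the lemma's assumptions; \texttt{downshift} and its argument names are hypothetical): requests numbered $(j-1)\alpha+1$ through $(j-1)\alpha+\alpha/2$ are reassigned to $u_j$ for every $j \in \{1, \ldots, \lceil c/2 \rceil\}$.

```python
def downshift(request_rounds, order, alpha):
    """Sketch of the down-shift of the lemma.  `request_rounds` lists the
    requests to v chronologically (length at least c * alpha/2 for integer
    c), and `order` lists u_1, u_2, ... sorted by eviction time, with
    u_1 == v.  For j = 1..ceil(c/2), the alpha/2 requests starting at
    request number (j-1)*alpha + 1 are reassigned to u_j; for j == 1 they
    simply stay at v.  Assumes alpha is even and `order` is long enough."""
    c = (2 * len(request_rounds)) // alpha      # number of alpha/2 groups
    half = alpha // 2
    assigned = {}
    for j in range(1, -(-c // 2) + 1):          # j = 1 .. ceil(c/2)
        start = (j - 1) * alpha                 # 0-based index of request (j-1)*alpha + 1
        assigned[order[j - 1]] = request_rounds[start:start + half]
    return assigned
```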
\begin{lemma}
\label{lem:crucial_lemma_pos}
For any positive field $F^t$, it is possible to legally shift its requests
down, so that they remain within $F^t$ and after the modification at least
$\textrm{size}(F^t)/(2 h(T))$ nodes in $F^t$ have at least $\alpha/2$ requests each.
\end{lemma}
\begin{proof}
Let $X_t$ be the changeset corresponding to field $F^t$, which is fetched to the cache
at time~$t$. By \lref[Observation]{obs:field_requests}, $\textrm{req}(F^t) = |X_t|
\cdot \alpha$. We gather the requests at every node into groups of $\alpha/2$
consecutive requests. In every node at most $\alpha/2$ requests remain not
grouped. Let $\overline{\textrm{req}}(X)$ denote the number of grouped requests in the
set $X$. Clearly, $\overline{\textrm{req}}(F^t) \geq |X_t| \cdot \alpha / 2$, i.e.,
there are at least $|X_t|$ groups of requests in set $X_t$.
Let $X_t = X_t^1 \sqcup X_t^2 \sqcup \dots \sqcup X_t^{h(T)}$ be a partition
of the nodes of the tree $X_t$ into layers according to their distance to the
root. By the pigeonhole principle, there is a layer $X_t^i$ containing at
least $\lceil |X_t| / h(T) \rceil$ groups of requests (each group has
$\alpha/2$ requests).
Nodes of $X_t^i$ are independent, i.e., for $u, v \in X_t^i$ the trees $T(u)$
and $T(v)$ are disjoint. Therefore, we may use the shifting strategy described
in \lref[Lemma]{lem:downshift} for each node of $X_t^i$ separately. After such
modification, at least $\lceil |X_t| / (2 h(T)) \rceil \geq \textrm{size}(F^t) / (2
h(T))$ nodes have at least $\alpha / 2$ requests each.
\end{proof}
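The grouping-and-pigeonhole step of this proof can be sketched as follows (assuming, for simplicity, that per-node request counts and depths are given as dictionaries; the helper name \texttt{densest\_layer} is ours):

```python
def densest_layer(depth, req_count, alpha, h):
    """Bundle the requests at each node into full groups of alpha/2 and
    return the depth layer holding the most groups, together with that
    count.  By the pigeonhole principle this layer holds at least a 1/h
    fraction of all groups.  `depth` maps node -> distance from the root
    (0..h), `req_count` maps node -> number of requests."""
    per_layer = [0] * (h + 1)
    for v, depth_v in depth.items():
        per_layer[depth_v] += req_count[v] // (alpha // 2)   # full groups at v
    best = max(range(h + 1), key=lambda i: per_layer[i])
    return best, per_layer[best]
```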
\subsubsection{Using Request Shifting for Bounding OPT}
\label{sec:lower-bound}
Finally, we may use our request shifting to relate $\textrm{size}(\mathcal{F}) =
\sum_{F \in \mathcal{F}} \textrm{size}(F)$ to the cost of $\textsc{Opt}\xspace$ in a single phase $P$.
Recall that $k_P$ denotes the size of \textsc{TC}\xspace's cache at the end of $P$. We
assume that {\textsc{Opt}\xspace} may start the phase with an arbitrary state of the cache.
\begin{lemma}
\label{lem:leftovers}
For any phase $P$, $\textsc{Opt}\xspace(P) \geq (\textrm{size}(\mathcal{F}) / (4 h(T)) - k_P)
\cdot \alpha/2$.
\end{lemma}
\begin{proof}
We transform $P$ using legal shifts that are described in
\lref[Section]{sec:negative_shifting} and
\lref[Section]{sec:positive_shifting}. That is, we create a~corresponding
phase $P'$ that satisfies both
\lref[Corollary]{cor:crucial_lemma_neg} and
\lref[Lemma]{lem:crucial_lemma_pos}.
By \lref[Observation]{obs:pprim_easier_than_p}, it is sufficient to show that
$\textsc{Opt}\xspace(P') \geq (\textrm{size}(\mathcal{F}) / (4 h(T)) - k_P) \cdot \alpha/2$.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth,keepaspectratio]{images/leftover}
\caption{Partitioning of the phase into interleaving \textnormal{\textsc{in}}\xspace and \textnormal{\textsc{out}}\xspace periods
for node $v$. The thick line represents cache contents. The \emph{leftover}
\textnormal{\textsc{out}}\xspace period (the last one) is present for node $v$ as it has finished phase
$P$ inside \textsc{TC}\xspace's cache. The periods can be followed by requests contained in
$F^\infty$.}
\label{fig:leftover}
\end{figure}
We focus on a single node $v$. We cut its history into interleaved periods:
\textnormal{\textsc{out}}\xspace \emph{periods}, when $v$ is outside the cache and receives positive
requests, and \textnormal{\textsc{in}}\xspace \emph{periods} when \textsc{TC}\xspace keeps $v$ in the cache and $v$
receives negative requests. A final (possibly empty) part corresponding to the
time when $v$ is in the $F^\infty$ field is not accounted in \textnormal{\textsc{out}}\xspace or \textnormal{\textsc{in}}\xspace
periods, i.e., each \textnormal{\textsc{in}}\xspace or \textnormal{\textsc{out}}\xspace period corresponds to some field $F \in \mathcal{F}$.
Let $p^\textnormal{\textsc{in}}\xspace$ and $p^\textnormal{\textsc{out}}\xspace$ denote the total number of \textnormal{\textsc{in}}\xspace and \textnormal{\textsc{out}}\xspace periods
(respectively) for all nodes during the phase. An~example is given
in~\lref[Figure]{fig:leftover}.
Recall that \textsc{TC}\xspace starts each phase with an empty cache, and hence each node
starts with an \textnormal{\textsc{out}}\xspace period. For $k_P$ nodes that are in {\textsc{TC}\xspace}'s cache at the
end of the phase (and only for them) their history ends with an \textnormal{\textsc{out}}\xspace period
not followed by an \textnormal{\textsc{in}}\xspace period. We call them \emph{leftover periods}. Thus,
$p^\textnormal{\textsc{out}}\xspace = p^\textnormal{\textsc{in}}\xspace + k_P$. The total number of periods ($p^\textnormal{\textsc{in}}\xspace + p^\textnormal{\textsc{out}}\xspace$) is
equal to the total size of all \emph{fields}, $\textrm{size}(\mathcal{F})$, and thus $p^\textnormal{\textsc{out}}\xspace
\geq \textrm{size}(\mathcal{F}) / 2$.
We call a period \emph{full} if it has at least $\alpha/2$ requests. The
shifting strategies described in the previous section ensure that all
\textnormal{\textsc{in}}\xspace periods are full and at least $1/(2 h(T))$ of all \textnormal{\textsc{out}}\xspace periods are full.
Thus, there are at least $p^\textnormal{\textsc{out}}\xspace/(2 h(T)) - k_P$ full non-leftover \textnormal{\textsc{out}}\xspace
periods; each of them together with the following \textnormal{\textsc{in}}\xspace period constitutes a
\emph{full \textnormal{\textsc{out}}\xspace-\textnormal{\textsc{in}}\xspace pair}.
\textsc{Opt}\xspace has to pay at least $\alpha/2$ for the node in the course of the history
described by a~full \textnormal{\textsc{out}}\xspace-\textnormal{\textsc{in}}\xspace pair: it pays $\alpha$ either for changing the
cached/non-cached state of a node, or $\alpha/2$ for all positive requests or
$\alpha/2$ for all negative ones. Thus, $\textsc{Opt}\xspace(P') \geq ( p^\textnormal{\textsc{out}}\xspace / (2 h(T)) -
k_P ) \cdot \alpha/2 \geq ( \textrm{size}(\mathcal{F}) / (4 h(T)) - k_P ) \cdot \alpha/2$.
\end{proof}
\subsection{Competitive Ratio}
\label{sec:comp_ratio}
To relate the cost of \textsc{Opt}\xspace to \textsc{TC}\xspace in a single phase $P$, we still need to
upper-bound $\textrm{req} (F^\infty)$ and relate $k_P \cdot \alpha$ to the cost of
$\textsc{Opt}\xspace$ (i.e., compare the bounds on \textsc{TC}\xspace and \textsc{Opt}\xspace provided by
\lref[Lemma]{lem:alg_cost} and \lref[Lemma]{lem:leftovers}, respectively).
For the next two lemmas, we define $V_\textnormal{OPT}$ as the set of all nodes that were
in \textsc{Opt}\xspace's cache at some time of~$P$ and let $V_\textnormal{OPT}^\textrm{c} = T \setminus V_\textnormal{OPT}$. Note
that $V_\textnormal{OPT}$ is a union of subforests (nodes present in \textsc{Opt}\xspace's cache at
consecutive times), and hence a subforest itself.
\begin{lemma}
\label{lem:f_infty}
For any phase $P$, it holds that $\textrm{req} (F^\infty) \leq 2 \cdot k_\textnormal{ONL} \cdot
\alpha + 2 \cdot \textsc{Opt}\xspace(P)$.
\end{lemma}
\begin{proof}
We assume first that $P$ is a finished phase. Then, $P$ ends with an
artificial fetch of $X_{\textrm{end}(P)}$ at time $\textrm{end}(P)$ (followed by the final eviction).
We split $F^\infty$ into two disjoint parts (see \lref[Figure]{fig:fields}):
\begin{align*}
F^\infty_- = &\; \{(v, t): v \in C_{\textrm{end}(P)}, t \geq \textrm{last}_v(\textrm{end}(P))\}, \\
F^\infty_+ = &\; \{(v, t): v \notin C_{\textrm{end}(P)} \sqcup X_{\textrm{end}(P)}, \,
t \geq \textrm{last}_v(\textrm{end}(P))\}.
\end{align*}
Note that $F^\infty_-$ contains only negative requests and $F^\infty_+$ only
positive ones. As $\textrm{req}(F^\infty) = \textrm{req}(F^\infty_-) + \textrm{req} (F^\infty_+ \cap
V_\textnormal{OPT}^\textrm{c}) + \textrm{req} (F^\infty_+ \cap V_\textnormal{OPT})$, we estimate each of these summands
separately.
\begin{itemize}
\item
Nodes from $F^\infty_-$ are in the cache $C_{\textrm{end}(P)}$ and were not
evicted from the cache. Thus, $\textrm{req}(F^{\infty}_-) \leq |C_{\textrm{end}(P)}| \cdot \alpha
\leq k_\textnormal{ONL} \cdot \alpha$.
\item
All the requests from $V_\textnormal{OPT}^\textrm{c}$ are paid by \textsc{Opt}\xspace, and hence
$\textrm{req}(F^\infty_+ \cap V_\textnormal{OPT}^\textrm{c}) \leq \textrm{req}(V_\textnormal{OPT}^\textrm{c}) \leq \textsc{Opt}\xspace(P)$.
\item
$F^\infty_+$ is a valid changeset for cache $C_{\textrm{end}(P)} \sqcup X_{\textrm{end}(P)}$.
As $V_\textnormal{OPT}$ is a subforest of $T$, $F^\infty_+ \cap V_\textnormal{OPT}$ is also a valid
changeset for the cache $C_{\textrm{end}(P)} \sqcup X_{\textrm{end}(P)}$. Therefore, $\textrm{req}
(F^\infty_+ \cap V_\textnormal{OPT}) \leq \textrm{size}(F^\infty_+ \cap V_\textnormal{OPT}) \cdot \alpha$, as
otherwise the set fetched at time $\textrm{end}(P)$ would not be maximal. (\textsc{TC}\xspace could
then fetch $X_{\textrm{end}(P)} \sqcup (F^\infty_+ \cap V_\textnormal{OPT})$ instead of $X_{\textrm{end}(P)}$.)
Thus, $\textrm{req} (F^\infty_+ \cap V_\textnormal{OPT}) \leq |V_\textnormal{OPT}| \cdot \alpha = k_\textnormal{OPT} \cdot
\alpha + (|V_\textnormal{OPT}| - k_\textnormal{OPT})
\cdot \alpha \leq k_\textnormal{ONL} \cdot \alpha + \textsc{Opt}\xspace(P)$.
The last inequality follows as --- independently of the initial state --- \textsc{Opt}\xspace
needs to fetch at least $|V_\textnormal{OPT}| - k_\textnormal{OPT}$ nodes to the cache during $P$.
\end{itemize}
Hence, in total, $\textrm{req} (F^\infty) \leq 2 \cdot k_\textnormal{ONL} \cdot
\alpha + 2 \cdot \textsc{Opt}\xspace(P)$ for a finished phase $P$.
We note that if there was no cache change at $\textrm{end}(P)$, the analysis above would
hold with $X_{\textrm{end}(P)} = \emptyset$ with virtually no change. Therefore, for an
unfinished phase $P$ ending with a fetch or ending without cache change at
$\textrm{end}(P)$, the bound on $\textrm{req}(F^\infty)$ still holds. However, if an unfinished
phase~$P$ ends with an eviction, then we look at the last eviction-free
time $\tau$ of~$P$. We now observe the evolution of field
$F^\infty$ from time~$\tau$ till $\textrm{end}(P)$. At time $\tau$, $\textrm{req}(F^\infty) \leq
2 \cdot k_\textnormal{ONL} \cdot \alpha + 2 \cdot \textsc{Opt}\xspace(P)$. Furthermore, in subsequent
times, it may only decrease: at any round $F^\infty$ gets an additional
request, but on eviction $\textrm{req}(F^\infty)$ decreases by $\alpha$ times
the number of evicted nodes (i.e., at least by $\alpha \geq 1$). Hence, the
value of $\textrm{req}(F^\infty)$ at $\textrm{end}(P)$ is also at most $2 \cdot k_\textnormal{ONL} \cdot
\alpha + 2 \cdot \textsc{Opt}\xspace(P)$.
\end{proof}
By combining \lref[Lemma]{lem:alg_cost}, \lref[Lemma]{lem:leftovers} and
\lref[Lemma]{lem:f_infty}, we immediately obtain the following corollary
(holding for both finished and unfinished phases).
\begin{corollary}
\label{cor:any_phase_bound}
For any phase $P$, it holds that
$\textsc{TC}\xspace(P) \leq O(h(T)) \cdot \textsc{Opt}\xspace(P) + O(h(T) \cdot (k_P + k_\textnormal{ONL}) \cdot \alpha)$.
\end{corollary}
Using the corollary above, it remains to bound the value of~$k_P$. This is
easy for an unfinished phase, as $k_P \leq k_\textnormal{ONL}$ there. For a~finished phase,
we provide another bound.
\begin{lemma}
\label{lem:opt_bound2}
For any finished phase $P$, it holds that
$k_P \cdot \alpha \leq \textsc{Opt}\xspace(P) \cdot (k_\textnormal{ONL} + 1) / (k_\textnormal{ONL} + 1 - k_\textnormal{OPT})$.
\end{lemma}
\begin{proof}
First, we compute the number of positive requests in $V_\textnormal{OPT}^\textrm{c}$. Let $X_{t_1},
X_{t_2}, \ldots, X_{t_s}$ be all positive changesets applied by \textsc{TC}\xspace in~$P$.
For any~$t$, let $X'_t = X_t \setminus V_\textnormal{OPT}$. As $X_t$ is some tree cap and
$V_\textnormal{OPT}$ is a~subforest of $T$, $X'_t$ is a~tree cap of $X_t$. By
\lref[Corollary]{cor:density}, the number of requests to nodes of $X'_t$ in
field $F^t$ is at least $|X'_t| \cdot \alpha$. These requests for different
changesets $X_t$ are disjoint and they are all outside of $V_\textnormal{OPT}$. Hence the
total number of positive requests outside of $V_\textnormal{OPT}$ is at least $\sum_{i=1}^s
|X'_{t_i}| \cdot \alpha$, where $\sum_{i=1}^s |X'_{t_i}| \geq |\bigcup_{i=1}^s
X'_{t_i}| = |(\bigcup_{i=1}^s X_{t_i}) \setminus V_\textnormal{OPT}| \geq |\bigcup_{i=1}^s
X_{t_i}| - |V_\textnormal{OPT}| \geq k_P - |V_\textnormal{OPT}|$.
Now $\textsc{Opt}\xspace(P)$ can be split into the cost associated with nodes from $V_\textnormal{OPT}$
and $V_\textnormal{OPT}^\textrm{c}$, respectively. For the former part,
\textsc{Opt}\xspace has to pay at least $(|V_\textnormal{OPT}| - k_\textnormal{OPT}) \cdot \alpha$ for the fetches
alone. For the latter part, it has to pay $1$ for each of at least $(k_P -
|V_\textnormal{OPT}|) \cdot \alpha$ positive requests outside of $V_\textnormal{OPT}$. Hence, $\textsc{Opt}\xspace(P)
\geq (|V_\textnormal{OPT}| - k_\textnormal{OPT}) \cdot \alpha + (k_P - |V_\textnormal{OPT}|) \cdot \alpha = (k_P -
k_\textnormal{OPT}) \cdot \alpha$. Then, $k_P \cdot \alpha \leq k_P \cdot \textsc{Opt}\xspace(P) / (k_P -
k_\textnormal{OPT})$. As the phase is finished, $k_P \geq k_\textnormal{ONL} + 1$, and thus $k_P \cdot
\alpha \leq (k_\textnormal{ONL} + 1) \cdot \textsc{Opt}\xspace(P) / (k_\textnormal{ONL} + 1 - k_\textnormal{OPT})$.
\end{proof}
\begin{theorem}
The algorithm \textsc{TC}\xspace is $O(h(T) \cdot k_\textnormal{ONL}/(k_\textnormal{ONL}-k_\textnormal{OPT}+1))$-competitive.
\end{theorem}
\begin{proof}
Let $R = h(T) \cdot k_\textnormal{ONL}/(k_\textnormal{ONL}-k_\textnormal{OPT}+1)$. We split an input~$I$ into a
sequence of finished phases followed by a single unfinished phase (which may
not be present). For a~finished phase $P$, we have $k_P > k_\textnormal{ONL}$, and hence
\lref[Corollary]{cor:any_phase_bound} and \lref[Lemma]{lem:opt_bound2}
imply that $\textsc{TC}\xspace(P) \leq O(R) \cdot \textsc{Opt}\xspace(P)$. For an unfinished phase $k_P
\leq k_\textnormal{ONL}$, and therefore, by \lref[Corollary]{cor:any_phase_bound}, $\textsc{TC}\xspace(P)
\leq O(h(T)) \cdot \textsc{Opt}\xspace(P) + O(h(T) \cdot k_\textnormal{ONL} \cdot \alpha)$. Summing over
all phases of $I$ yields $\textsc{TC}\xspace(I) \leq O(R) \cdot \textsc{Opt}\xspace(I) + O(h(T) \cdot k_\textnormal{ONL}
\cdot \alpha)$.
\end{proof}
\section{Implementation of TC}\label{sec:implementing_counters}
Recall that at each time $t$, \textsc{TC}\xspace verifies the existence of a valid changeset
that satisfies saturation and maximality properties (see the definition of
\textsc{TC}\xspace in \lref[Section]{sec:algo}). Here, we show that this operation can be
performed efficiently. In particular, in the following two subsections, we
will prove the following theorem.
\begin{theorem}
\textsc{TC}\xspace can be implemented using $O(|T|)$ additional memory, so that making a
decision at time~$t$ takes $O(h(T) + \max \{ h(T), \textrm{deg}(T) \} \cdot |X_t|)$ operations,
where $\textrm{deg}(T)$ is the maximum node degree in $T$ and
$X_t$ is the changeset applied at time $t$ ($|X_t| = 0$ if no changeset is
applied).
\end{theorem}
Let $v_t$ be the node requested at round $t$. Note that we may restrict our
attention to requests that entail a~cost for \textsc{TC}\xspace, as otherwise its counters
remain unchanged and certainly \textsc{TC}\xspace does not change cache contents. We use
\lref[Lemma]{lem:no_over-requested_changesets} to restrict possible candidates
for changesets that can be applied at time $t$. First, we note that if a~node
$v_t$ requested at round $t$ is outside the cache, then, at time~$t$, \textsc{TC}\xspace may
only fetch some changeset, and otherwise it may only evict some changeset.
Therefore, we may construct two separate schemes, one governing fetches and
one for evictions.
In \lref[Section]{sec:implementing_positive_counters}, using
\lref[Lemma]{lem:no_over-requested_changesets}, we show that after processing
a~positive request, \textsc{TC}\xspace needs to verify at most $h(T)$ possible positive changesets,
each in constant time, using an auxiliary data
structure. The cost of updating this structure at time $t$ is
$O(h(T) + h(T) \cdot |X_t|)$.
The situation for negative changesets is more complex as even after applying
\lref[Lemma]{lem:no_over-requested_changesets} there are still exponentially
many valid negative changesets to consider. In
\lref[Section]{sec:implementing_negative_counters}, we construct an~auxiliary
data structure that returns a viable candidate in time $O(h(T) + \textrm{deg}(T)
\cdot |X_t|)$. The update of this structure at time $t$ can be also done in
$O(h(T) + \textrm{deg}(T) \cdot |X_t|)$ operations.
\subsection{Positive Requests and Fetches}
\label{sec:implementing_positive_counters}
At any time $t$ and for any non-cached node $u$, we may define $P_t(u)$ as a
tree cap rooted at $u$ containing all non-cached nodes from $T(u)$. During an
execution of \textsc{TC}\xspace, we maintain two values for each non-cached node~$u$:
$\textrm{cnt}_t(P_t(u))$ and $|P_t(u)|$. When a counter at node~$v_t$ is incremented, we
update $\textrm{cnt}_t(P_t(u))$ for each ancestor~$u$ of~$v_t$ (at most $h(T)$ updated
values). Furthermore, if a node~$v$ changes its state from cached to
non-cached (or vice versa), we update the value of $|P_t(u)|$ for any ancestor $u$
of $v$ (at most $h(T)$ updates per each node that changes the
state). Therefore, the total cost of updating these structures at time $t$ is
at most $O(h(T) + h(T) \cdot |X_t|)$.
By \lref[Lemma]{lem:no_over-requested_changesets}, a positive valid changeset
fetched at time $t$ has to contain $v_t$ and is a single tree cap. Such a~tree
cap has to be equal to $P_t(u)$ for $u$ being an ancestor of $v_t$.
Hence, we may iterate over all
ancestors $u$ of $v_t$, starting from the tree root and ending at $v_t$, and
we stop at the first node~$u$, for which $P_t(u)$ is saturated (i.e.,
$\textrm{cnt}_t(P_t(u)) \geq |P_t(u)| \cdot \alpha$). If such a $u$ is found, the
corresponding set $P_t(u)$ satisfies also the maximality condition (cf.~the
definition of \textsc{TC}\xspace) as all valid changesets that are supersets of $P_t(u)$
were already verified to be non-saturated. Therefore, in such a case, \textsc{TC}\xspace
fetches~$P_t(u)$. Otherwise, if no saturated changeset is found, \textsc{TC}\xspace does
nothing. Checking all ancestors of $v_t$ can be performed in time $O(h(T))$.
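The resulting decision step can be sketched as follows (a simplified illustration: the dictionaries \texttt{cnt} and \texttt{size} stand for the maintained values $\textrm{cnt}_t(P_t(u))$ and $|P_t(u)|$, and \texttt{ancestors} lists the ancestors of $v_t$ from the root down to $v_t$ itself):

```python
def find_fetch_changeset(ancestors, cnt, size, alpha):
    """Scan the ancestors of the requested node v_t from the tree root down
    to v_t and return the first node u whose tree cap P_t(u) is saturated,
    i.e. cnt_t(P_t(u)) >= |P_t(u)| * alpha.  Return None if no saturated
    tree cap exists, in which case nothing is fetched."""
    for u in ancestors:                   # root first, v_t last
        if cnt[u] >= size[u] * alpha:     # O(1) saturation check
            return u                      # maximal: all supersets already failed
    return None
```

Each check takes constant time, so the whole scan costs $O(h(T))$, matching the bound stated above.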
\subsection{Negative Requests and Evictions}
\label{sec:implementing_negative_counters}
Handling evictions is more complex. If the request to
node $v_t$ at round $t$ was negative,
\lref[Lemma]{lem:no_over-requested_changesets} tells us only that the negative
changeset evicted by \textsc{TC}\xspace has to be a tree cap rooted at $u$, where $u$ is the
root of the cached tree containing $v_t$. There are exponentially many such
tree caps, and hence their naïve verification is intractable. To alleviate
this problem, we introduce the following helper notion. For any set of cached
nodes~$A$ and any time $t$, let
\[
\textrm{val}_t(A) = \textrm{cnt}_t(A) - |A| \cdot \alpha + \frac{|A|}{|T|+1}.
\]
Note that for any non-empty set $A$, $\textrm{val}_t(A) \neq 0$ as the first two terms
are integers and $|A|/(|T|+1) \in (0,1)$. Furthermore, $\textrm{val}_t$ is additive:
for two disjoint sets $A$ and $B$, $\textrm{val}_t(A \sqcup B) =
\textrm{val}_t(A) + \textrm{val}_t(B)$. For any time~$t$ and a cached node $u$, we define
\begin{align*}
H_t(u) = \arg \max_D \{ \textrm{val}_t(D) : &\; \textnormal{$D$ is a non-empty tree cap} \\
& \quad \textnormal{rooted at $u$} \}.
\end{align*}
Our scheme maintains the value $H_t(u)$ for any cached node $u$. To this end,
we observe that $H_t(u)$ can be defined recursively as follows. Let
$H'_t(u) = H_t(u)$ if $\textrm{val}_t(H_t(u)) > 0$ and $H'_t(u) = \emptyset$ otherwise.
Then, for any node $u$ and time $t$, by the additivity of $\textrm{val}_t$,
\begin{equation*}
\label{eq:h_t_recurrence}
H_t(u) = \{ u \} \; \sqcup \bigsqcup_\textnormal{$w$ is a child of $u$} H'_t(w).
\end{equation*}
Each cached node $u$ keeps the value $\textrm{val}_t(H_t(u))$. Note that set $H_t(u)$
itself can be recovered from this information: we iterate over all children of
$u$ (at most $\deg(T)$ of them) and for each child $w$, if $\textrm{val}_t(H_t(w)) >
0$, we recursively compute set $H_t(w)$. Thus, the total time for constructing
$H_t(u)$ is $O(\deg(T) \cdot |H_t(u)|)$.
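The recursion for $H_t$ can be sketched as follows (our illustration, not the paper's code; \texttt{best\_tree\_cap\_value} returns $\textrm{val}_t(H_t(u))$, \texttt{children} maps each cached node to its cached children, \texttt{cnt[u]} stands for the counter of $u$, and \texttt{n} plays the role of $|T|$):

```python
def best_tree_cap_value(u, children, cnt, alpha, n):
    """Return val_t(H_t(u)) for a cached node u via the recursion
    H_t(u) = {u} joined with H'_t(w) for every child w, where H'_t(w) is
    included exactly when its value is positive.  The tie-breaking term
    1/(n+1) per node makes the value of any non-empty set nonzero."""
    value = cnt[u] - alpha + 1.0 / (n + 1)     # val_t({u})
    for w in children.get(u, ()):
        child_value = best_tree_cap_value(w, children, cnt, alpha, n)
        if child_value > 0:                    # take H'_t(w) = H_t(w) iff positive
            value += child_value
    return value
```

A positive return value signals that the corresponding tree cap is saturated and may be evicted (for a saturated cap $H^u$ of size $s$ the value equals $s/(n+1)$); a negative one means no tree cap rooted at $u$ is saturated.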
During an execution of \textsc{TC}\xspace, we update stored values accordingly.
That is, whenever a~counter at a cached node $v_t$ is incremented, we update
$\textrm{val}_t(H_t(u))$ values for each cached ancestor $u$ of $v_t$, starting from
\mbox{$u = v_t$} and proceeding towards the cached tree root. Any such update can be
performed in constant time, and the total time is thus $O(h(T))$. For a~cache
change, we process nodes from the changeset iteratively, starting with nodes
closest to the root in case of an~eviction and furthest from the root in case
of a fetch. For any such node $u$, we appropriately stop or start maintaining
the corresponding value of $\textrm{val}_t(H_t(u))$. The latter requires looking up the
stored values at all its children. As $u$ does not have cached
ancestors, sets $H_t$ (and hence also the stored values) at other nodes
remain unchanged. In total, the
cost of updating all $H_t$ values at time $t$ is at most $O(h(T) + \deg(T)
\cdot |X_t|)$.
Finally, we show how to use sets $H_t$ to quickly choose a~valid changeset for
eviction. Recall that for a negative request $v_t$, the changeset to be
evicted has to be a tree cap rooted at $u$, where $u$ is the root of a cached subtree
containing $v_t$. For succinctness, we use $H^u$ to denote $H_t(u)$. We show
that if $\textrm{val}_t(H^u) < 0$, then there is no valid negative changeset that is
saturated, and hence \textsc{TC}\xspace does not perform any action, and if $\textrm{val}_t(H^u) >
0$, then $H^u$ is both saturated and maximal, and hence \textsc{TC}\xspace may evict~$H^u$.
\begin{enumerate}
\item First, assume that $\textrm{val}_t(H^u) < 0$. Then, for any tree cap~$X$ rooted
at~$u$, it holds that $\textrm{cnt}_t(X) - |X| \cdot \alpha < \textrm{val}_t(X) \leq
\textrm{val}_t(H^u) < 0$, i.e., $X$ is not saturated, and hence cannot be evicted by
\textsc{TC}\xspace.
\item Second, assume that $\textrm{val}_t(H^u) > 0$. As $\textrm{cnt}_t(H^u) - |H^u| \cdot
\alpha$ is an integer and $|H^u|/(|T|+1) < 1$, it holds that $\textrm{cnt}_t(H^u) -
|H^u| \cdot \alpha \geq 0$, i.e., $H^u$ is saturated. Moreover, by
\lref[Lemma]{lem:no_over-requested_changesets}, $\textrm{cnt}_t(H^u) \leq |H^u| \cdot
\alpha$, and therefore $\textrm{cnt}_t(H^u) - |H^u| \cdot \alpha = 0$, i.e.,
$\textrm{val}_t(H^u) = |H^u| / (|T|+1)$. It remains to show that $H^u$ is maximal,
i.e., there is no valid saturated changeset $Y \supsetneq H^u$. By
\lref[Lemma]{lem:no_over-requested_changesets}, $Y$ has to be a tree cap
rooted at $u$ as well. If $Y$ was saturated, $\textrm{val}_t(Y) = \textrm{cnt}_t(Y) - |Y|
\cdot \alpha + |Y| / (|T|+1) \geq |Y| / (|T|+1) > |H^u|/(|T|+1) = \textrm{val}_t(H^u)$,
which would contradict the definition of~$H^u$.
\end{enumerate}
Note that node $u$ can be found in time $O(h(T))$, and the
actual set~$H^u$ (of size $|X_t|$) can be computed
in time $O(\deg(T) \cdot |X_t|)$. Therefore, the total time
for finding the set $X_t$ is $O(h(T) + \deg(T) \cdot |X_t|)$.
\section{Conclusions}\label{sec:conclusion}
This paper defines a novel variant of online paging which finds
applications in the context of IP routing networks where forwarding rules can
be cached. We presented a deterministic online algorithm that achieves a
provably competitive trade-off between the benefit of caching and update costs.
It is worth noting that, in the offline setting, choosing the best static cache
in the presence of only positive requests is known as a~\emph{tree sparsity}
problem and can be solved in $O(|T|^2)$ time~\cite{tree-sparsity}.
We believe that our work opens interesting directions for future research.
Most importantly, it will be interesting to study the optimality of the
derived result; we conjecture that the true competitive ratio does not
depend on the tree height. In particular, primal-dual approaches that were
successfully applied for other caching
problems~\cite{young-paging-greedy-dual,generalized-caching-optimal,generalized-caching-bansal} may turn out to be useful also for the considered variant.
\section*{Acknowledgements}
The authors would like to thank Fred Baker from
Cisco, Moti Medina from the Max-Planck-Institute and Paweł
Gawrychowski from University of Wrocław for useful inputs.
\bibliographystyle{ACM-Reference-Format}
In part 1, you will be asked personal questions about things such as what you do or where you live. It will be a conversation between you and the examiner. This session will last for 4 to 5 minutes.
Do you prefer to go out or stay in in the evening?
Do you prefer to hang out with your family or friends in the evening?
What do young people in your country usually do in the evening?
Where is the best place in your city to spend the evening?
<?php
/*
* Third party plugins that hijack the theme will call wp_footer() to get the footer template.
* We use this to end our output buffer (started in header.php) and render it into the pages/page-plugin.twig template.
*/
use Timber\Timber;
$timberContext = $GLOBALS['timberContext'] ?? null;
if (!isset($timberContext)) {
throw new \Exception('Timber context not set in footer.');
}
$timberContext['content'] = ob_get_contents();
ob_end_clean();
$templates = array('pages/page-plugin.twig');
Timber::render($templates, $timberContext);
Kostelec is the name of several places in the Czech Republic:
Kostelec (okres Tachov), a municipality in the Tachov district
Kostelec (okres Jičín), a municipality in the Jičín district
Kostelec (okres Hodonín), a municipality in the Hodonín district
Kostelec (okres Jihlava), a municipality in the Jihlava district
Kostelec nad Černými lesy, a town in the Praha-východ district
Kostelec nad Labem, a town in the Mělník district
Kostelec na Hané, a town in the Prostějov district
Kostelec nad Orlicí, a town in the Rychnov nad Kněžnou district
Kostelec u Heřmanova Městce, a municipality in the Chrudim district
Kostelec u Holešova, a municipality in the Kroměříž district
Kostelec nad Vltavou, a municipality in the Písek district
Kostelec u Křížků, a municipality in the Praha-východ district
Vrbatův Kostelec, a municipality in the Chrudim district
\section{Algorithm Overview}
\label{sec:overview}
\vspace*{-0.1in}
In this section, we formally define the task of frame prediction and the role of each component in the proposed architecture.
Let ${\mathbf{x}}_t\in\mathbb{R}^{w\times h\times c}$ denote the $t$-th frame in an input video ${\mathbf{x}}$, where $w, h$, and $c$ denote width, height, and number of channels, respectively.
The objective of frame prediction is to generate the future frame $\hat{{\mathbf{x}}}_{t+1}$ given the input frames ${\mathbf{x}}_{1:t}$.
At the $t$-th time step, our network observes a history of previous consecutive frames up to frame $t$, and generates the prediction of the next frame $\hat{{\mathbf{x}}}_{t+1}$ as follows:
\begin{itemize}
\item
\textbf{Motion Encoder} recurrently takes an image difference input between frame ${\mathbf{x}}_t$ and ${\mathbf{x}}_{t-1}$ starting from $t=2$, and produces the hidden representation ${\mathbf{d}}_t$ encoding the temporal dynamics of the scene components (Section~\ref{sec:dynamic_network}).
\item
\textbf{Content Encoder} takes the last observed frame ${\mathbf{x}}_t$ as an input, and outputs the hidden representation ${\mathbf{s}}_t$ that encodes the spatial layout of the scene (Section~\ref{sec:contents_network}).
\item
\textbf{Multi-Scale Motion-Content Residual} takes the features computed by both the motion and content encoders at every scale right before pooling, and computes residuals ${\mathbf{r}}_t$ \citep{resnets} to compensate for the information loss caused by pooling in the encoding phase (Section~\ref{sec:mcres_network}).
\item
\textbf{Combination Layers and Decoder} takes the outputs from both encoder pathways and residual connections, ${\mathbf{d}}_t$, ${\mathbf{s}}_t$, and ${\mathbf{r}}_t$, and combines them to produce a pixel-level prediction of the next frame $\hat{{\mathbf{x}}}_{t+1}$ (Section~\ref{sec:decoder_network}).
\end{itemize}
The overall architecture of the proposed algorithm is described in Figure~\ref{fig:arch}.
The prediction of multiple frames, $\hat{{\mathbf{x}}}_{t+1:t+T}$, can be achieved by recursively performing the above procedures over $T$ time steps (Section~\ref{sec:train_infer}).
Each component in the proposed architecture is described in the following section.
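The per-step data flow described above can be sketched in a few lines of Python; the encoder and decoder functions below are hypothetical numpy stubs standing in for the convolutional networks of Section~\ref{sec:architecture}, not the actual model.

```python
import numpy as np

def motion_encoder(diff, d_prev):
    # placeholder: blend the new difference image into the running motion state
    return 0.5 * d_prev + 0.5 * diff

def content_encoder(frame):
    # placeholder: identity "features" of the last observed frame
    return frame

def decoder(d, s):
    # placeholder: shift the content by the encoded motion
    return s + d

def predict_next(frames):
    """Given frames x_1..x_t (list of arrays), predict x_{t+1}."""
    d = np.zeros_like(frames[0])
    for t in range(1, len(frames)):
        d = motion_encoder(frames[t] - frames[t - 1], d)  # temporal dynamics
    s = content_encoder(frames[-1])                        # spatial layout
    return decoder(d, s)                                   # pixel-level prediction
```

Even in this toy form, the essential structure survives: the motion state is accumulated recurrently from difference images, while the content is read from the last observed frame only.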
\begin{figure}[!t]
\hspace*{-.1cm}
\centering
\begin{subfigure}{0.48\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/high-level_architecture_noresidual.pdf}
\caption{Base MCnet}
\end{subfigure}
\hspace*{.4cm}
\begin{subfigure}{0.48\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/high-level_architecture_residual.pdf}
\caption{MCnet with Multi-scale Motion-Content Residuals}
\end{subfigure}
\vspace{-5pt}
\caption{Overall architecture of the proposed network. (a) illustrates MCnet without the Motion-Content Residual \textit{skip connections}, and (b) illustrates MCnet with such connections. Our network observes a history of image differences through the motion encoder and last observed image through the content encoder. Subsequently, our network proceeds to compute motion-content features and communicates them to the decoder for the prediction of the next frame.}
\label{fig:arch}
\vspace{-15pt}
\end{figure}
\vspace*{-0.13in}
\section{Architecture}
\label{sec:architecture}
\vspace*{-0.1in}
This section describes the detailed configuration of the proposed architecture, including the two encoder pathways, multi-scale residual connections, combination layers, and decoder.
\subsection{Motion Encoder}
\label{sec:dynamic_network}
The motion encoder captures the temporal dynamics of the scene's components by recurrently observing subsequent difference images computed from ${\mathbf{x}}_{t-1}$ and ${\mathbf{x}}_{t}$, and outputs motion features by
\begin{equation}
\left[{\mathbf{d}}_t,\mathbf{c}_{t}\right]=f^{\text{dyn}}\left({\mathbf{x}}_t-{\mathbf{x}}_{t-1}, {\mathbf{d}}_{t-1}, \mathbf{c}_{t-1}\right),
\label{eq:motion_encoder}
\end{equation}
where ${\mathbf{x}}_t-{\mathbf{x}}_{t-1}$ denotes element-wise subtraction between frames at time $t$ and $t-1$, ${\mathbf{d}}_t\in\mathbb{R}^{{w'}\times{h'}\times{c'}}$ is the feature tensor encoding the motion across the observed difference image inputs, and $\mathbf{c}_t\in\mathbb{R}^{{w'}\times{h'}\times{c'}}$ is a memory cell that retains information of the dynamics observed through time.
$f^{\text{dyn}}$ is implemented in a fully-convolutional way to allow our model to identify local dynamics of frames rather than complicated global motion. For this, we use an encoder CNN with a Convolutional LSTM \citep{convlstm} layer on top.
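As a rough illustration of the recurrence in Eq.~\ref{eq:motion_encoder}, the sketch below applies an LSTM-style update elementwise per pixel, i.e., a degenerate Convolutional LSTM whose convolutions are reduced to scalar weights $w$ and $u$; the actual $f^{\text{dyn}}$ is an encoder CNN with a full Convolutional LSTM layer on top, and each gate has its own learned filters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(diff, d_prev, c_prev, w=1.0, u=1.0):
    """One motion-encoder step on a difference image (Eq. 1), with the
    convolutions reduced to scalar weights w (input) and u (hidden).
    The gates share the same weights here only for brevity."""
    i = sigmoid(w * diff + u * d_prev)        # input gate
    f = sigmoid(w * diff + u * d_prev)        # forget gate
    o = sigmoid(w * diff + u * d_prev)        # output gate
    g = np.tanh(w * diff + u * d_prev)        # candidate cell update
    c = f * c_prev + i * g                    # memory cell c_t
    d = o * np.tanh(c)                        # motion features d_t
    return d, c
```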
\subsection{Content Encoder}
\label{sec:contents_network}
The content encoder extracts important spatial features from a single frame, such as the spatial layout of the scene and salient objects in a video.
Specifically, it takes the last observed frame ${\mathbf{x}}_t$ as an input, and produces content features by
\begin{equation}
{\mathbf{s}}_t=f^{\text{cont}}\left({\mathbf{x}}_{t}\right),
\label{eq:content_encoder}
\end{equation}
where ${\mathbf{s}}_t\in\mathbb{R}^{{w'}\times{h'}\times{c'}}$ is the feature encoding the spatial content of the last observed frame, and $f^{\text{cont}}$ is implemented by a Convolutional Neural Network (CNN) that specializes in extracting features from a single frame.
It is important to note that our model employs an \textit{asymmetric} architecture for the motion and content encoder.
The content encoder takes the last observed frame, which carries the most critical clues for reconstructing the spatial layout of the near future, but contains no information about dynamics.
On the other hand, the motion encoder takes a history of previous image differences, which are less informative about the future spatial layout compared to the last observed frame, yet contain important spatio-temporal variations occurring over time.
This asymmetric architecture encourages encoders to exploit each of two pieces of critical information to predict the future content and motion individually, and enables the model to learn motion and content decomposition naturally without any supervision.
\subsection{Multi-scale Motion-Content Residual}
\label{sec:mcres_network}
To prevent information loss after the pooling operations in our motion and content encoders, we use residual connections \citep{resnets}.
The residual connections in our network communicate motion-content features at every scale into the decoder layers after unpooling operations. The residual feature at layer $l$ is computed by
\begin{equation}
{\mathbf{r}}_t^l=f^{\text{res}}\left(\left[{\mathbf{s}}_t^l,{\mathbf{d}}_t^l\right]\right)^l,
\label{eq:res_connect}
\end{equation}
where ${\mathbf{r}}_t^l$ is the residual output at layer $l$, $\left[{\mathbf{s}}_t^l,{\mathbf{d}}_t^l\right]$ is the concatenation of the motion and content features along the depth dimension at layer $l$ of their respective encoders, and $f^{\text{res}}\left(\cdot\right)^l$ is the residual function at layer $l$, implemented as consecutive convolution layers and rectification with a final linear layer.
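At the level of tensor shapes, Eq.~\ref{eq:res_connect} amounts to a depth concatenation followed by a learned map back to the layer's channel count; the sketch below uses a hypothetical linear map in place of the paper's convolution-rectification stack:

```python
import numpy as np

w, h, c = 8, 8, 4
s_l = np.random.rand(w, h, c)              # content features at layer l
d_l = np.random.rand(w, h, c)              # motion features at layer l

sd = np.concatenate([s_l, d_l], axis=-1)   # depth concatenation: (w, h, 2c)
W = np.random.rand(2 * c, c)               # stand-in for f_res at layer l
r_l = sd @ W                               # residual sent to the decoder
assert r_l.shape == (w, h, c)
```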
\subsection{Combination Layers and Decoder}
\label{sec:decoder_network}
The outputs from the two encoder pathways, ${\mathbf{d}}_t$ and ${\mathbf{s}}_t$, encode a high-level representation of motion and content, respectively.
Given these representations, the objective of the decoder is to generate a pixel-level prediction of the next frame $\hat{{\mathbf{x}}}_{t+1}\in\mathbb{R}^{{w}\times{h}\times{c}}$.
To this end, it first combines the motion and content back into a unified representation by
\begin{equation}
{\mathbf{f}}_t=g^{\text{comb}}\left(\left[{\mathbf{d}}_t,{\mathbf{s}}_t\right]\right),
\label{eq:comb_decoder}
\end{equation}
where $\left[{\mathbf{d}}_t,{\mathbf{s}}_t\right]\in\mathbb{R}^{w'\times h'\times 2c'}$ denotes the concatenation of the higher-level motion and content features in the depth dimension, and ${\mathbf{f}}_t\in\mathbb{R}^{w'\times h'\times c'}$ denotes the combined high-level representation of motion and content.
$g^{\text{comb}}$ is implemented by a CNN with bottleneck layers~\citep{bottleneck}; it first projects both ${\mathbf{d}}_t$ and ${\mathbf{s}}_t$ into a lower-dimensional embedding space, and then maps the result back to the original size to construct the combined feature ${\mathbf{f}}_t$.
Intuitively, ${\mathbf{f}}_t$ can be viewed as the content feature of the next time step, ${\mathbf{s}}_{t+1}$, which is generated by transforming ${\mathbf{s}}_t$ using the observed dynamics encoded in ${\mathbf{d}}_t$.
Then our decoder places ${\mathbf{f}}_t$ back into the original pixel space by
\begin{equation}
{\hat{{\mathbf{x}}}}_{t+1}=g^{\text{dec}}\left({\mathbf{f}}_t,{\mathbf{r}}_t\right),
\label{eq:rec_decoder}
\end{equation}
where ${\mathbf{r}}_t$ is a list containing the residual connections from every layer of the motion and content encoders before pooling sent to every layer of the decoder after unpooling.
We employ the deconvolution network~\citep{Zeiler11} for our decoder network $g^{\text{dec}}$,
which is composed of multiple successive operations of deconvolution, rectification and unpooling with the addition of the motion-content residual connections after each unpooling operation. The output layer is passed through a $\tanh\left(.\right)$ activation function.
Unpooling with fixed switches is used to upsample the intermediate activation maps.
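Unpooling with fixed switches admits a very small sketch: each pooled activation is written to a fixed position of its upsampled block (here the top-left corner), with zeros elsewhere.

```python
import numpy as np

def unpool_fixed(x, k=2):
    """Upsample an (H, W) map by factor k, writing each value at a fixed
    switch position (top-left of each k-by-k block) and zeros elsewhere."""
    H, W = x.shape
    out = np.zeros((H * k, W * k), dtype=x.dtype)
    out[::k, ::k] = x
    return out
```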
\vspace*{-0.13in}
\section{Inference and Training}
\label{sec:train_infer}
\vspace*{-0.1in}
Section~\ref{sec:architecture} describes the procedures for single frame prediction, while this section presents the extension of our algorithm for the prediction of multiple time steps.
\subsection{Multi-step prediction}
Given an input video, our network observes the first $n$ frames as image differences between frames ${\mathbf{x}}_t$ and ${\mathbf{x}}_{t-1}$, for $t=2$ up to $t=n$, to encode the initial temporal dynamics through the motion encoder. The last observed frame ${\mathbf{x}}_n$ is given to the content encoder and transformed into the first prediction $\hat{{\mathbf{x}}}_{n+1}$ by the identified motion features.
For each subsequent time step $t\in\left[n+1, n+T \right]$, where $T$ is the desired number of prediction steps, our network takes the difference image between the previous prediction $\hat{{\mathbf{x}}}_{t}$ and the frame before it, together with $\hat{{\mathbf{x}}}_{t}$ itself, to predict the next frame $\hat{{\mathbf{x}}}_{t+1}$, and so forth.
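The recursion can be sketched as follows, with `single_step` a hypothetical stand-in for one pass through the full network (here simple constant-velocity extrapolation):

```python
import numpy as np

def single_step(frame, diff):
    # hypothetical single-step predictor: constant-velocity extrapolation
    return frame + diff

def predict_multi(frames, T):
    """Observe frames x_1..x_n, then recursively predict T future frames,
    feeding each prediction back as the next input."""
    seq = list(frames)
    for _ in range(T):
        diff = seq[-1] - seq[-2]          # difference image for the motion encoder
        seq.append(single_step(seq[-1], diff))
    return seq[len(frames):]
```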
\subsection{Training Objective}
To train our network, we use an objective function composed of different sub-losses similar to \citet{Mathieu15}. Given the training data $D=\{{\mathbf{x}}^{(i)}_{1,...,T}\}_{i=1}^{N}$, our model is trained to minimize the prediction loss by
\begin{equation}
\mathcal{L} = \alpha\mathcal{L}_{\text{img}}+\beta\mathcal{L}_{\text{GAN}},
\label{eq:full_loss}
\end{equation}
where $\alpha$ and $\beta$ are hyper-parameters that control the effect of each sub-loss during optimization. $\mathcal{L}_{\text{img}}$ is the loss in image space from \citet{Mathieu15} defined by
\begin{equation}
\mathcal{L}_{\text{img}} = \mathcal{L}_{p}\left({\mathbf{x}}_{t+k},\hat{{\mathbf{x}}}_{t+k}\right)+\mathcal{L}_{gdl}\left({\mathbf{x}}_{t+k},\hat{{\mathbf{x}}}_{t+k}\right),
\label{eq:loss_img}
\end{equation}
\begin{align}
\text{where \quad} \mathcal{L}_{p}\left({\mathbf{y}},{\mathbf{z}}\right)= & \sum_{k=1}^{T}||{\mathbf{y}}-{\mathbf{z}}||_{p}^{p}, \label{eq:lp}\\
\mathcal{L}_{gdl}\left({\mathbf{y}},{\mathbf{z}}\right)= & \sum_{i,j}^{h,w}\left| \, (|{\mathbf{y}}_{i,j}-{\mathbf{y}}_{i-1,j}|-|{\mathbf{z}}_{i,j}-{\mathbf{z}}_{i-1,j}|) \, \right|^{\lambda} \label{eq:lgdl}\\
& +\left| \, (|{\mathbf{y}}_{i,j-1}-{\mathbf{y}}_{i,j}|-|{\mathbf{z}}_{i,j-1}-{\mathbf{z}}_{i,j}|) \, \right|^{\lambda}. \nonumber
\end{align}
Here, ${\mathbf{x}}_{t+k}$ and $\hat{{\mathbf{x}}}_{t+k}$ are the target and predicted frames, respectively, and $p$ and $\lambda$ are hyper-parameters for $\mathcal{L}_p$ and $\mathcal{L}_{gdl}$, respectively.
Intuitively, $\mathcal{L}_{p}$ guides our network to match the average pixel values directly, while $\mathcal{L}_{gdl}$ guides our network to match the gradients of such pixel values.
Overall, $\mathcal{L}_{\text{img}}$ guides our network to learn parameters towards generating the correct average sequence given the input.
Training to generate average sequences, however, results in somewhat blurry generations, which is why we use an additional sub-loss.
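For concreteness, the image-space sub-losses of Eqs.~\ref{eq:lp} and~\ref{eq:lgdl} can be written for a single predicted frame as follows (a minimal numpy sketch with $p=2$, $\lambda=1$, omitting the sum over prediction steps):

```python
import numpy as np

def loss_p(y, z, p=2):
    """L_p loss: sum of |y - z|^p over all pixels."""
    return np.sum(np.abs(y - z) ** p)

def loss_gdl(y, z, lam=1):
    """Gradient difference loss: penalizes mismatched image gradients."""
    dy_h = np.abs(np.diff(y, axis=0))  # vertical gradients |y_{i,j} - y_{i-1,j}|
    dz_h = np.abs(np.diff(z, axis=0))
    dy_w = np.abs(np.diff(y, axis=1))  # horizontal gradients
    dz_w = np.abs(np.diff(z, axis=1))
    return np.sum(np.abs(dy_h - dz_h) ** lam) + np.sum(np.abs(dy_w - dz_w) ** lam)
```

Note the complementarity: a prediction offset from the target by a constant has a nonzero $\mathcal{L}_p$ but a zero $\mathcal{L}_{gdl}$, since its gradients match the target exactly.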
$\mathcal{L}_{\text{GAN}}$ is the generator loss in adversarial training to allow our model to predict realistic looking frames and it is defined by
\begin{equation}
\mathcal{L}_{\text{GAN}} = -\log D\left(\left[{\mathbf{x}}_{1:t},G\left({\mathbf{x}}_{1:t}\right)\right]\right) ,
\label{eq:loss_adv}
\end{equation}
where ${\mathbf{x}}_{1:t}$ is the concatenation of the input images, ${\mathbf{x}}_{t+1:t+T}$ is the concatenation of the ground-truth future images, $G\left({\mathbf{x}}_{1:t}\right)=\hat{{\mathbf{x}}}_{t+1:t+T}$ is the concatenation of all predicted images along the depth dimension, and $D\left(.\right)$ is the discriminator in adversarial training.
The discriminative loss in adversarial training is defined by
\begin{equation}\label{eq:loss_disc}
\begin{aligned}
\mathcal{L}_{\text{disc}} &= -\log D\left(\left[{\mathbf{x}}_{1:t},{\mathbf{x}}_{t+1:t+T}\right]\right) -\log\left(1-D\left(\left[{\mathbf{x}}_{1:t},G\left({\mathbf{x}}_{1:t}\right)\right]\right)\right).
\end{aligned}
\end{equation}
$\mathcal{L}_{\text{GAN}}$, in addition to $\mathcal{L}_{\text{img}}$, allows our network not only to generate the target sequence, but also to enforce realism through visual sharpness that fools the human eye.
Note that our model uses its own predictions as input for the next time step during training, which enables the gradients to flow through time and makes the network robust to error propagation during prediction.
For a more detailed description of adversarial training, please refer to Appendix \ref{sec:GANs}.
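The adversarial sub-losses of Eqs.~\ref{eq:loss_adv} and~\ref{eq:loss_disc} reduce to simple cross-entropy terms once the discriminator output is abstracted as a scalar score; the sketch below assumes such a scalar $D\in(0,1)$ rather than an actual discriminator network:

```python
import numpy as np

def gan_loss(d_fake):
    """Generator loss: -log D([x_1:t, G(x_1:t)])."""
    return -np.log(d_fake)

def disc_loss(d_real, d_fake):
    """Discriminator loss: -log D(real) - log(1 - D(fake))."""
    return -np.log(d_real) - np.log(1.0 - d_fake)
```

A perfect discriminator (score 1 on real, 0 on fake) drives `disc_loss` to zero, while the generator lowers `gan_loss` by pushing the discriminator's score on its predictions toward 1.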
\section{Conclusion}
\label{sec:conclusion}
\vspace*{-0.1in}
We proposed a motion-content network for pixel-level prediction of future frames in natural video sequences.
The proposed model employs two separate encoding pathways, and learns to decompose motion and content without explicit constraints or separate training.
Experimental results suggest that separate modeling of motion and content improves the quality of the pixel-level future prediction,
and our model overall achieves state-of-the-art performance in predicting future frames in challenging real-world video datasets.
\vspace*{-0.13in}
\section{Acknowledgements}
\vspace*{-0.1in}
This work was supported in part by ONR N00014-13-1-0762, NSF CAREER IIS-1453651, gifts from the Bosch Research and Technology Center, and a Sloan Research Fellowship.
We also thank NVIDIA for donating K40c and TITAN X GPUs.
We thank Ye Liu, Junhyuk Oh, Xinchen Yan, Lajanugen Logeswaran, Yuting Zhang, Sungryull Sohn, Kibok Lee, Rui Zhang, and other collaborators for helpful discussions.
R. Villegas was partly supported by the Rackham Merit Fellowship.
\section{Experiments}
\label{sec:experiments}
\vspace*{-0.1in}
In this section, we present experiments using our network for video generation. We first evaluate our network, MCnet, on the KTH~\citep{Kth} and Weizmann action~\citep{ActionsAsSpaceTimeShapes_pami07} datasets, and compare against a baseline convolutional LSTM (ConvLSTM)~\citep{convlstm}.
We then proceed to evaluate on the more challenging UCF-101~\citep{Ucf} dataset, in which we compare against the same ConvLSTM baseline and also the current state-of-the-art method by~\citet{Mathieu15}.
For all our experiments, we use $\alpha=1$, $\lambda=1$, and $p=2$ in the loss functions.
In addition to the results in this section, we also provide more qualitative comparisons in the supplementary material and in the videos on the project website: \url{https://sites.google.com/a/umich.edu/rubenevillegas/iclr2017}.
\paragraph{Architectures.} The content encoder of MCnet is built with the same architecture as VGG16~\citep{Vgg16} up to the third pooling layer.
The motion encoder of MCnet is also similar to VGG16 up to the third pooling layer, except that we replace its consecutive $3\times3$ convolutions with single $5\times5$, $5\times5$, and $7\times7$ convolutions in each layer.
The combination layers are composed of $3$ consecutive $3\times3$ convolutions (256, 128, and 256 channels, respectively).
The multi-scale residuals are composed of $2$ consecutive $3\times3$ convolutions.
The decoder is the mirrored architecture of the content encoder where we perform unpooling followed by deconvolution.
For the baseline ConvLSTM, we use the same architecture as the motion encoder, residual connections, and decoder, except we increase the number of channels in the encoder in order to have an overall comparable number of parameters with MCnet.
\subsection{KTH and Weizmann action datasets}
\label{sec:kth}
\begin{figure*}[h!]
\vspace{-.7cm}
\centering
\includegraphics[width=0.49\linewidth] {figs/kth_psnr_compare.eps} \hspace{.1cm}
\includegraphics[width=0.49\linewidth] {figs/new_psnr_compare.eps} \hspace{0.1cm} \\
\includegraphics[width=0.49\linewidth] {figs/kth_ssim_compare.eps} \hspace{.1cm}
\includegraphics[width=0.49\linewidth] {figs/new_ssim_compare.eps} \hspace{0.1cm}
\vspace{-.2cm}
\caption{
Quantitative comparison between MCnet and the ConvLSTM baseline, with and without multi-scale residual connections (indicated by ``+ RES''). Given 10 input frames, the models predict 20 frames recursively, one by one. Left column: evaluation on the KTH dataset~\citep{Kth}. Right column: evaluation on the Weizmann dataset~\citep{ActionsAsSpaceTimeShapes_pami07}.}
\label{fig:kth_quantitative}
\vspace{-.7cm}
\end{figure*}
\begin{figure}[h!]
\vspace{-.4cm}
\hspace*{-.7cm}
\centering
\begin{subfigure}{0.04\linewidth}
\raggedleft
\rotatebox{90}{
\hspace{-.4cm}
\parbox{2cm}{\centering G.T.} \hspace{-.3cm} \parbox{2cm}{\centering ConvLSTM} \hspace{-.3cm} \parbox{2cm}{\centering MCnet}
}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=12}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/ours_0011.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/bl_0011.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/gt_0011.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=15}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/ours_0014.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/bl_0014.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/gt_0014.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=18}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/ours_0017.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/bl_0017.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/gt_0017.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{20pt}
\caption*{t=21}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/ours_0020.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/bl_0020.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/gt_0020.jpg}
\caption*{Jogging}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=24}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/ours_0023.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/bl_0023.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/gt_0023.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=27}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/ours_0026.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/bl_0026.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/gt_0026.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=30}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/ours_0029.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/bl_0029.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/jogging/gt_0029.jpg}
\end{subfigure}
\vspace{.1cm}
\hspace*{-.7cm}
\centering
\\
\vspace{-.4cm}
\begin{subfigure}{0.04\linewidth}
\raggedleft
\rotatebox{90}{
\hspace{.2cm}
\parbox{2cm}{\centering G.T.} \hspace{-.3cm} \parbox{2cm}{\centering ConvLSTM} \hspace{-.3cm} \parbox{2cm}{\centering MCnet}
}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/ours_0011.jpg}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/bl_0011.jpg}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/gt_0011.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/ours_0014.jpg}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/bl_0014.jpg}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/gt_0014.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/ours_0017.jpg}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/bl_0017.jpg}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/gt_0017.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{9pt}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/ours_0020.jpg}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/bl_0020.jpg}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/gt_0020.jpg}
\caption*{Walking}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/ours_0023.jpg}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/bl_0023.jpg}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/gt_0023.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/ours_0026.jpg}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/bl_0026.jpg}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/gt_0026.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/ours_0029.jpg}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/bl_0029.jpg}
\includegraphics[width=1\linewidth]{figs/ASTS/walk/gt_0029.jpg}
\end{subfigure}
\vspace{-5pt}
\caption{Qualitative comparison between our MCnet model and the ConvLSTM baseline. We display predictions starting from the $12^{\text{th}}$ frame, at every $3$ time steps. The first $3$ rows correspond to the KTH dataset (jogging action) and the last $3$ rows correspond to the Weizmann dataset (walking action).}
\label{fig:kth_qualitative}
\vspace{-.6cm}
\end{figure}
\paragraph{Experimental settings.}
The KTH human action dataset~\citep{Kth} contains 6 categories of periodic motions on a simple background: running, jogging, walking, boxing, hand-clapping, and hand-waving.
We use persons 1--16 for training and 17--25 for testing, and resize frames to $128\times128$ pixels.
We train our network and baseline by observing 10 frames and predicting 10 frames into the future on the KTH dataset.
We set $\beta=0.02$ for training.
We also select the walking, running, one-hand waving, and two-hands waving sequences from the Weizmann action dataset~\citep{ActionsAsSpaceTimeShapes_pami07} for testing the networks' generalizability.
For all the experiments, we test the networks on predicting 20 time steps into the future.
As for evaluation, we use the same SSIM and PSNR metrics as in~\citet{Mathieu15}.
The evaluation on KTH was performed on sub-clips within each video in the test set.
We sample sub-clips every 3 frames for running and jogging, and sample sub-clips every 20 frames (skipping the frames we have already predicted) for walking, boxing, hand-clapping, and hand-waving.
Sub-clips for running, jogging, and walking were manually trimmed to ensure humans are always present in the frames.
The evaluation on Weizmann was performed on all sub-clips in the selected sequences.
\paragraph{Results.}
Figure~\ref{fig:kth_quantitative} summarizes the quantitative comparisons among our MCnet, ConvLSTM baseline and their residual variations.
In the KTH test set, our network outperforms the ConvLSTM baseline by a small margin.
However, when we test the residual versions of MCnet and ConvLSTM on the Weizmann dataset~\citep{ActionsAsSpaceTimeShapes_pami07}, which contains similar motions, we can see that our network generalizes well to the unseen contents, showing clear improvements, especially in long-term prediction.
One reason for this result is that the test and training partitions of the KTH dataset have simple and similar image contents so that ConvLSTM can memorize the average background and human appearance to make reasonable predictions.
However, when tested on unseen data, ConvLSTM has to handle both scene dynamics and image contents internally in a single entangled representation, which makes generalization difficult.
In contrast, the reason our network outperforms the ConvLSTM baseline on unseen data is that our network focuses on identifying general motion features and applying them to a learned content representation.
Figure \ref{fig:kth_qualitative} presents qualitative results of multi-step prediction by our network and ConvLSTM.
As expected, the predictions by our full architecture preserve human shapes more accurately than the baseline.
It is worth noting that our network produces very sharp predictions over long-term time steps; this shows that MCnet is able to capture periodic motion cycles, which significantly reduces the uncertainty of future prediction.
More qualitative comparisons are shown in the supplementary material and the \href{https://goo.gl/nG8ve1}{\color{blue} project website}.
\subsection{UCF-101 dataset}
\label{sec:ucf}
\paragraph{Experimental settings.}
This section presents results on the challenging real-world videos in the UCF-101~\citep{Ucf} dataset.
Collected from YouTube, the dataset contains 101 realistic human actions taken in the wild and exhibits various challenges, such as background clutter, occlusion, and complicated motion.
We employed the same network architecture as for the KTH dataset, but resized frames to $240\times320$ pixels, and trained the network to observe 4 frames and predict a single frame.
We set $\beta=0.001$ for training.
We also trained our convolutional LSTM baseline in the same way.
Following the same protocol as \citet{Mathieu15} for data pre-processing and evaluation metrics on full images, all networks were trained on Sports-1M \citep{sports1m} dataset and tested on UCF-101 unless otherwise stated.\footnote{We use the code and model released by \citet{Mathieu15} at \url{https://github.com/coupriec/VideoPredictionICLR2016}}
\paragraph{Results.}
Figure \ref{fig:ucf101_quantitative} shows the quantitative comparisons between our network trained for single-step-prediction and \citet{Mathieu15}.
We can clearly see the advantage of our network over the baseline. The separation of motion and contents in two encoder pathways allows our network to identify key motion and content features, which are then fed into the decoder to yield predictions of higher quality compared to the baseline.\footnote{We were not able to get the model fine-tuned on UCF-101 from the authors so it is not included in Figure \ref{fig:ucf101_quantitative}}
In other words, our network only moves what shows motion in the past, and leaves the rest untouched.
We also trained a residual version of MCnet on UCF-101, indicated by ``MCnet + RES UCF101", to compare how well our model generalizes when trained and tested on the same or different dataset(s).
To our surprise, when tested with UCF-101, the MCnet trained on Sports-1M (MCnet + RES) roughly matches the performance of the MCnet trained on UCF-101 (MCnet + RES UCF101), which suggests that our model learns effective representations which can generalize to new datasets.
Figure \ref{fig:ucf101_qualitative} presents qualitative comparisons between frames generated by our network and \citet{Mathieu15}.
Since the ConvLSTM and \citet{Mathieu15} lack explicit motion and content modules, they lose sense of the dynamics in the video and therefore the contents become distorted quickly.
More qualitative comparisons are shown in the supplementary material and the \href{https://goo.gl/nG8ve1}{\color{blue} project website}.
\vspace{-.03cm}
\begin{figure*}[h!]
\centering
\includegraphics[width=0.49\linewidth] {figs/ucf101_psnr_compare.eps} \hspace{-0.1cm}
\includegraphics[width=0.49\linewidth] {figs/ucf101_ssim_compare.eps} \hspace{0.1cm}
\caption{
Quantitative comparison between our model, the convolutional LSTM~\citep{convlstm}, and \citet{Mathieu15}. Given 4 input frames, the models predict 8 frames recursively, one by one.}
\vspace{-.6cm}
\label{fig:ucf101_quantitative}
\end{figure*}
\newpage
\vspace{-1cm}
\begin{figure}[htb!]
\hspace*{-1.cm}
\centering
\begin{subfigure}{0.04\linewidth}
\raggedleft
\rotatebox{90}{
\parbox{0.1cm}{\rotatebox{-90}{t=11}} \hspace{2.1cm} \parbox{0.1cm}{\rotatebox{-90}{t=9}} \hspace{2.3cm} \parbox{0.1cm}{\rotatebox{-90}{t=7}} \hspace{2.1cm} \parbox{0.1cm}{\rotatebox{-90}{t=5}} \hspace{.6cm}
}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{G.T.}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/UCF101/2541/gt_0004_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/2541/gt_0006_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/2541/gt_0008_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/2541/gt_0010_grid.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{MCnet}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{{figs/UCF101/2541/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0004_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/2541/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0006_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/2541/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0008_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/2541/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0010_grid}.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{ConvLSTM}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{{figs/UCF101/2541/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0004_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/2541/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0006_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/2541/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0008_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/2541/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0010_grid}.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{\cite{Mathieu15}}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/UCF101/2541/pred_5_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/2541/pred_7_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/2541/pred_9_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/2541/pred_11_grid.jpg}
\end{subfigure}
\vspace{.1cm}
\hspace*{-1cm} \\
\centering
\hspace*{-.7cm}
\hspace*{-1.1cm}
\begin{subfigure}{0.04\linewidth}
\raggedleft
\rotatebox{90}{
\parbox{0.1cm}{\rotatebox{-90}{t=11}} \hspace{2.1cm} \parbox{0.1cm}{\rotatebox{-90}{t=9}} \hspace{2.3cm} \parbox{0.1cm}{\rotatebox{-90}{t=7}} \hspace{2.1cm} \parbox{0.1cm}{\rotatebox{-90}{t=5}} \hspace{.6cm}
}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/UCF101/391/gt_0004_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/391/gt_0006_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/391/gt_0008_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/391/gt_0010_grid.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{{figs/UCF101/391/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0004_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/391/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0006_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/391/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0008_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/391/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0010_grid}.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{{figs/UCF101/391/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0004_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/391/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0006_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/391/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0008_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/391/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0010_grid}.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/UCF101/391/pred_5_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/391/pred_7_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/391/pred_9_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/391/pred_11_grid.jpg}
\end{subfigure}
\hspace*{-1.1cm}
\vspace{-5pt}
\caption{Qualitative comparisons among MCnet, ConvLSTM, and \citet{Mathieu15}. We display predicted frames (every other frame) starting from the $5^{\text{th}}$ frame. The green arrows denote the top-30 closest optical flow vectors within image patches between MCnet and the ground truth. Clearer motion prediction can be seen on the \href{https://goo.gl/nG8ve1}{\color{blue} project website}.}
\label{fig:ucf101_qualitative}
\vspace{-0cm}
\end{figure}
\newpage
\section{Introduction}
\label{sec:intro}
\vspace*{-0.1in}
Understanding videos has been one of the most important tasks in the field of computer vision.
Compared to still images, the temporal component of videos provides much richer descriptions of the visual world, such as interaction between objects, human activities, and so on.
Among the various tasks applicable to videos, the task of anticipating the future has recently received increased attention in the research community.
Most prior works in this direction focus on predicting high-level semantics in a video such as action~\citep{Vondrick15,Ryoo11,Lan14}, event~\citep{Yuen10,Hoai13} and motion~\citep{Pintea14,Walker14,Pickup14,WalkerDGH16}.
Forecasting semantics provides information about \textit{what will happen} in a video, and is essential to automate decision making.
However, the predicted semantics are often specific to a particular task and provide only a partial description of the future.
Also, training such models often requires heavily labeled data, which incurs tremendous annotation costs, especially for videos.
In this work, we aim to address the problem of prediction of future frames in natural video sequences.
Pixel-level predictions provide dense and direct description of the visual world, and existing video recognition models can be adopted on top of the predicted frames to infer various semantics of the future.
Spatio-temporal correlations in videos provide a self-supervision for frame prediction, which enables purely unsupervised training of a model by observing raw video frames.
Unfortunately, estimating frames is an extremely challenging task, not only because of the inherent uncertainty of the future, but also because of the various factors of variation in videos that lead to complicated dynamics in raw pixel values.
There have been a number of recent attempts on frame prediction~\citep{Srivastava15,Mathieu15,Oh15,Goroshin15,Lotter15,Ranzato14}, which use a single encoder that needs to reason about all the different variations occurring in videos in order to make predictions of the future, or require extra information like foreground-background segmentation masks and static background~\citep{Vondrick16}.
We propose a Motion-Content Network (MCnet) for robust future frame prediction.
Our intuition is to split the inputs for video prediction into two easily identifiable groups, motion and content, and independently capture each information stream with separate encoder pathways.
In this architecture, the \textit{motion} pathway encodes the local dynamics of spatial regions, while the \textit{content} pathway encodes the spatial layout of the salient parts of an image.
The prediction of the future frame is then achieved by transforming the content of the last observed frame given the identified dynamics up to the last observation.
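As a minimal illustration of this input decomposition, the sketch below splits a clip into the two streams: the content pathway sees only the last observed frame, while the motion pathway sees the temporal differences between consecutive frames. This is a simplified sketch under those assumptions (the function name is ours, not part of MCnet), not the full encoder architecture:

```python
import numpy as np

def split_motion_content(frames):
    """Split a video clip into the two MCnet-style input streams.

    frames: array of shape (T, H, W, C) holding T observed frames.
    Returns:
      content_input: the last observed frame (spatial layout), shape (H, W, C)
      motion_input:  T-1 frame-difference images (local dynamics),
                     shape (T-1, H, W, C)
    """
    content_input = frames[-1]               # spatial layout at the last time step
    motion_input = frames[1:] - frames[:-1]  # temporal differences x_t - x_{t-1}
    return content_input, motion_input
```

The decoder would then transform `content_input` using features extracted from `motion_input` to produce the next frame, mirroring the transformation described above.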
\iffalse
Modeling motion and content with separate encoder pathways provides a number of benefits for future frame prediction.
First, decomposing the two sources of information significantly simplifies the prediction task.
Specifically, it allows our network to identify temporal and spatial features separately, which reduces the prediction problem to converting the content features of the last observed time step to the next using the motion features.
\fi
Somewhat surprisingly, we show that such a network is end-to-end trainable \emph{without individual pathway supervision}. Specifically, we show that an asymmetric architecture for the two pathways enables such a decomposition without explicit supervision.
The contributions of this paper are summarized below:
\begin{itemize}
\item We propose MCnet for the task of frame prediction, which separates the information streams (motion and content) into different encoder pathways.
\item The proposed network is end-to-end trainable and naturally learns to decompose motion and content without separate training, reducing the task of frame prediction to transforming the last observed frame into the next frame using the observed motion.
\item We evaluate the proposed model on challenging real-world video datasets, and show that it outperforms previous approaches on frame prediction.
\end{itemize}
The rest of the paper is organized as follows.
We briefly review related work in Section~\ref{sec:relatedwork}, and introduce an overview of the proposed algorithm in Section~\ref{sec:overview}.
The detailed configuration of the proposed network is described in Section~\ref{sec:architecture}.
Section~\ref{sec:train_infer} describes training and inference procedure.
Section~\ref{sec:experiments} illustrates implementation details and experimental results on challenging benchmarks.
\section{Related work}
\label{sec:relatedwork}
\vspace*{-0.1in}
\ifdefined\paratitle {\color{blue} [future prediction in a video]\\ }\fi
The problem of visual future prediction has received growing interest in the computer vision community.
It has led to various tasks depending on the objective of future prediction, such as human activity~\citep{Vondrick15,Ryoo11,Lan14}, event~\citep{Yuen10,Hoai13} and geometric path~\citep{Walker14}.
Although previous works achieved reasonable success on specific tasks, they are often limited to estimating predefined semantics and require fully labeled training data.
To alleviate this issue, approaches predicting representation of the future beyond semantic labels have been proposed.
\citet{Walker14} proposed a data-driven approach to predict the motion of a moving object, together with a coarse hallucination of the predicted motion.
\citet{Vondrick15} proposed a deep regression network to predict feature representations of the future frames.
These approaches are supervised and provide coarse predictions of what the future will look like.
Our work also focuses on unsupervised learning for prediction of the future, but on a more direct visual task: frame prediction.
\ifdefined\paratitle {\color{blue} [frame prediction on videos]\\ }\fi
Compared to predicting semantics, pixel-level prediction has been less investigated due to the difficulty of modeling the evolution of raw pixels over time.
Fortunately, recent advances in deep learning provide a powerful tool for sequence modeling, and enable the creation of novel architectures for modeling complex sequential data.
\cite{Ranzato14} applied a recurrent neural network developed for language modeling to frame prediction, posing the task as classifying each image region into an entry of a quantized patch dictionary.
\cite{Srivastava15} applied a sequence-to-sequence model to video prediction, and showed that Long Short-Term Memory (LSTM) is able to capture pixel dynamics.
\cite{Oh15} proposed an action-conditional encoder-decoder network to predict future frames in Atari games.
In addition to the different choices of architecture, other works addressed the importance of selecting the right objective function:
\cite{Lotter15} used an adversarial loss with combined CNN and LSTM architectures, and \cite{Mathieu15} employed a similar adversarial loss with additional regularization using a multi-scale encoder-decoder network.
\cite{FinnGL16} constructed a network that predicts transformations on the input pixels for next frame prediction.
\cite{DBLP:journals/corr/PatrauceanHC15} proposed a network that is able to predict the next frame in a video by explicitly predicting optical flow features.
\cite{Vondrick16} proposed a generative adversarial network for video which, by generating a background-foreground mask, is able to generate realistic-looking video sequences.
However, none of the previously mentioned approaches exploit spatial and temporal information separately in an unsupervised fashion.
In terms of the way data is observed, the closest work to ours is \citet{visualdynamics16}.
The differences are: (1) our model is deterministic while theirs is probabilistic; (2) our motion encoder is based on a convolutional LSTM \citep{convlstm}, which is a more natural module to model long-term dynamics; (3) our content encoder observes a single-scale input while theirs observes many scales; and (4) we directly generate image pixel values, which is a more complicated task.
We aim to exploit the existing spatio-temporal correlations in videos by decomposing the motion and content in our network architecture.
\ifdefined\paratitle {\color{blue} [Disentangling motion and contents in videos]\\ }\fi
To the best of our knowledge, the idea of separating motion and content has not been investigated in the task of unsupervised deterministic frame prediction.
The proposed architecture shares similarities to the two-stream CNN~\citep{Simonyan14}, which is designed for action recognition to jointly exploit the information from frames and their temporal dynamics.
However, in contrast to their network, we aim to learn features for temporal dynamics directly from raw pixels, and we combine the identified motion features with spatial features to make pixel-level predictions of the future.
\section{Appendix}
\begin{appendix}
\begin{figure}[!htb]
\hspace*{-.7cm}
\centering
\begin{subfigure}{0.04\linewidth}
\raggedleft
\rotatebox{90}{
\hspace{-.4cm}
\parbox{2cm}{\centering G.T.} \hspace{-.3cm} \parbox{2cm}{\centering ConvLSTM} \hspace{-.3cm} \parbox{2cm}{\centering MCnet}
}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=12}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/ours_0011.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/bl_0011.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/gt_0011.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=15}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/ours_0014.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/bl_0014.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/gt_0014.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=18}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/ours_0017.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/bl_0017.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/gt_0017.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{20pt}
\caption*{t=21}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/ours_0020.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/bl_0020.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/gt_0020.jpg}
\caption*{Boxing}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=24}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/ours_0023.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/bl_0023.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/gt_0023.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=27}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/ours_0026.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/bl_0026.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/gt_0026.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=30}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/ours_0029.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/bl_0029.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/boxing/gt_0029.jpg}
\end{subfigure}
\vspace{.1cm}
\hspace*{-.7cm} \\
\centering
\hspace*{-.7cm}
\begin{subfigure}{0.04\linewidth}
\raggedleft
\rotatebox{90}{
\hspace{.1cm}
\parbox{2cm}{\centering G.T.} \hspace{-.3cm} \parbox{2cm}{\centering ConvLSTM} \hspace{-.3cm} \parbox{2cm}{\centering MCnet}
}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/running/ours_0011.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/running/bl_0011.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/running/gt_0011.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/running/ours_0014.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/running/bl_0014.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/running/gt_0014.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/running/ours_0017.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/running/bl_0017.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/running/gt_0017.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{9pt}
\includegraphics[width=1\linewidth]{figs/KTH/running/ours_0020.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/running/bl_0020.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/running/gt_0020.jpg}
\caption*{Running}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/running/ours_0023.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/running/bl_0023.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/running/gt_0023.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/running/ours_0026.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/running/bl_0026.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/running/gt_0026.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/running/ours_0029.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/running/bl_0029.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/running/gt_0029.jpg}
\end{subfigure}
\vspace{.1cm}
\hspace*{-.7cm} \\
\centering
\hspace*{-.7cm}
\begin{subfigure}{0.04\linewidth}
\raggedleft
\rotatebox{90}{
\hspace{.1cm}
\parbox{2cm}{\centering G.T.} \hspace{-.3cm} \parbox{2cm}{\centering ConvLSTM} \hspace{-.3cm} \parbox{2cm}{\centering MCnet}
}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/walking/ours_0011.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/walking/bl_0011.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/walking/gt_0011.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/walking/ours_0014.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/walking/bl_0014.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/walking/gt_0014.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/walking/ours_0017.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/walking/bl_0017.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/walking/gt_0017.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{9pt}
\includegraphics[width=1\linewidth]{figs/KTH/walking/ours_0020.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/walking/bl_0020.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/walking/gt_0020.jpg}
\caption*{Walking}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/walking/ours_0023.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/walking/bl_0023.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/walking/gt_0023.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/walking/ours_0026.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/walking/bl_0026.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/walking/gt_0026.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/walking/ours_0029.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/walking/bl_0029.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/walking/gt_0029.jpg}
\end{subfigure}
\vspace{.1cm}
\hspace*{-.7cm} \\
\centering
\caption{Qualitative comparisons on the KTH test set. We display predictions starting from the $12^{\text{th}}$ frame, every $3$ timesteps. Clearer motion predictions can be seen on the \href{https://goo.gl/nG8ve1}{\color{blue} project website}.}
\label{fig:kth_qualitative2}
\vspace{-.5cm}
\end{figure}
\begin{figure}[t!]
\vspace{-10cm}
\hspace*{-.7cm}
\centering
\begin{subfigure}{0.04\linewidth}
\raggedleft
\rotatebox{90}{
\hspace{-.4cm}
\parbox{2cm}{\centering G.T.} \hspace{-.3cm} \parbox{2cm}{\centering ConvLSTM} \hspace{-.3cm} \parbox{2cm}{\centering MCnet}
}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=12}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/ours_0011.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/bl_0011.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/gt_0011.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=15}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/ours_0014.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/bl_0014.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/gt_0014.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=18}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/ours_0017.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/bl_0017.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/gt_0017.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{21pt}
\caption*{t=21}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/ours_0020.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/bl_0020.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/gt_0020.jpg}
\caption*{Handclapping}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=24}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/ours_0023.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/bl_0023.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/gt_0023.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=27}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/ours_0026.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/bl_0026.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/gt_0026.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=30}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/ours_0029.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/bl_0029.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handclapping/gt_0029.jpg}
\end{subfigure}
\vspace{.1cm}
\hspace*{-.7cm} \\
\centering
\hspace*{-.7cm}
\begin{subfigure}{0.04\linewidth}
\raggedleft
\rotatebox{90}{
\hspace{.1cm}
\parbox{2cm}{\centering G.T.} \hspace{-.3cm} \parbox{2cm}{\centering ConvLSTM} \hspace{-.3cm} \parbox{2cm}{\centering MCnet}
}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/ours_0011.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/bl_0011.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/gt_0011.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/ours_0014.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/bl_0014.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/gt_0014.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/ours_0017.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/bl_0017.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/gt_0017.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{9pt}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/ours_0020.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/bl_0020.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/gt_0020.jpg}
\caption*{Handwaving}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/ours_0023.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/bl_0023.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/gt_0023.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/ours_0026.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/bl_0026.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/gt_0026.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/ours_0029.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/bl_0029.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/handwaving/gt_0029.jpg}
\end{subfigure}
\vspace{.1cm}
\hspace*{-.7cm} \\
\centering
\caption{Qualitative comparisons on the KTH test set. We display predictions starting from the $12^{\text{th}}$ frame, every $3$ timesteps. Clearer motion predictions can be seen on the \href{https://goo.gl/nG8ve1}{\color{blue} project website}.}
\label{fig:kth_qualitative3}
\vspace{-.5cm}
\end{figure}
\clearpage
\newpage
\begin{figure}[!hbt]
\hspace*{-.7cm}
\hspace*{-1.1cm}
\centering
\begin{subfigure}{0.04\linewidth}
\raggedleft
\rotatebox{90}{
\parbox{0.1cm}{\rotatebox{-90}{t=11}} \hspace{2.1cm} \parbox{0.1cm}{\rotatebox{-90}{t=9}} \hspace{2.3cm} \parbox{0.1cm}{\rotatebox{-90}{t=7}} \hspace{2.1cm} \parbox{0.1cm}{\rotatebox{-90}{t=5}} \hspace{.6cm}
}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{G.T.}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/UCF101/151/gt_0004_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/151/gt_0006_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/151/gt_0008_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/151/gt_0010_grid.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{MCnet}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{{figs/UCF101/151/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0004_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/151/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0006_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/151/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0008_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/151/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0010_grid}.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{ConvLSTM}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{{figs/UCF101/151/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0004_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/151/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0006_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/151/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0008_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/151/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0010_grid}.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{\cite{Mathieu15}}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/UCF101/151/pred_5_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/151/pred_7_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/151/pred_9_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/151/pred_11_grid.jpg}
\end{subfigure}
\vspace{.1cm}
\hspace*{-1cm} \\
\centering
\hspace*{-.7cm}
\hspace*{-.8cm}
\begin{subfigure}{0.04\linewidth}
\raggedleft
\rotatebox{90}{
\parbox{0.1cm}{\rotatebox{-90}{t=11}} \hspace{2.1cm} \parbox{0.1cm}{\rotatebox{-90}{t=9}} \hspace{2.3cm} \parbox{0.1cm}{\rotatebox{-90}{t=7}} \hspace{2.1cm} \parbox{0.1cm}{\rotatebox{-90}{t=5}} \hspace{.6cm}
}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/UCF101/2231/gt_0004_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/2231/gt_0006_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/2231/gt_0008_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/2231/gt_0010_grid.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/UCF101/2231/{S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0004_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/2231/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0006_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/2231/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0008_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/2231/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0010_grid}.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{{figs/UCF101/2231/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0004_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/2231/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0006_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/2231/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0008_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/2231/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0010_grid}.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/UCF101/2231/pred_5_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/2231/pred_7_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/2231/pred_9_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/2231/pred_11_grid.jpg}
\end{subfigure}
\vspace{-5pt}
\caption{Qualitative comparisons on UCF-101. We display every other predicted frame, starting from the $5^{\text{th}}$ frame. The green arrows denote the top-30 closest optical flow vectors within image patches between MCnet and ground truth. Clearer motion predictions can be seen on the \href{https://goo.gl/nG8ve1}{\color{blue} project website}.}
\label{fig:ucf101_qualitative2}
\vspace{-.5cm}
\end{figure}
\newpage
\section{Qualitative and quantitative comparison with considerable camera motion and analysis} \label{sec:extquant}
In this section, we show frame prediction examples that contain considerable camera motion.
We analyze the effects of camera motion on our best network and the corresponding baselines: first on qualitative examples from UCF-101 (more complicated camera motion), and then on KTH (zoom-in and zoom-out camera effects).
\paragraph{UCF101 Results.}
As seen in Figure \ref{fig:ucf101_qualitative3} and Figure \ref{fig:ucf101_qualitative4}, our model handles foreground and camera motion for a few steps.
We hypothesize that the motion signals extracted from the observed frames are clear for the first few steps.
However, once predicted frames are fed back as inputs, prediction errors accumulate and the motion signals start to deteriorate.
When a considerable amount of camera motion is present in an image sequence, the motion signals are very dense; as predictions evolve further into the future, the accumulated errors cause these signals to become confused and quickly lost.
\vspace*{1cm}
\begin{figure}[!hbt]
\hspace*{-.7cm}
\centering
\begin{subfigure}{0.04\linewidth}
\raggedleft
\rotatebox{90}{
\parbox{0.1cm}{\rotatebox{-90}{t=11}} \hspace{2.1cm} \parbox{0.1cm}{\rotatebox{-90}{t=9}} \hspace{2.3cm} \parbox{0.1cm}{\rotatebox{-90}{t=7}} \hspace{2.1cm} \parbox{0.1cm}{\rotatebox{-90}{t=5}} \hspace{.6cm}
}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{G.T.}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/UCF101/1921/gt_0004_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/1921/gt_0006_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/1921/gt_0008_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/1921/gt_0010_grid.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{MCnet}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{{figs/UCF101/1921/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0004_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/1921/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0006_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/1921/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0008_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/1921/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0010_grid}.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{ConvLSTM}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{{figs/UCF101/1921/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0004_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/1921/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0006_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/1921/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0008_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/1921/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0010_grid}.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{\cite{Mathieu15}}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/UCF101/1921/pred_5_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/1921/pred_7_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/1921/pred_9_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/1921/pred_11_grid.jpg}
\end{subfigure}
\vspace{-5pt}
\caption{Qualitative comparisons on UCF-101. We display every other predicted frame, starting from the $5^{\text{th}}$ frame. The green arrows denote the top-30 closest optical flow vectors within image patches between MCnet and ground truth. Clearer motion predictions can be seen on the \href{https://goo.gl/nG8ve1}{\color{blue} project website}.}
\label{fig:ucf101_qualitative3}
\vspace{-.5cm}
\end{figure}
\newpage
\begin{figure}[!hbt]
\hspace*{-1cm}
\centering
\begin{subfigure}{0.04\linewidth}
\raggedleft
\rotatebox{90}{
\parbox{0.1cm}{\rotatebox{-90}{t=11}} \hspace{2.1cm} \parbox{0.1cm}{\rotatebox{-90}{t=9}} \hspace{2.3cm} \parbox{0.1cm}{\rotatebox{-90}{t=7}} \hspace{2.1cm} \parbox{0.1cm}{\rotatebox{-90}{t=5}} \hspace{.6cm}
}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{G.T.}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/UCF101/1861/gt_0004_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/1861/gt_0006_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/1861/gt_0008_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/1861/gt_0010_grid.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{MCnet}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{{figs/UCF101/1861/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0004_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/1861/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0006_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/1861/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0008_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/1861/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0010_grid}.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{ConvLSTM}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{{figs/UCF101/1861/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0004_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/1861/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0006_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/1861/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0008_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/1861/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0010_grid}.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{\cite{Mathieu15}}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/UCF101/1861/pred_5_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/1861/pred_7_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/1861/pred_9_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/1861/pred_11_grid.jpg}
\end{subfigure}
\vspace{.1cm}
\hspace*{-1cm} \\
\centering
\hspace*{-.7cm}
\hspace*{-.8cm}
\begin{subfigure}{0.04\linewidth}
\raggedleft
\rotatebox{90}{
\parbox{0.1cm}{\rotatebox{-90}{t=11}} \hspace{2.1cm} \parbox{0.1cm}{\rotatebox{-90}{t=9}} \hspace{2.3cm} \parbox{0.1cm}{\rotatebox{-90}{t=7}} \hspace{2.1cm} \parbox{0.1cm}{\rotatebox{-90}{t=5}} \hspace{.6cm}
}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/UCF101/1501/gt_0004_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/1501/gt_0006_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/1501/gt_0008_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/1501/gt_0010_grid.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{{figs/UCF101/1501/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0004_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/1501/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0006_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/1501/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0008_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/1501/S1M_MCNET_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0010_grid}.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{{figs/UCF101/1501/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0004_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/1501/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0006_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/1501/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0008_grid}.jpg}
\includegraphics[width=1\linewidth]{{figs/UCF101/1501/S1M_CONVLSTM_FAST_RES_ADV_L1_K=4_alpha=1.0_beta=0.001_num_step=1_lr=0.0001_0010_grid}.jpg}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\caption*{}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/UCF101/1501/pred_5_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/1501/pred_7_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/1501/pred_9_grid.jpg}
\includegraphics[width=1\linewidth]{figs/UCF101/1501/pred_11_grid.jpg}
\end{subfigure}
\hspace{-.8cm}
\vspace{-5pt}
\caption{Qualitative comparisons on UCF-101. We display predictions (in every other frame) starting from the $5^{\text{th}}$ frame. The green arrows denote the top-30 closest optical flow vectors within image patches between MCnet and ground-truth. More clear motion prediction can be seen in the \href{https://goo.gl/nG8ve1}{\color{blue} project website}.}
\label{fig:ucf101_qualitative4}
\vspace{-.5cm}
\end{figure}
\newpage
\paragraph{KTH Results.}
We were unable to find videos with background motion in the KTH dataset, but we found videos where the camera is zooming in or out for the actions of boxing, handclapping, and handwaving.
In Figure \ref{fig:kth_qualitative4}, we display qualitative results for such videos.
Our model is able to predict the camera zoom while continuing the action motion.
Compared to the performance observed in UCF101, the background in these videos does not change much.
Thus, the motion signals are well localized in the foreground (the human), and are neither confused with the background nor lost as quickly.
\begin{figure}[hbt!]
\vspace{.1cm}
\hspace*{-.7cm}
\centering
\begin{subfigure}{0.04\linewidth}
\raggedleft
\rotatebox{90}{
\hspace{-.4cm}
\parbox{2cm}{\centering G.T.} \hspace{-.3cm} \parbox{2cm}{\centering ConvLSTM} \hspace{-.3cm} \parbox{2cm}{\centering MCnet}
}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=12}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/ours_0011.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/bl_0011.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/gt_0011.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=15}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/ours_0014.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/bl_0014.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/gt_0014.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=18}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/ours_0017.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/bl_0017.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/gt_0017.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{21pt}
\caption*{t=21}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/ours_0020.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/bl_0020.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/gt_0020.jpg}
\caption*{Boxing}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=24}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/ours_0023.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/bl_0023.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/gt_0023.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=27}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/ours_0026.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/bl_0026.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/gt_0026.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\caption*{t=30}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/ours_0029.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/bl_0029.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/b_cameramotion/gt_0029.jpg}
\end{subfigure}
\vspace{.1cm}
\hspace*{-.7cm} \\
\centering
\hspace*{-.7cm}
\begin{subfigure}{0.04\linewidth}
\raggedleft
\rotatebox{90}{
\hspace{.1cm}
\parbox{2cm}{\centering G.T.} \hspace{-.3cm} \parbox{2cm}{\centering ConvLSTM} \hspace{-.3cm} \parbox{2cm}{\centering MCnet}
}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/ours_0011.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/bl_0011.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/gt_0011.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/ours_0014.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/bl_0014.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/gt_0014.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/ours_0017.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/bl_0017.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/gt_0017.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{9pt}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/ours_0020.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/bl_0020.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/gt_0020.jpg}
\caption*{Handclapping}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/ours_0023.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/bl_0023.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/gt_0023.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/ours_0026.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/bl_0026.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/gt_0026.jpg}
\end{subfigure}
\begin{subfigure}{0.13\linewidth}
\vspace{-7pt}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/ours_0029.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/bl_0029.jpg}
\includegraphics[width=1\linewidth]{figs/KTH/hc_cameramotion/gt_0029.jpg}
\end{subfigure}
\vspace{.1cm}
\hspace*{-.7cm} \\
\centering
\caption{Qualitative comparisons on KTH testset. We display predictions starting from the $12^{\text{th}}$ frame, in every $3$ timesteps. More clear motion prediction can be seen in the \href{https://goo.gl/nG8ve1}{\color{blue} project website}.}
\label{fig:kth_qualitative4}
\vspace{-.5cm}
\end{figure}
\newpage
\section{Extended quantitative evaluation}
In this section, we show an additional quantitative comparison with a baseline based on copying the last observed frame through time, for the KTH and UCF101 datasets.
Copying the last observed frame through time ensures perfect background prediction in videos where most of the motion comes from the foreground (i.e., a person performing an action).
However, if the foreground composes only a small part of the video, this baseline achieves a high prediction quality score even though no actual prediction is performed.
In Figure \ref{fig:extra_quantitative} below, we can see the quantitative comparison on both datasets.
Copying the last observed frame through time does a reasonable job on both datasets; however, the impact is larger on UCF101.
Videos in the KTH dataset comprise a simple background with minimal camera motion, which allows our network to easily predict both foreground and background motion, resulting in better image quality scores.
However, videos in UCF101 contain more complicated and diverse backgrounds which, in combination with camera motion, present a much greater challenge to video prediction networks.
From the qualitative results in Section~\ref{sec:extquant} and Figures~\ref{fig:ucf101_qualitative}, \ref{fig:ucf101_qualitative2}, \ref{fig:ucf101_qualitative3}, and \ref{fig:ucf101_qualitative4}, we can see that our network performs better in videos that contain isolated areas of motion than in videos with dense motion.
A simple copy$/$paste operation of the last observed frame ensures very high prediction scores in videos where very little motion occurs.
The considerable score boost from videos with small motion causes the simple copy$/$paste baseline to outperform MCnet in the overall performance on UCF101.
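As a concrete illustration of this baseline, the following minimal sketch (our own code, not from the paper) repeats the last observed frame over the prediction horizon and scores it with PSNR; on a static clip the baseline is perfect, while any motion lowers its score:

```python
import numpy as np

def copy_last_frame(context, horizon):
    """Baseline: predict future frames by repeating the last observed one."""
    return np.repeat(context[-1:], horizon, axis=0)

def psnr(pred, target, max_val=255.0):
    """Peak signal-to-noise ratio; higher means closer to the target."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

# Four identical input frames, then either a static or a slightly moving future.
context = np.full((4, 16, 16), 100.0)
future_static = np.full((2, 16, 16), 100.0)   # no motion at all
future_moving = np.full((2, 16, 16), 110.0)   # small global intensity change
pred = copy_last_frame(context, horizon=2)
```

This makes the bias explicit: the score of the baseline is determined entirely by how little the video changes, not by any modeling of motion.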
\begin{figure*}[h!]
\centering
\includegraphics[width=0.49\linewidth] {figs/kth_psnr_compare_bg.eps} \hspace{0.1cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_psnr_compare_thresh=0.0}.eps} \hspace{0.1cm} \\
\includegraphics[width=0.49\linewidth] {figs/kth_ssim_compare_bg.eps} \hspace{0.1cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_ssim_compare_thresh=0.0}.eps} \hspace{0.1cm}
\vspace{-.2cm}
\caption{Extended quantitative comparison including a baseline based on copying the last observed frame through time.}
\label{fig:extra_quantitative}
\end{figure*}
\newpage
\section{UCF101 Motion Disambiguation Experiments}
Due to the observed bias from videos with small motion, we perform experiments by measuring the image quality scores on areas of motion.
These experiments are similar to the ones performed in \cite{Mathieu15}.
We compute DeepFlow optical flow \citep{deepflow} between the previous and the current groundtruth image of interest, compute the magnitude, and normalize it to $[0,1]$.
The computed optical flow magnitude is used to mask the pixels where motion was observed.
We set to zero the pixels where the optical flow magnitude is less than 0.2, and leave all other pixels untouched in both the groundtruth and predicted images.
Additionally, we separate the test videos by the average $\ell_2$-norm of the time difference between target frames.
We separate the test videos into deciles based on the computed average $\ell_2$-norms, and compute image quality on each decile.
Intuitively, the $1^{st}$ decile contains videos with the least overall motion (i.e., frames that show the smallest change over time), and the $10^{th}$ decile contains videos with the most overall motion (i.e., frames that show the largest change over time).
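The masking step can be sketched as follows (our own code; the paper computes the flow with DeepFlow, whereas here the flow field is simply an input array):

```python
import numpy as np

def motion_mask(gt, pred, flow, thresh=0.2):
    """Zero out pixels whose normalized flow magnitude is below `thresh`,
    in both the groundtruth and the predicted frame."""
    mag = np.linalg.norm(flow, axis=-1)       # per-pixel flow magnitude (H, W)
    mag = mag / (mag.max() + 1e-8)            # normalize to [0, 1]
    keep = mag >= thresh                      # pixels where motion occurred
    return np.where(keep, gt, 0.0), np.where(keep, pred, 0.0)
```

Image quality metrics are then computed only on the surviving (moving) pixels.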
As shown in Figure~\ref{fig:extra_quantitative2}, when we only evaluate on pixels where rough motion is observed, MCnet reflects higher PSNR and SSIM, and clearly outperforms all the baselines in terms of SSIM.
The SSIM results show that our network is able to predict a structure (i.e., textures, edges, etc.) similar to the groundtruth images within the areas of motion.
The PSNR results, however, show that our method outperforms the simple copy$/$paste baseline for the first few steps, but then performs slightly worse.
The discrepancies observed between PSNR and SSIM scores could be due to the fact that some of the predicted images may not reflect the exact pixel values of the groundtruth even when the structures are similar.
SSIM scores are known to take into consideration image features that go beyond directly matching pixel values, reflecting more accurately how humans perceive image quality.
\begin{figure*}[htb!]
\centering
\vspace{.2cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_psnr_compare_thresh=0.2}.eps} \hspace{0.1cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_ssim_compare_thresh=0.2}.eps} \hspace{0.1cm}
\vspace{-.2cm}
\caption{Extended quantitative comparison on UCF101 including a baseline based on copying the last observed frame through time using motion based pixel mask.}
\label{fig:extra_quantitative2}
\end{figure*}
Figures \ref{fig:extra_quantitative3} and \ref{fig:extra_quantitative4} show the evaluation by separating the test videos into deciles based on the average $\ell_2$-norm of time difference between target frames.
This evaluation confirms that the copy-last-frame baseline scores highest in videos where motion is smallest.
The first few deciles (videos with small motion) show that our network is not just copying the last observed frame through time; otherwise, it would perform similarly to the copy-last-frame baseline.
The last deciles (videos with large motion) show our network outperforming all the baselines, including the copy-last-frame baseline, effectively confirming that our network does predict motion similar to the motion observed in the video.
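The decile split described above can be sketched as follows (our own code; videos are assumed to be arrays of target frames):

```python
import numpy as np

def motion_deciles(videos):
    """Bucket videos into deciles by the average L2 norm of the
    difference between consecutive target frames (0 = least motion)."""
    scores = np.array([
        np.mean([np.linalg.norm((a - b).ravel())
                 for a, b in zip(v[:-1], v[1:])])
        for v in videos
    ])
    ranks = scores.argsort().argsort()        # rank 0..N-1 by overall motion
    return ranks * 10 // len(videos)          # integer decile index 0..9
```

Evaluating each decile separately is what separates genuine motion prediction from mere frame copying in the comparison above.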
\begin{figure*}[htb!]
\vspace{-.6cm}
\centering
\caption*{$10^{th}$ decile}
\vspace{-.4cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_psnr_compare_perc=100_thresh=0.2}.eps} \hspace{.1cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_ssim_compare_perc=100_thresh=0.2}.eps} \vspace{-.6cm}\hspace{0.1cm}
\caption*{$9^{th}$ decile}
\vspace{-.4cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_psnr_compare_perc=90_thresh=0.2}.eps} \hspace{.1cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_ssim_compare_perc=90_thresh=0.2}.eps} \vspace{-.6cm}\hspace{0.1cm}
\caption*{$8^{th}$ decile}
\vspace{-.4cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_psnr_compare_perc=80_thresh=0.2}.eps} \hspace{.1cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_ssim_compare_perc=80_thresh=0.2}.eps} \vspace{-.6cm}\hspace{0.1cm}
\caption*{$7^{th}$ decile}
\vspace{-.4cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_psnr_compare_perc=70_thresh=0.2}.eps} \hspace{.1cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_ssim_compare_perc=70_thresh=0.2}.eps} \vspace{-.6cm}\hspace{0.1cm}
\caption*{$6^{th}$ decile}
\vspace{-.4cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_psnr_compare_perc=60_thresh=0.2}.eps} \hspace{.1cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_ssim_compare_perc=60_thresh=0.2}.eps} \hspace{0.1cm}
\caption{Quantitative comparison on UCF101 using motion based pixel mask, and separating dataset by average $\ell_2$-norm of time difference between target frames.}
\label{fig:extra_quantitative4}
\end{figure*}
\newpage
\begin{figure*}[htb!]
\vspace{-.6cm}
\centering
\caption*{$5^{th}$ decile}
\vspace{-.4cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_psnr_compare_perc=50_thresh=0.2}.eps}
\hspace{.1cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_ssim_compare_perc=50_thresh=0.2}.eps}
\vspace{-.6cm}\hspace{0.1cm}
\caption*{$4^{th}$ decile}
\vspace{-.4cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_psnr_compare_perc=40_thresh=0.2}.eps} \hspace{.1cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_ssim_compare_perc=40_thresh=0.2}.eps} \vspace{-.6cm}\hspace{0.1cm}
\caption*{$3^{rd}$ decile}
\vspace{-.4cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_psnr_compare_perc=30_thresh=0.2}.eps} \hspace{.1cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_ssim_compare_perc=30_thresh=0.2}.eps} \vspace{-.6cm}\hspace{0.1cm}
\caption*{$2^{nd}$ decile}
\vspace{-.4cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_psnr_compare_perc=20_thresh=0.2}.eps} \hspace{.1cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_ssim_compare_perc=20_thresh=0.2}.eps} \vspace{-.6cm}\hspace{0.1cm}
\caption*{$1^{st}$ decile}
\vspace{-.4cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_psnr_compare_perc=10_thresh=0.2}.eps} \hspace{.1cm}
\includegraphics[width=0.49\linewidth] {{figs/ucf101_ssim_compare_perc=10_thresh=0.2}.eps} \hspace{0.1cm}
\caption{Quantitative comparison on UCF101 using motion based pixel mask, and separating dataset by average $\ell_2$-norm of time difference between target frames.}
\label{fig:extra_quantitative3}
\end{figure*}
\clearpage
\section{Adversarial Training} \label{sec:GANs}
\cite{Mathieu15} proposed an adversarial training for frame prediction.
Inspired by \cite{NIPS2014_5423}, they proposed a training procedure that involves a generative model $G$ and a discriminative model $D$.
The two models compete in a two-player minimax game.
The discriminator $D$ is optimized to correctly classify its inputs as either coming from the training data (real frame sequence) or from the generator $G$ (synthetic frame sequence).
The generator $G$ is optimized to generate frames that \textit{fool} the discriminator into believing that they come from the training data.
At training time, $D$ takes the concatenation of the input frames that go into $G$ and the images produced by $G$.
The adversarial training objective is defined as follows:
\begin{equation*}
\min_{G} \max_{D} \ \ \log D\left(\left[{\mathbf{x}}_{1:t},{\mathbf{x}}_{t+1:t+T}\right]\right) +\log\left(1-D\left(\left[{\mathbf{x}}_{1:t},G\left({\mathbf{x}}_{1:t}\right)\right]\right)\right),
\end{equation*}
where $\left[.,.\right]$ denotes concatenation in the depth dimension, ${\mathbf{x}}_{1:t}$ denotes the input frames to $G$, ${\mathbf{x}}_{t+1:t+T}$ are the target frames, and $G\left({\mathbf{x}}_{1:t}\right)=\hat{{\mathbf{x}}}_{t+1:t+T}$ are the frames predicted by $G$.
In practice, we split the minimax objective into two separate, but equivalent, objectives: $\mathcal{L}_{\text{GAN}}$ and $\mathcal{L}_{\text{disc}}$.
During optimization, we minimize the adversarial objective alternating between $\mathcal{L}_{\text{GAN}}$ and $\mathcal{L}_{\text{disc}}$.
$\mathcal{L}_{\text{GAN}}$ is defined by
\begin{equation*}
\mathcal{L}_{\text{GAN}} = -\log D\left(\left[{\mathbf{x}}_{1:t},G\left({\mathbf{x}}_{1:t}\right)\right]\right) ,
\end{equation*}
where we optimize the parameters of $G$ to minimize $\mathcal{L}_{\text{GAN}}$ while the parameters of $D$ stay untouched.
As a result, $G$ is optimized to generate images that make $D$ believe that they come from the training data.
Thus, the generated images look sharper, and more realistic.
$\mathcal{L}_{\text{disc}}$ is defined by
\begin{equation*}
\mathcal{L}_{\text{disc}} = -\log D\left(\left[{\mathbf{x}}_{1:t},{\mathbf{x}}_{t+1:t+T}\right]\right) -\log\left(1-D\left(\left[{\mathbf{x}}_{1:t},G\left({\mathbf{x}}_{1:t}\right)\right]\right)\right),
\end{equation*}
where we optimize the parameters of $D$ to minimize $\mathcal{L}_{\text{disc}}$, while the parameters of $G$ stay untouched.
$D$ tells us whether its input came from the training data or the generator $G$.
Alternating between the two objectives causes $G$ to generate very realistic images, and $D$ to become unable to distinguish between generated frames and frames from the training data.
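For concreteness, the two split objectives can be evaluated numerically as follows (a plain NumPy sketch of our own; \texttt{d\_real} and \texttt{d\_fake} stand for the discriminator's sigmoid outputs on real and generated sequences):

```python
import numpy as np

def gan_objectives(d_real, d_fake, eps=1e-8):
    """Split minimax losses: l_gan is minimized w.r.t. G (D fixed),
    l_disc is minimized w.r.t. D (G fixed)."""
    l_gan = -np.log(d_fake + eps)                              # fool D
    l_disc = -np.log(d_real + eps) - np.log(1.0 - d_fake + eps)  # classify
    return float(np.mean(l_gan)), float(np.mean(l_disc))
```

When $D$ confidently rejects generated frames (small \texttt{d\_fake}), $\mathcal{L}_{\text{GAN}}$ is large, pushing $G$ toward sharper, more realistic predictions.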
\end{appendix}
\section*{Acknowledgements}
One of us (C. G.) is happy to thank S. Gao for useful contributions.
This work was supported in part by the Natural Sciences and
Engineering Research Council of Canada, in part by the FCAR fund of the
Qu\'ebec Government, and in part by the US Department of Energy
under grant DE-FG02-87ER40328.
Q: Querying a reflexive relation I have the following relation, and what I want is to obtain all the child records together with their parents.
The furthest I got is listing all the children, but I also need the parents' names:
select personas.nombre from personas
inner join hijo_padre hijo on hijo.persona_hijo_id = personas.id;
A: You should use a LEFT JOIN query. For example, with this schema:
CREATE TABLE persona (
id integer auto_increment primary key,
name varchar(50)
);
CREATE TABLE padre_hijo (
id integer auto_increment primary key,
padre_id integer not null,
hijo_id integer not null
);
INSERT INTO persona (name) VALUES
('Juan'), ('Antonio'), ('Ana'), ('Lucia'), ('Andres'), ('Marta');
INSERT INTO padre_hijo (padre_id, hijo_id) VALUES
(1, 5), (3, 5), (2, 6), (4, 6);
We could run the following query:
SELECT
h.name as nombre_hijo,
p.name as nombre_padre
FROM padre_hijo as cn
LEFT JOIN persona as p on cn.padre_id = p.id
LEFT JOIN persona as h on cn.hijo_id = h.id
Or this one, to concatenate the names of all the parents:
SELECT
h.name as nombre_hijo,
GROUP_CONCAT(p.name SEPARATOR ' ') as padres
FROM padre_hijo as cn
LEFT JOIN persona as p on cn.padre_id = p.id
LEFT JOIN persona as h on cn.hijo_id = h.id
GROUP BY h.id
A: The simplest answer would be to join personas twice, once for the child and once for the parent:
SELECT
    H.nombre AS nombre_hijo,
    P.nombre AS nombre_padre
FROM
    hijo_padre HP,
    personas H,
    personas P
WHERE
    HP.persona_hijo_id = H.id
AND
    HP.persona_padre_id = P.id;
\section{Introduction}
Our understanding of how galaxies originate and distribute on large scales
in the universe has greatly improved in the last two decades.
While the standard model of hierarchical galaxy clustering
\markcite{WR1987}(White \& Rees 1978) has been successful in explaining
the clustering pattern of galaxies revealed by redshift surveys,
it predicts a large number of low-mass galaxies ($L<10^{10}L_{\odot}$)
beyond that estimated from the observed luminosity function of galaxies
(\markcite{WF1991}White \& Frenk 1991;
\markcite{CAFNZ1994} Cole {\it et al.} 1994).
The hierarchical model should therefore involve some mechanism which
suppresses the formation of such small galaxies. Main mechanisms so
far proposed include an energy feedback from supernovae that prevents
the collapse of a forming galaxy (Dekel \& Silk 1986; Lacey \& Silk 1992)
and a photoionization by ultraviolet background radiation that keeps
the gas hot and unable to collapse (Dekel \& Rees 1987; Efstathiou 1992).
A significant body of new observations of nearby dwarf galaxies has revealed
a web of filaments, loops and expanding super giant shells which are
imprinted in the ionized gas around individual galaxies
(\markcite{MFD1992}Meurer, Freeman \& Dopita 1992;
\markcite{MHW1995}Marlowe, Heckman \& Wyse 1995;
\markcite{H1996}Hunter 1996). Since the traces of energetic winds
originate from supernovae or massive stars, it is evident that the
heat input from them greatly affects the dynamics of small galaxies.
This feedback of energy into the interstellar medium must play a
decisive role in the early stage of galaxy evolution when star formation
rate is expected to be much higher.
Dekel \& Silk (1986) showed that the supernova feedback mechanism nicely
accounts for the observed correlations between metallicity, color and
luminosity of galaxies (see also Vader 1986; Yoshii \& Arimoto 1987).
There is however a clear distinction in structural and chemical quantities
between dwarf ellipticals (dEs) and normal ellipticals in spite of
their morphological similarity. The central concentration of dEs is
relatively low and their luminosity profiles are best fitted by an
exponential function, whereas the profiles of normal ellipticals are
known to follow de Vaucouleurs' law
(\markcite{FL1983}Faber \& Lin 1983;
\markcite{BST1984}Binggeli, Sandage \& Tarenghi 1984;
\markcite{IWO1986}Ichikawa, Wakamatsu \& Okamura 1986;
\markcite{CB1987}Caldwell \& Bothun 1987).
Moreover, the color of many dEs becomes redder towards outer radii of
the system (Vader {\it et al.} 1988; Kormendy \& Djorgovski 1989;
Chaboyer 1994), and this trend of color gradient is clearly opposite
to normal galaxies.
The origin of these striking features of dEs remains yet to be explained
(for a review see Ferguson \& Binggeli 1994). In particular, no attempts
have ever been made to examine whether the supernova feedback mechanism
is viable also in this context.
In this paper, we use three dimensional simulation code with a
cosmologically motivated initial condition and investigate the formation
and evolution of a dE galaxy taking into account the dynamical responses
of the system from supernova-driven winds. Our simulation shows that
such winds propagating outwards from inside the system collide with the
infalling gas and produce the super shell in which stars are formed.
This specific process of star formation turns out to reproduce the
observed features of dEs, and therefore the heating by supernovae proves
to be an ideal suppressing mechanism against the efficient formation of
low-mass galaxies in the hierarchical clustering model.
\section{Numerical Method}
Our simulation uses a hybrid $N$-body/hydrodynamics code which is
applicable to a complex system consisting of dark matter, stars and
gas. The gas is allowed to form stars and is subject to
physical processes such as the radiative cooling and the energy feedback
from supernovae and massive stars. The cooling rate of the gas is
calculated assuming the primordial composition, and the effect of
photoionization by ultraviolet background radiation is ignored for
simplicity. Chemical and photometric evolution of the system can
also be simulated by this code. The collisionless dynamics for dark
matter particles and stars is treated by the $N$-body method and the
gas dynamics by the method of smoothed particle hydrodynamics (SPH)
(\markcite{HK1989}Hernquist \& Katz 1989; \markcite{M1992}Monaghan 1992).
Our numerical technique is essentially similar to that adopted by
\markcite{S1996}Steinmetz (1996). We only briefly describe
how to calculate the self-gravity, star formation, and energy feedback.
The details will be given in a forthcoming paper
(\markcite{MYTN1996c}Mori {\it et al.} 1996b).
Self-gravity calculations are run on the hardware GRAPE-3AF
(\markcite{GRAPE1990}Sugimoto {\it et al.} 1990) by using the ``Remote-GRAPE''
system. This remote system is newly developed in order to allow an access
to the GRAPE-3AF from local workstations which are not physically connected
to the host workstation. Thus, self-gravity calculations can be performed
in parallel with other calculations, so that the calculation time is
considerably shortened. The performance analysis of this system is
reported by
\markcite{NMN1996}Nakasato {\it et al.} (1996) and
\markcite{MYTN1996c}Mori {\it et al.} (1996b).
Stars are assumed to form in rapidly cooling, Jeans unstable and converging
regions at a rate which is inversely proportional to the local dynamical
time (\markcite{K1992}Katz 1992;
\markcite{NW1993}Navarro \& White 1993;
\markcite{SM1994}Steinmetz \& M\"uller 1994).
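This criterion and rate can be sketched as follows (our own illustration; the efficiency constant \texttt{c\_star} is an assumed free parameter, not a value given in the paper):

```python
def star_formation_rate(rho_gas, t_cool, t_dyn, jeans_unstable,
                        div_v, c_star=0.1):
    """Gas forms stars only if it cools rapidly (t_cool < t_dyn),
    is Jeans unstable, and is converging (div v < 0); the rate is
    inversely proportional to the local dynamical time."""
    if t_cool < t_dyn and jeans_unstable and div_v < 0.0:
        return c_star * rho_gas / t_dyn
    return 0.0
```

Gas particles failing any one of the three conditions form no stars at all, which concentrates star formation in dense, collapsing regions.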
When a star particle is formed, we identify this with approximately
$10^4 $ single stars and distribute the associated mass of the star
particle over the single stars according to Salpeter's (1955) initial
mass function. The lower and upper mass limits are taken as
$m_l=0.1M_\odot$ and $m_u=50M_\odot$, respectively.
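For example, the number fraction of SN II progenitors ($m>8\,M_\odot$) implied by these IMF limits can be estimated as follows (our own sketch of the standard power-law integral):

```python
def salpeter_number_fraction(m_lo, m_hi, ml=0.1, mu=50.0, alpha=2.35):
    """Fraction by number of stars with m_lo < m < m_hi under a
    Salpeter IMF dN/dm proportional to m**(-alpha) on [ml, mu]."""
    def integral(a, b):
        p = 1.0 - alpha
        return (b ** p - a ** p) / p
    return integral(m_lo, m_hi) / integral(ml, mu)
```

Only a fraction of a percent of the roughly $10^4$ stars in each star particle are massive enough to explode as Type II supernovae, yet they dominate the energy feedback.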
Our SPH algorithm for treating the energy feedback from massive stars
is a more physically motivated one and is different from those adopted
by previous authors
(\markcite{K1992}Katz 1992; \markcite{NW1993}Navarro \& White 1993;
\markcite{MH1994}Mihos \& Hernquist 1994).
When a star particle is formed and identified with a stellar assemblage
as described above, stars more massive than 8 $M_{\odot}$ start to explode
as Type II supernovae (SNe II) with the explosion energy of $10^{51}$ ergs
and their outer layers are blown out with synthesized metals leaving the
remnant of 1.4 $M_{\odot}$. We can regard this assemblage as continuously
releasing the energy at an average rate of
$8.44\ 10^{35}$ ergs sec$^{-1}$ per star during the explosion period
from $t(m_u)=5.4\ 10^6$ yrs until $t(8M_\odot)=4.3\ 10^7$ yrs
where $t(m)$ is the lifetime of a star of mass $m$.
Prior to the onset of SN explosions, however, their progenitors
develop stellar winds and also release the energy of $10^{50}$ ergs
into the interstellar medium at an average rate of
$7.75\ 10^{34}$ ergs sec$^{-1}$ per star. Consequently, once a new
star particle is formed, the energy from stellar winds is supplied to the
gas particles within a sphere of radius $R_{snr}$, and the energy,
metals and material from SNe II are subsequently supplied to the same
region. The radius $R_{snr}$ is set equal to the maximum extension of the
shock front in the adiabatic phase of supernova remnant and is given by
$R_{snr}=32.9 E_{51}^{\;1/4}\,n^{-1/2}$ pc
(\markcite{SS1979}Shull \& Silk 1979) where $E_{51}$ is the released energy
in units of $10^{51}$ergs and $n$ is the number density of the gas in units
of cm$^{-3}$ which surrounds the star particle. The gas within $R_{snr}$
remains adiabatic until multiple SN phase ends at $t(8M_\odot)$, and then
it cools according to the adopted cooling rate of the gas.
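Two of the numbers above can be reproduced directly (our own sketch; the seconds-per-year conversion is an assumed constant):

```python
import math

YEAR_S = 3.156e7  # seconds per year (assumed conversion factor)

def snr_radius_pc(e51=1.0, n=1.0):
    """Maximum adiabatic-phase SNR shock radius,
    R_snr = 32.9 * E51**(1/4) * n**(-1/2) pc (Shull & Silk 1979)."""
    return 32.9 * e51 ** 0.25 / math.sqrt(n)

def mean_release_rate(e_ergs, t_start_yr, t_end_yr):
    """Average energy-release rate per star over a feedback phase."""
    return e_ergs / ((t_end_yr - t_start_yr) * YEAR_S)
```

Spreading $10^{51}$ ergs per SN II over the explosion period from $5.4\times10^6$ to $4.3\times10^7$ yrs recovers the quoted average rate of about $8.4\times10^{35}$ ergs sec$^{-1}$ per star.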
We compute the chemical evolution using the new calculations
of stellar nucleosynthesis products (Tsujimoto {\it et al.} 1995).
\section{Simulation Result}
Following a standard model of the cold dark matter (CDM) universe
($\Omega_0=1$, $H_0=50$ km$\,$sec$^{-1}$Mpc$^{-1}$), we consider
a less massive protogalaxy as a gas sphere with mass of $10^{9}\,M_{\odot}$
embedded in a $1 \sigma$ density peak having a total mass of
$10^{10}M_{\odot}$ with a baryon to dark matter ratio equal to 1/9.
The distribution of dark matter halo is assumed to have a King profile
with the central concentration index of $c=1$. This two-component system
is made to settle in a virial equilibrium from which the gas temperature
and the velocity dispersion of dark matter are estimated as an initial
condition. Our simulation uses $10^4$ particles for each of gas and dark
matter particles. The gravitational softening parameter is adopted as
78.9 pc for gas particles and 36.6 pc for collisionless particles.
As soon as we start a simulation, the gas in the central region of
the protogalaxy rapidly cools and begins to contract owing to the
self-gravity of dark matter and gas. When the gas temperature becomes
close to $10^4 K$ and stops decreasing, a quasi-isothermal contraction
is established. A further increase of the gas density causes a burst
of star formation in the central region. Thereafter, as massive stars
explode as SNe II, the surrounding gas acquires the thermal energy and
the gas temperature rises up to about $10^6$K. At the same time, the
gas is gradually polluted with synthesized metals from SNe II. About
5\% of the initial gas mass is used up in this formation of the
first generation stars.
The shock waves propagate outwards and the supernova-driven spherical
outflow occurs from inside. This outflow collides with the infalling
gas and the high-density super shell is eventually formed. While the gas
is continuously swept up by the super shell, the gas density further
increases due to the enhanced cooling rate in the already dense shell.
Then the intense formation of stars begins within the super shell,
and subsequent SN explosions further accelerate the outward expansion
of the shell. Star formation continues in the expanding shell for
about $10^8$ yrs until the gas density in the shell becomes too low to
form new stars. About 26\% of the initial gas mass is turned into stars
in this stage. The remaining gas in the shell is blown out to the
intergalactic space at supersonic speed.
The ejected gas has already been enriched to the yield value
$y_Z\approx Z_\odot$, i.e. a metal abundance of
$\log Z/Z_\odot \sim 0.0$.
Figure 1 shows the ring-like
distribution of gas particles and newly born star particles near the
$X-Y$ sectional plane at the elapsed time of $\sim 2\times 10^7$ yrs in the
simulation. It is evident from this figure that the star-forming site
is well confined in the shell.
In such a way, a total of about 31\% of the initial gas mass has
turned into stars before the dwarf galaxy is formed. The baby stars
initially have the velocity vectors of the gas from which the stars are
formed. Therefore, the first generation stars have zero systematic
velocity, but the later generation stars have a large outward
radial velocity component. The oscillation of swelling and contraction of
the system continues for several $10^8$ yrs, and the system settles
into a quasi-steady state in $3\times 10^9$ yrs.
The resulting stellar system forms a loosely bound virialized system
due to the significant mass loss and has a large velocity dispersion
and a large core. Consequently the surface mass distribution is
approximately exponential (Figure 2a) and differs from the de Vaucouleurs
profile, which is more concentrated towards the galaxy center.
In order to enable a more direct comparison
with the observation, we have computed the photometric evolution up to
10 Gyrs based on the method of stellar population synthesis, using the
updates of stellar evolutionary tracks compiled by Kodama \& Arimoto
(1996). The resulting surface $B$-band brightness distribution at 10 Gyrs
is clearly exponential (Figure 2b). The effective radius within
which half of the total light is contained is 1.42 kpc.
The integrated blue luminosity of the system is $M_B=-14.5$ mag.
Stars are formed for the most part before the gas is fully polluted to
the yield value $y_Z\approx Z_\odot$ of the synthesized metals.
The average metal abundance of the stars in the system is as low as
$\log Z/Z_\odot\sim -1.74$. This metallicity is consistent with
a range covered by the observations, but is much lower than those
of normal galaxies (Dekel \& Silk 1986; Yoshii \& Arimoto 1987).
One outstanding feature discovered by our simulation is that the radial
distribution of metal abundance in this system has a {\it positive}
gradient (Figure 2c), in sharp contrast to the observed negative
gradient for massive galaxies (Carollo, Danziger \& Buson 1993).
We note that the star-forming site moves outwards with the expanding shell
and the gas in this shell is gradually enriched with synthesized metals
from SNe II. Stars of later generations are necessarily born at larger radii
with larger metallicities, leading to emergence of the positive metallicity
gradient in the resulting stellar system.
Since the $V-K$ color sensitively traces the metallicity of underlying
stellar population (Yoshii \& Arimoto 1991), we calculate the radial
distribution of the integrated $V-K$ color (Figure 2d), and the result
is consistent with the observed trend of the inverse color gradient for
dwarf galaxies (Vader {\it et al.} 1988; Kormendy \& Djorgovski 1989;
Chaboyer 1994).
\section{Summary \& Discussion}
A three-dimensional $N$-body/SPH simulation code, combined with stellar
population synthesis, is used to follow the dynamical and chemical
evolution of a dwarf protogalaxy with $10^{10}M_\odot$ (baryonic/dark=1/9)
which originates from a $1\sigma$ CDM perturbation. This less massive
galaxy receives significant dynamical responses from the heat input by
stellar winds and supernovae.
The first star burst near the center of the system produces a supersonic
spherical outflow of the gas. This outflow collides with the infalling
gas and gives rise to an expanding dense shell. Then, stars begin to
form in the expanding shell, with the star-forming site propagating outwards with
the shell. We find from the simulation that this consecutive process
of star formation creates the exponential brightness profile and the
inverse color gradient of the system in agreement with the observations
of dwarf galaxies.
\markcite{A1994}Athanassoula (1994) performed one-dimensional simulations
of the dynamical evolution of dE galaxies including the energy feedback
from supernovae. The models without dark halo are shown to give a better
agreement with observations than those with dark halo. Our more realistic,
three-dimensional simulations, however, indicate that the dark halo is
necessary and plays a vital role in forming the bound stellar system;
otherwise the gas is blown out and the system disrupts completely.
In general, the color gradient of galaxies is created by the gradient
in either metallicity or age of the underlying stellar population.
Simple models of chemical evolution of galaxies usually predict the
negative metallicity gradient which corresponds to the color becoming
redder towards the galaxy center. Since dwarf galaxies have the inverse
color gradient, Vader {\it et al.} (1988) were led to interpret this
observed trend in terms of the positive age gradient. We note however
that stars with very low metallicities must have been formed on very
short timescales and therefore no appreciable age difference results.
The above puzzling situation indicates that previous results based on
simple models of chemical evolution cannot be applied to small systems
like dwarf galaxies. We demonstrate in this paper that dynamical
modelling is the only proper way to investigate the evolution of dwarf
galaxies. Successful reproduction of their basic features in our
simulation suggests that the stellar energy feedback mechanism is indeed
a likely mechanism acting against the efficient formation of low-mass galaxies
in the CDM universe.
\acknowledgments
We are grateful to T. Shigeyama and M. Chiba for many fruitful discussions,
to T. Kodama for providing us the tables of population synthesis prior to
the publication, and to N. Nakasato for preparing the Remote-GRAPE library.
This research has been supported in part by the Grant-in-Aid for Scientific
Research (05242102, 06233101) and Center-of-Excellence Research (07CE2002)
of the Ministry of Education, Science, and Culture in Japan.
\section{Introduction}
Images captured from drones differ from images of general objects captured using ordinary methods. Unusual aspect ratios, irregular points of view and a lack of distinguishing details of objects in drone images are some of the differences between regular images and drone images. For example, MSCOCO \cite{lin2014microsoft} is a large-scale object detection, segmentation, and captioning dataset containing images of common objects taken in their general contexts; the foreground and the background are usually well separated and the images are of high resolution and quality. These object characteristics that help models in better identification are missing in drone images, as seen in Fig. \ref{eady_state}.
Along with small object sizes, the density of object clustering is high in drone footage.
\begin{figure*}
\centering
\subfloat[]{\includegraphics[width=0.33\textwidth]{in1}}\hfill
\subfloat[]{\includegraphics[width=0.33\textwidth]{in2}}\hfill
\subfloat[]{\includegraphics[width=0.33\textwidth]{in3}}\\
\subfloat[]{\includegraphics[width=0.33\textwidth]{in4}}\hfill
\subfloat[]{\includegraphics[width=0.33\textwidth]{in5}}\hfill
\subfloat[]{\includegraphics[width=0.33\textwidth]{in6}}\\
\caption{Samples from the VisDrone2019 dataset.}
\label{eady_state}
\end{figure*}
\section{Related work}
In most scenarios the objects of interest occupy only a small region of the image. Object detection algorithms based on CNNs are broadly classified as two-stage detectors and single-stage detectors. Briefly, in the case of two-stage detectors, the image is first passed through a pre-trained CNN to extract high-level features. On the extracted feature maps, a fully convolutional network called a Region Proposal Network (RPN) is applied to produce two outputs, namely the probability that a region contains an object and the coordinates of the bounding box. The region proposal network is trained to efficiently extract a predefined number (k = 2000) of regions from images. RPNs in two-stage detectors decide whether a region is background or requires further processing. This enables them to achieve good generalization and performance. However, this enhanced performance comes at the cost of large inference time.
Single stage detectors overcome the challenges posed by RPNs by using a fixed number of proposals, acquired through dense sampling of regions at different scales and aspect ratios.
\subsection{Two stage detectors}
R-CNN, one of the first successful detectors, uses selective search to extract 2000 regions from the image, followed by a feature extractor and an SVM to obtain the object scores and offset values. Fast R-CNN \cite{girshick2015fast} generates region proposals on the feature map obtained by the convolutional network instead of on the input image. This makes the pipeline faster, as the convolution operation is done only once instead of 2000 times. Faster R-CNN \cite{ren2015faster} further refines this work by replacing the selective search algorithm with a CNN.
\subsection{One stage detectors}
Presently, the Single Shot MultiBox Detector (SSD) \cite{liu2016ssd} and YOLOv3 \cite{redmon2018yolov3} are the most widely used one-stage object detection models. SSD uses a modified VGG-16 model pretrained on ImageNet as its backbone, with additional convolutional feature layers of progressively decreasing sizes. The Deconvolutional SSD (DSSD) improves on the SSD by using deconvolution modules, which upsample the data and combine them with the feature layers of SSD. YOLO has a detection pipeline similar to SSD but trades off accuracy for speed by using a fixed grid-cell aspect ratio and a lighter backbone. One-stage detectors, however, suffer from a heavy imbalance between foreground and background examples due to the fixed sampling of candidate boxes. RetinaNet \cite{lin2017focal} tackles this problem by introducing the focal loss, a variant of cross-entropy loss that weighs down the loss assigned to well-classified examples. However, this still does not remove the heavy computation over fixed candidate boxes, the majority of which belong to the background. We evaluate a new detection network, CenterNet, which uses a keypoint estimation network to find potential objects, thus significantly reducing the inference time.
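To make the focal-loss idea concrete, here is a minimal sketch (our own illustration, not RetinaNet's implementation): the modulating factor $(1-p_t)^\gamma$ shrinks the loss of examples the model already classifies well.

```python
import math

def focal_loss(p, y, gamma=2.0):
    """Binary focal loss for predicted probability p and label y in {0, 1}.

    With gamma = 0 this reduces to the ordinary cross-entropy loss.
    """
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    return -((1.0 - p_t) ** gamma) * math.log(p_t)
```

A well-classified positive (p = 0.9) contributes about 100$\times$ less loss at $\gamma=2$ than under plain cross-entropy, which is how the abundant easy background examples stop dominating training.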
\section{Materials and Methods}
\subsection{Dataset}
We use the VisDrone2019 DET dataset for object detection in images, and the VisDrone2019 VID dataset for object detection in videos. Both datasets are prepared by the AISKYEYE team at the Lab of Machine Learning and Data Mining at Tianjin University, China.
\subsection{CenterNet}
Most successful object detectors, such as the aforementioned SSD and YOLOv3, enumerate a nearly exhaustive list of potential object locations and classify each of them. This is wasteful, inefficient, and requires additional post-processing. CenterNet takes a different approach - it models an object as a single point - the center point of its bounding box. It uses keypoint estimation to find center points and regresses to all other object properties, such as size, 3D location, orientation, and even pose. This approach is end-to-end differentiable, simpler, faster, and more accurate than corresponding bounding box based detectors.
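The center-point decoding step can be sketched as follows (a simplified NumPy illustration, not the authors' code): peaks of the predicted center heatmap are kept only if they are local maxima in a 3$\times$3 window, which replaces IoU-based NMS.

```python
import numpy as np

def extract_centers(heatmap, top_k=5, threshold=0.3):
    """Return up to top_k (y, x, score) center candidates from one heatmap."""
    h, w = heatmap.shape
    padded = np.pad(heatmap, 1, mode="constant", constant_values=-np.inf)
    # 3x3 max filter: a cell survives only if it equals its neighborhood max
    neighborhood = np.stack(
        [padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)]
    ).max(axis=0)
    keep = (heatmap == neighborhood) & (heatmap >= threshold)
    ys, xs = np.nonzero(keep)
    order = np.argsort(-heatmap[ys, xs])[:top_k]
    return [(int(ys[i]), int(xs[i]), float(heatmap[ys[i], xs[i]])) for i in order]
```

Each surviving peak is then paired with the regressed width/height and offset at that location to form a box, with no exhaustive anchor enumeration.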
We use CenterNet with an HourGlass-104 backbone \cite{zhou2019objects} for the task of detecting objects from aerial imagery.
\subsubsection{Pre-processing}
The training and validation data provided by the AISKYEYE team were used for training and validating the models. The images were re-sized to 1024 $\times$ 1024 and normalized using ImageNet \cite{deng2009imagenet} statistics.
\subsubsection{Training}
The model was initialized with weights pre-trained on the COCO \cite{lin2014microsoft} dataset. The learning rate was initialized at $2.5\times10^{-4}$ and was reduced by a factor of 10 at the end of the 90th and 120th epochs. The parameters of the network were optimized using ADAM \cite{kingma2014adam} as the optimizer.
\subsubsection{Testing}
During inference, the images were re-sized to 2048 $\times$ 2048 and normalized using ImageNet statistics. We make use of two test-time augmentations, namely i) horizontal flipping and ii) multi-scale testing at 0.5, 0.75, 1, 1.25 and 1.5 times the input resolution.
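The flip part of this augmentation can be sketched as below (hypothetical helpers, with a stand-in `model` callable and boxes in [x1, y1, x2, y2] format): detections from the mirrored image are mapped back before being pooled with the originals.

```python
import numpy as np

def flip_boxes(boxes, width):
    """Mirror (N, 4) boxes [x1, y1, x2, y2] about the vertical image axis."""
    out = boxes.copy()
    out[:, 0] = width - boxes[:, 2]
    out[:, 2] = width - boxes[:, 0]
    return out

def detect_with_flip(model, image):
    """Pool detections from the image and its horizontal mirror.

    `model` is any callable returning an (N, 4) array of boxes; in practice
    a score-aware merge or NMS would follow the concatenation.
    """
    width = image.shape[1]
    boxes = model(image)
    flipped = flip_boxes(model(image[:, ::-1]), width)
    return np.concatenate([boxes, flipped], axis=0)
```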
\section{Results}
\par The performance of the model was evaluated on a variety of datasets, viz. held-out validation data and challenge data.
\subsection{Evaluation Metric}
\par The performance of a model was evaluated on the basis of average precision and average recall. Precision was measured at various IoU thresholds [0.5:0.05:0.95] between the bounding boxes generated by the algorithm and the ground truth. The maximum number of detections per image (1, 10, 100 and 500) was varied to compute average recall.
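The IoU underlying these thresholds is a simple area ratio; a minimal reference implementation (ours, for illustration) for axis-aligned [x1, y1, x2, y2] boxes:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes [x1, y1, x2, y2]."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection counts as a true positive at threshold $t$ only if iou(pred, gt) $\geq t$, so sweeping $t$ from 0.5 to 0.95 rewards increasingly precise localization.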
\subsection{Effect of backbone}
CenterNet provides the flexibility to use numerous backbone models such as ResNet-18, DLA-34 and Hourglass-104. On the validation data (n = 548 images), the performance of CenterNet with each of the aforementioned networks as the backbone is given in Table \ref{backbone}. For each model, the images were re-sized to 512 $\times$ 512 and fed as input to the network. Additionally, we compare the performance of CenterNet with YOLOv3 (416$\times$416).
\begin{table}[]
\caption{Performance of CenterNet with different backbone on the validation data (n=548 images).}
\centering
\begin{tabular}{cc}
\hline
Backbone & mAP \\ \hline
ResNet18 & 13.36 \\ \hline
DLA-34 & 24.18 \\ \hline
Hourglass-104 & 31.97 \\ \hline
YOLOv3 (416x416) & 8 \\ \hline
\end{tabular}
\label{backbone}
\end{table}
\subsection{Effect of test time augmentation}
Augmenting the data during inference is an often-used technique in the literature to reduce variance. We examine the effect of using horizontal flipping as a test-time augmentation scheme. Table \ref{augmentation} compares the performance of an Hourglass-104 CenterNet model with and without test-time augmentation.
\begin{table}[]
\centering
\caption{Effect of test-time augmentation.}
\begin{tabular}{cc}
\hline
Augmentation& mAP \\ \hline
No flip & 31.97 \\ \hline
Horizontal flip & 32.99 \\ \hline
\end{tabular}
\label{augmentation}
\end{table}
\par From the table, we observe that adding test-time augmentation improves the performance of the network by about 1\%.
\subsection{Effect of multi-scale testing}
On images acquired from drones, various classes, such as people and pedestrians to name a few, occupy fewer pixels than relatively larger objects such as cars, buses and trucks. Re-sizing images to 512 $\times$ 512 may lead to a loss of discriminative features for smaller objects. In such scenarios, re-sizing images to a larger dimension (say 1024 or 2048) is an often-used technique. To further enhance the performance, we make use of a multi-scale technique, wherein the images are scaled to different levels, namely 0.5, 0.75, 1, 1.25 \& 1.5.
\par We also study the effect of the scaling parameter with respect to the input resolution. From the experiments carried out, we observed that at 512$\times$512 resolution, using higher scales, i.e. 1--2, produced better performance than lower scales (0.5--1.5). A possible reason is that lower scales would result in images having even lower resolution than the input resolution (512$\times$512). At a higher input resolution (2048$\times$2048), scales from 0.5--1.5 produce better results than higher scales (1--2). From our experiments we observe that increasing the resolution beyond 2048 by scaling (say 4096, scale = 2) does not enhance the performance of the model. Apart from multi-scale testing, we also study the effect of horizontally flipping the images along with multi-scale testing. We observe that removing the horizontal flip from multi-scale testing during inference leads to a dip in performance of approximately 2\%.
\par Based on this performance, we set the input resolution of the image to 2048 and the scales to 0.5, 0.75, 1, 1.25 and 1.5. The performance of the model across each class is given in Fig. \ref{steady_state}.
\begin{table}[]
\centering
\caption{Effect of input resolution. At each resolution, we infer at various scales. For each range of scales, the step size of the scaling parameter is set to 0.25. Additionally, at each scale we also include test-time augmentation (horizontal flip).}
\begin{tabular}{ccc}
\hline
Input Resolution &Scale & mAP \\ \hline
512 & 0.5-1.5& 43.17 \\ \hline
512 & 1-2.5& 49.10 \\ \hline
2048 & 0.5-1.5 & 58.03\\ \hline
2048 & 1-2 & 51.99\\ \hline
2048 (no-flip) & 0.5-1.5 & 56.88 \\ \hline
\end{tabular}
\label{resolution}
\end{table}
\begin{figure*}
\centering
\subfloat[]{\includegraphics[width=0.50\textwidth]{1}}\hfill
\subfloat[]{\includegraphics[width=0.50\textwidth]{2}}\\
\subfloat[]{\includegraphics[width=0.60\textwidth]{MAPU}}\\
\caption{Performance of model on the validation data (n=528). a) Class-wise performance on the validation data. b) Performance of the model on sample data from validation data (0000001-02999), c) Performance on validation data (0000069-01878). }
\label{steady_state}
\end{figure*}
\subsection {Performance on Challenge data}
The performance of all competitors was evaluated on the dataset provided by the challenge organizers. For the task of detecting objects in images (Track 1), the dataset comprises 3,190 images. On the test data, our algorithm stands 7$^{th}$ on the leader-board with an overall mAP of 27.83\%. Table \ref{track1andtrack2} illustrates the performance of the algorithm across various classes, and Table \ref{track1_metricwise} compares the performance of the proposed solution against other top-performing algorithms.
\par Additionally, we also participate in Track 2 of the challenge, i.e. the detection of objects in videos. The dataset comprises 33 sequences (12,968 images). The same model used in Track 1, without any re-training or fine-tuning, was utilized for Track 2. On the leader-board, the proposed algorithm was placed 5$^{th}$, and Table \ref{track2_metricwise} compares the performance of the solution with other top-performing algorithms. Table \ref{track1andtrack2} illustrates the class-wise performance of the solution for Track 1 and Track 2 of the VisDrone2019 dataset.
\begin{table*}[]
\caption{Comparison of the proposed solution against top-performing techniques on the Track 1 challenge data (n = 3,190 images). In the table, an entry in \textcolor{red}{red} depicts the best performance in that particular metric when compared to other competing algorithms.}
\centering
\begin{tabular}{ccccccccc}
\hline
Position & Method & AP{[}\%{]} & AP50{[}\%{]} & AP75{[}\%{]} & AR1{[}\%{]} & AR10{[}\%{]} & AR100{[}\%{]} & AR500{[}\%{]} \\ \hline
1 & DPNet-ensemble & {\color[HTML]{FE0000} 29.62} & 54 & {\color[HTML]{FE0000} 28.7} & 0.58 & 3.69 & 17.1 & 42.37 \\ \hline
2 & RRNet & 29.13 & {\color[HTML]{FE0000} 55.82} & 27.23 & 1.02 & {\color[HTML]{FE0000} 8.5} & {\color[HTML]{FE0000} 35.19} & 46.05 \\ \hline
3 & ACM-OD & 29.13 & 54.07 & 27.38 & 0.32 & 1.48 & 9.46 & 44.53 \\ \hline
4 & S+D & 28.59 & 50.97 & 28.29 & 0.5 & 3.38 & 15.95 & 42.72 \\ \hline
5 & BetterFPN & 28.55 & 53.63 & 26.68 & 0.86 & 7.56 & 33.81 & 44.02 \\ \hline
6 & HRDet & 28.39 & 54.53 & 26.06 & 0.11 & 0.94 & 12.95 & 43.34 \\ \hline
7 & \textbf{CN-DhVaSa(ours)} & 27.83 & 50.73 & 26.77 & 0 & 0.18 & 7.78 & {\color[HTML]{FE0000} 46.81} \\ \hline
20 & TridentNet & 22.51 & 43.29 & 20.5 & {\color[HTML]{FE0000} 1.17} & 8.3 & 28.98 & 39.84 \\ \hline
\end{tabular}
\label{track1_metricwise}
\end{table*}
\begin{table*}[]
\caption{Comparison of the proposed solution against top-performing techniques on the Track 2 challenge data (n = 33 sequences). In the table, an entry in \textcolor{red}{red} depicts the best performance in that particular metric when compared to other competing algorithms.}
\centering
\begin{tabular}{lllllllll}
\hline
Position & Method & AP{[}\%{]} & AP50{[}\%{]} & AP75{[}\%{]} & AR1{[}\%{]} & AR10{[}\%{]} & AR100{[}\%{]} & AR500{[}\%{]} \\ \hline
1 & DBAI-Det & {\color[HTML]{FE0000} 29.22} & {\color[HTML]{FE0000} 58} & {\color[HTML]{FE0000} 25.34} & {\color[HTML]{FE0000} 14.3} & {\color[HTML]{FE0000} 35.58} & {\color[HTML]{FE0000} 50.75} & {\color[HTML]{FE0000} 53.67} \\ \hline
2 & AFSRNet & 24.77 & 52.52 & 19.38 & 12.33 & 33.14 & 45.14 & 45.69 \\ \hline
3 & HRDet+ & 23.03 & 51.79 & 16.83 & 4.75 & 20.49 & 38.99 & 40.37 \\ \hline
4 & VCL-CRCNN & 21.61 & 43.88 & 18.32 & 10.42 & 25.94 & 33.45 & 33.45 \\ \hline
5 & \textbf{CN-DhVaSa(ours)} & 21.58 & 48.09 & 16.76 & 12.04 & 29.6 & 39.63 & 40.42 \\ \hline
\end{tabular}
\label{track2_metricwise}
\end{table*}
\begin{table*}[]
\caption{Category wise performance of top performing algorithms on Track 1 and Track 2 challenge data.}
\centering
\begin{tabular}{ccccccccccc}
\hline
Input & ped & people & bicycle & car & van & truck & tricycle & awn & bus & motor \\ \hline
Image & 31.05 & 12.99 & 9.08 & 51.92 & 38.33 & 31.14 & 24.24 & 21.06 & 40.94 & 20.35 \\ \hline
Video & 27.86 & 6.59 & 12.47 & 33.92 & 29.91 & 40.55 & 13.99 & 12.91 & 24.48 & 6.98 \\ \hline
\end{tabular}
\label{track1andtrack2}
\end{table*}
\section{Conclusion}
Typically, one-stage detectors such as YOLOv3 and SSD do not perform particularly well at small object detection, specifically detection in aerial imagery. In this paper, we make use of CenterNet for the detection of objects in images and videos.
\par We evaluate various backbone networks such as ResNet-18, DLA-34 and Hourglass-104. From the experiments, we observe that the Hourglass-104 backbone produced the best performance when compared to the other networks and a standard YOLOv3.
\par We observe that test-time augmentation such as horizontal flipping of image and multi-scale testing aid in enhancing the overall performance of the model.
\par On the challenge data provided by the organizers, the solution attained 7$^{th}$ position for the task of detecting objects in images. When compared to other techniques, the solution attained the best average recall with 500 detections per image.
Without any fine-tuning or re-training, the model used for detecting objects in images was applied to Track 2 (detection of objects in videos). On the leader-board for object detection in videos, the solution achieved a competitive 5$^{th}$ position.
{\small
\bibliographystyle{ieee}
Bözbergtunnel may refer to:
a motorway tunnel between Basel and Zürich, see Autobahn A3 (Schweiz) #Streckenverlauf
two railway tunnels, see Bözbergstrecke #Neuer Bözbergtunnel
The Vacomagi were a Celtic tribe in Scotland known only from a mention in Claudius Ptolemy's Geographia. Based on this description and the approximate location of their neighbours, they lived in the landscape along the River Spey in present-day Moray and on the corresponding stretch of Scotland's north coast. According to Ptolemy, their towns or principal settlement areas were called "Bannatia", "Tamia", "Pinnata Castra" and "Tuesis".
See also
List of Celtic tribes
Source
Claudius Ptolemy, Geographia, Book 2, Chapter 2: Albion island of Britannia, LacusCurtius website of the University of Chicago, 2008, retrieved 23 April 2010
Literature
References and footnotes
Picts
Celtic tribe
// Copyright 2015 Eivind Vegsundvåg
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package ninja.eivind.hotsreplayuploader.versions;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
import java.time.Instant;
/**
* Utility model for communicating with Github.com's release API. Contains only the fields we need.
*/
@JsonIgnoreProperties(ignoreUnknown = true)
public class GitHubRelease {
@JsonProperty
private String id;
@JsonProperty("tag_name")
private String tagName;
@JsonProperty("html_url")
private String htmlUrl;
@JsonProperty
private Boolean prerelease;
@JsonProperty("published_at")
private Instant publishedAt;
public GitHubRelease() {
}
public GitHubRelease(final String tagName, final String htmlUrl, final boolean prerelease) {
this.tagName = tagName;
this.htmlUrl = htmlUrl;
this.prerelease = prerelease;
}
@Override
public String toString() {
return "GitHubRelease{" +
"id='" + id + '\'' +
", tagName='" + tagName + '\'' +
", htmlUrl='" + htmlUrl + '\'' +
", prerelease=" + prerelease +
", publishedAt=" + publishedAt +
'}';
}
public Instant getPublishedAt() {
return publishedAt;
}
public void setPublishedAt(final Instant publishedAt) {
this.publishedAt = publishedAt;
}
public String getId() {
return id;
}
public void setId(final String id) {
this.id = id;
}
public String getTagName() {
return tagName;
}
public void setTagName(final String tagName) {
this.tagName = tagName;
}
public String getHtmlUrl() {
return htmlUrl;
}
public void setHtmlUrl(final String htmlUrl) {
this.htmlUrl = htmlUrl;
}
public boolean isPrerelease() {
if (prerelease == null) {
return false;
}
return prerelease;
}
public void setPrerelease(final boolean prerelease) {
this.prerelease = prerelease;
}
}
Q: VB.Net Update doesn't update my database

This is the code that I am trying to run. It will run without errors, but it does not update my database.
It will work when it is not Parameterized, but when I add parameters in it starts acting up. Here is the problematic code.
Public Sub updateItem()
Dim sqlConnection1 As New OleDb.OleDbConnection(dbProvider + dbSource)
Dim cmd As New OleDb.OleDbCommand
cmd.CommandText = "Update Inventory set PartNumber='@PartNumber', Brand='@Brand', PartDescription='@PartDescription', PartCost=@PartCost, InventoryOnHand=@InventoryOnHand, PartSupplier='@PartSupplier' where PartNumber = '@PartNumMatch' and Brand = '@PartManMatch';"
cmd.Parameters.AddWithValue("@PartNumber", partNumberText.Text().ToUpper())
cmd.Parameters.AddWithValue("@Brand", ManufacturerText.Text())
cmd.Parameters.AddWithValue("@PartDescription", partDescriptionText.Text())
cmd.Parameters.AddWithValue("@PartCost", Convert.ToDouble(partCostText.Text()))
cmd.Parameters.AddWithValue("@InventoryOnHand", Convert.ToInt32(quantityText.Text()))
cmd.Parameters.AddWithValue("@PartSupplier", partSupplierText.Text())
cmd.Parameters.AddWithValue("@PartNumMatch", partNumberText.Text().ToUpper().Trim())
cmd.Parameters.AddWithValue("@PartManMatch", ManufacturerText.Text().ToUpper().Trim())
cmd.CommandType = CommandType.Text
cmd.Connection = sqlConnection1
Try
sqlConnection1.Open()
cmd.ExecuteNonQuery()
sqlConnection1.Close()
Catch ex As Exception
MessageBox.Show(ex.Message)
sqlConnection1.Close()
End Try
'SQl statement to try to update the selected row's data matched against the database.
'update listview here.
End Sub
I am almost sure that the syntax is correct because my insert works. Here is the code to my insert.
Private Sub addItem()
'SQL statement here to add the item into the database, if successful, move the information entered to listview.
Dim sqlConnection1 As New OleDb.OleDbConnection(dbProvider + dbSource)
Dim cmd As New OleDb.OleDbCommand
'Dim reader As SqlDataReader
cmd.CommandText = "Insert into Inventory ([PartNumber], [Brand], [PartDescription], [PartCost], [InventoryOnHand], [PartSupplier]) values (@PartNumber, @Brand, @PartDescription, @PartCost, @InventoryOnHand, @PartSupplier);"
cmd.Parameters.AddWithValue("@PartNumber", partNumberText.Text().ToUpper().Trim())
cmd.Parameters.AddWithValue("@Brand", ManufacturerText.Text().ToUpper().Trim())
cmd.Parameters.AddWithValue("@PartDescription", partDescriptionText.Text().Trim())
cmd.Parameters.AddWithValue("@PartCost", partCostText.Text())
cmd.Parameters.AddWithValue("@InventoryOnHand", quantityText.Text())
cmd.Parameters.AddWithValue("@PartSupplier", partSupplierText.Text().Trim())
cmd.CommandType = CommandType.Text
cmd.Connection = sqlConnection1
Dim found As Boolean = False
Try
sqlConnection1.Open()
cmd.ExecuteNonQuery()
MessageBox.Show(cmd.CommandText)
sqlConnection1.Close()
Catch ex As Exception
MessageBox.Show(ex.Message)
sqlConnection1.Close()
End Try
End Sub
I know that the where clause is right, I have hard-coded the value's and I have also pushed the value's being compared to message box's and compared them directly to the information in the database.
Thanks in advance for any and all opinions and I hope we can get it figured out.
A: The parameter placeholders should not be enclosed in single quotes
cmd.CommandText = "Update Inventory set PartNumber=@PartNumber, Brand=@Brand, " +
"PartDescription=@PartDescription, PartCost=@PartCost, " +
"InventoryOnHand=@InventoryOnHand, PartSupplier=@PartSupplier " +
"where PartNumber = @PartNumMatch and Brand = @PartManMatch;"
You don't need to do that; it only confuses the code that tries to replace the parameter placeholders with the actual values. Quoted placeholders will be treated as literal strings.
A: Try this,
cmd.CommandText = "Update Inventory set PartNumber=@PartNumber, Brand=@Brand, " +
"PartDescription=@PartDescription, PartCost=@PartCost, " +
"InventoryOnHand=@InventoryOnHand, PartSupplier=@PartSupplier " +
"where PartNumber = @PartNumMatch and Brand = @PartManMatch;"
cmd.Parameters.AddWithValue("@PartDescription", partDescriptionText.Text())
cmd.Parameters.AddWithValue("@PartCost", Convert.ToDouble(partCostText.Text()))
cmd.Parameters.AddWithValue("@InventoryOnHand", Convert.ToInt32(quantityText.Text()))
cmd.Parameters.AddWithValue("@PartSupplier", partSupplierText.Text())
cmd.Parameters.AddWithValue("@PartNumMatch", partNumberText.Text().ToUpper().Trim())
cmd.Parameters.AddWithValue("@PartManMatch", ManufacturerText.Text().ToUpper().Trim())
MISUSE OF DRUGS ACT, 2016
2. Interpretation.
3. Classification and designation of controlled drugs and precursors.
4. Legitimate activity involving controlled drugs.
OFFENCES INVOLVING CONTROLLED DRUGS
5. Importation and exportation.
6. Manufacture and cultivation.
7. Trafficking.
8. Possession, purchase and use.
9. Possession with intent to traffic.
10. Organisation, management and financing of drug trafficking.
11. Use of premises to commit offence.
12. Diversion of precursors, equipment and material.
13. Regulation of precursors.
14. Inspections of persons and establishments.
15. Aiding or attempting the commission of offence.
16. Conspiracy to commit offence.
EVIDENCE AND INVESTIGATION
17. Certificate relating to controlled drug.
18. Certificate of foreign law.
19. Presumption of intent to traffic.
20. Presumption of possession.
21. Presumption relating to premises.
22. Presumption relating to incoming vessel or aircraft.
23. Presumption relating to vehicle etc.
24. Presumption of use.
25. Power of search and seizure.
26. Power of arrest.
27. Procedure following seizure.
28. Secure destruction of controlled drugs, plants and seeds.
29. Urine and blood samples.
30. Fingerprints, measurements and photographs.
31. Protection of informers.
32. Undercover officer.
33. Powers of investigation.
34. Controlled delivery.
35. Obstruction of justice.
COURT PROCEDURE FOR DRUG USERS
36. Identification of drug users and drug dependent persons.
37. Assessing drug dependency.
38. Dealing with drug users.
39. Dealing with drug dependent persons.
40. Court-ordered admission to approved institution.
ALTERNATIVE MEASURES FOR DRUG USERS
41. Formal caution for controlled drug.
42. Indicative quantities for personal consumption.
43. Drug dependent persons not charged with an offence.
44. Voluntary admission for treatment and rehabilitation.
45. Voluntary admission to residential programme.
46. Agreement to drug testing.
47. Sentencing for offences under this Act.
48. Aggravating factors.
49. Mitigating factors.
50. Travel restriction order.
51. Transitional provision.
52. Jurisdiction.
54. Regulations.
55. Repeal and savings.
THIRD SCHEDULE
FOURTH SCHEDULE
SI 34 of 2016.
AN ACT to provide for effective measures against abuse and diversion of controlled drugs and precursors; facilitate the investigation and prosecution of offences involving controlled drugs, in particular drug trafficking; promote the treatment, education, rehabilitation, recovery and social reintegration of drug users and drug dependent persons; ensure the availability of controlled drugs for legitimate medical and scientific use; facilitate implementation of Seychelles' commitments under the international drug control conventions and for matters connected therewith or incidental thereto.
[Date of Commencement: 1st June 2016]
This Act may be cited as the Misuse of Drugs Act, 2016.
In this Act, unless the context otherwise requires—
"1961 Convention" means the Single Convention on Narcotic Drugs of 1961 as amended by its 1972 Protocol;
"1971 Convention" means the Convention on Psychotropic Substances of 1971;
"1988 Convention" means the United Nations Convention against Illicit Traffic in Narcotic Drugs and Psychotropic Substances of 1988;
"approved facility" means a place declared under this Act to be an approved facility for the purposes of drug testing, the assessment of drug dependency, or the provision of outpatient treatment or harm reduction services to drug dependent persons, including syringe and needle exchange programmes;
"approved institution" means a place declared under this Act to be an approved institution for the purposes of inpatient treatment and rehabilitation of drug dependent persons or residential education and social reintegration programmes for drug users;
"article liable to seizure" means anything whatsoever, including cash and instrumentalities of use, cultivation or manufacture, by means of or in respect of which an offence under this Act or a related money laundering offence has been or is being committed, or which contains or constitutes evidence of such an offence;
"cannabis" means any part, excluding seeds, of a plant of the genus cannabis from which the resin has not been extracted, by whatever name it may be designated;
"cannabis plant" means a plant of the genus Cannabis;
"cannabis resin" means the separated resin, whether crude or purified, obtained from the cannabis plant;
"chief officer of NDEA" means a person appointed under section 12(1) of the NDEA Act;
"child" means a person who has not attained the age of majority;
"Class A drug", "Class B drug" or "Class C drug" means a controlled drug specified in Part I, II or III respectively of the First Schedule in accordance with section 3;
"Commissioner of Police" means a person appointed under Article 160(1) of the Constitution;
"controlled delivery" means the practice of allowing unlawful or suspicious consignments of controlled drugs or articles liable to seizure to pass into, within, out of, or through Seychelles, with the knowledge and under the supervision of NDEA or police, with a view to the investigation and identification of persons involved in offences under this Act;
"controlled drug"—
(a) means all narcotic drugs, whether synthetic or natural, plants and preparations classified in Schedules I, II, III and IV of the 1961 Convention, and all psychotropic substances classified in Schedules I, II, III and IV of the 1971 Convention, which are specified in the First Schedule to this Act; and
(b) includes any other narcotic or psychotropic substance, plant or preparation, including any new psychoactive substance, which may be included in the First Schedule to this Act by the Minister under section 3(2); but
(c) does not include a preparation containing a Class B or Class C drug that is compounded in such a way as to present no or a negligible risk of abuse and from which the substance cannot be recovered by readily applicable means;
"corresponding law" means a law in force providing for the control and regulation in a country other than Seychelles of the manufacture, trafficking, use, export or import of a narcotic or psychotropic substance, preparation or product in pursuance of the Conventions or any other treaty, agreement or arrangement to which the Republic and the government of that country are parties;
"cultivation by enhanced indoor means" in relation to a controlled drug, means cultivation of a plant inside a building or a structure involving at least one of the following processes—
(a) nurturing the plant in nutrient enriched water, with or without mechanical support;
(b) application of an artificial source of light or heat;
(c) suspending the plant's roots and spraying them with nutrient solution;
"document" has the meaning ascribed to it by section 2 of the Evidence Act;
"drug dependent person" means a person who through the use of a controlled drug has developed a psychological or physical dependency upon the effect of that drug; and
"dependency on a controlled drug" has a corresponding meaning;
"informer" means a person who has given information to NDEA or police with respect to an offence under this Act;
"manufacture" includes any process of production of a controlled drug or the refining or transformation of one controlled drug into another;
"Minister" means the Minister responsible for Home Affairs;
"NDEA'' means the National Drugs Enforcement Agency constituted by the NDEA Act;
"NDEA Act" means the National Drugs Enforcement Agency Act, 2008;
"NDEA agent" means a person appointed under section 13(1) of the NDEA Act;
"new psychoactive substance" means a substance of abuse, either in a pure form or a preparation, that is not controlled by the 1961 Convention or the 1971 Convention but which may pose a public health threat;
"officer" means a police officer or NDEA agent;
"offender" means a person who has been convicted of an offence under this Act;
"organised criminal group" means a structured group of three or more persons, existing for a period of time and acting in concert with the aim of committing one or more act which constitutes criminal conduct as specified in paragraph (a) to (d) of section 3(9) of the Anti-Money Laundering Act, 2006;
"police" means the Seychelles Police Force constituted by the Police Force Act;
"precursor" means the substances and preparations thereof frequently used in the unlawful manufacture of controlled drugs, as classified in Tables I and II of the 1988 Convention, which are listed in the Third Schedule to this Act, but does not include a preparation containing a precursor that is compounded in such a way that the substance cannot be recovered by readily applicable means;
"repealed Act" means the Misuse of Drugs Act, 1990;
"traffic" means—
(a) to sell, broker, supply, transport, send, deliver or distribute;
(b) to offer to do anything mentioned in paragraph (a); or
(c) to do or offer to do any act preparatory to or for the purposes mentioned in paragraph (a); and
"trafficking" has a corresponding meaning; and
"undercover officer" means a person authorised under section 32(1)(a) of this Act.
(1) Controlled drugs and preparations thereof shall be classified in the First Schedule to this Act according to the degree of control to which they should be subject, as follows—
(a) Class A: Drugs that are subject to special measures of control in view of the particular harms that their non-medical or non-scientific use can cause, including those classified in Schedule IV of the 1961 Convention and in Schedule I of the 1971 Convention;
(b) Class B: Drugs having a medical and/or scientific use which should be subject to control in view of the harms that their non-medical or non-scientific use can cause, including those classified in Schedule II of the 1971 Convention, and in Schedule II and Schedule I of the 1961 Convention, except the drugs included in its Schedule IV;
(c) Class C: Drugs having a medical and/or scientific use which should be subject to control in view of the harms that their non-medical or non-scientific use can cause, but of a less substantial degree than Schedule II drugs, including those preparations classified in Schedule III of the 1961 Convention and in Schedule III and Schedule IV of the 1971 Convention.
(2) The Minister may in consultation with the Minister responsible for health amend the First and Third Schedules.
(3) The Minister may, in consultation with the Minister responsible for health by notice published in the Gazette appoint an independent advisory body to review the First and Third Schedules on a continuing basis and recommend amendments as appropriate, including new inclusions, deletions, or transfer of controlled drugs from one class to another.
(1) A controlled drug may be manufactured, imported or exported, and dealt with in Seychelles for medical or scientific purposes in accordance with regulations made under this Act.
(2) In any proceedings under this Act a person claiming to have acted pursuant to a provision of this Act or to regulations made under subsection (1) shall bear the burden of proving that fact.
A person who imports or exports a controlled drug in contravention of this Act commits an offence and is liable on conviction to the penalty specified in the Second Schedule.
(1) A person who manufactures a controlled drug in contravention of this Act commits an offence and is liable on conviction to the penalty specified in the Second Schedule.
(2) A person who cultivates a controlled drug in contravention of this Act commits an offence and is liable on conviction to the penalty specified in the Second Schedule.
(3) A person who possesses or purchases any instrument, utensil, apparatus or equipment intended to facilitate the manufacture of a controlled drug in contravention of this Act commits an offence and is liable on conviction to the penalty specified in the Second Schedule.
(4) Where an offence of cultivation under subsection (2) is committed using enhanced indoor means, the Court shall treat the offence as aggravated in nature.
(1) A person who traffics in any quantity of a controlled drug, whether on his or her own behalf or on behalf of another person, whether the other person is in Seychelles or not, in contravention of this Act commits an offence of trafficking and is liable on conviction to the penalty specified in the Second Schedule.
(2) A person who traffics in a substance, preparation or product which purports to be a controlled drug but is not, or which purports to be a controlled drug but is so low in purity as not to be usable as such, whether on his or her own behalf or on behalf of another person, whether the other person is in Seychelles or not, also commits an offence of trafficking and is liable on conviction to the penalty specified for an offence under subsection (1).
(3) Where a person is charged with an offence under this section and the Court is of the opinion that the person is not guilty of that offence but is guilty of an offence under section 8 or section 9, the Court may convict the person of the offence under section 8 or section 9 even though the person was not charged with that offence.
(4) Where a person is convicted of an offence of trafficking in more than 1.5 kilogrammes of cannabis or cannabis resin or more than 250 grammes of any other controlled drug, the Court shall treat the offence as aggravated in nature.
(1) A person who possesses, purchases, or uses a controlled drug in contravention of this Act commits an offence and is liable on conviction to the penalty specified in the Second Schedule.
(2) A person who possesses or purchases any pipe, syringe, utensil, apparatus or other article intended to facilitate the use of a controlled drug in contravention of this Act commits an offence and is liable on conviction to the penalty specified in the Second Schedule.
(3) Notwithstanding subsections (1) and (2), a person who possesses a clean syringe or needle obtained from an approved facility pursuant to a regulated exchange programme, or who uses a controlled drug by means of such syringe or needle, does not contravene this Act if the syringe or needle—
(a) is, or was immediately before use, in its original packaging;
(b) is possessed or used only by the person who obtained it from the approved facility; and
(c) where applicable, is declared to an officer before any search is conducted.
(1) A person who possesses a controlled drug, whether lawfully or not, with intent to traffic in contravention of this Act commits an offence of trafficking and is liable on conviction to the penalty specified for an offence under section 7(1).
(2) Where a person is charged with an offence under this section and the Court is of the opinion that the person is not guilty of that offence but is guilty of an offence under section 8, the Court may convict the person of the offence under section 8 even though the person was not charged with that offence.
A person who organises, manages or finances an offence under section 5, 6, 7 or 9 of this Act commits an offence of trafficking and is liable on conviction to the penalty specified for an offence under section 7(1).
(1) An owner, occupier or person in charge of or concerned with the management of any place or premises who permits or suffers such place or premises or any part thereof to be—
(a) used in connection with the import or export of a controlled drug in contravention of section 5;
(b) used for the manufacture or cultivation of a controlled drug in contravention of section 6; or
(c) acquired, maintained, or used for the purpose of trafficking in a controlled drug in contravention of section 7,
commits an offence and is liable on conviction to the penalty specified in the Second Schedule.
(2) An owner, occupier or person in charge of or concerned with the management of any place or premises who permits or suffers such place or premises or any part thereof to be acquired, maintained, or used for the purpose of the use of a controlled drug in contravention of section 8(1) commits an offence and is liable on conviction to the penalty specified in the Second Schedule.
(1) A person who manufactures, imports, exports, traffics, purchases, or possesses or has in his or her control a precursor or any equipment or material, including seeds—
(a) for the purpose of using it in or for the cultivation or manufacture of a controlled drug in contravention of section 6; or
(b) knowing that the precursor, equipment or material is to be used for a purpose specified in paragraph (a),
commits an offence and is liable on conviction to the penalty specified in the Second Schedule.
(2) Notwithstanding any other written law, an import or export permit shall not be granted for any precursor, equipment or material if there are reasonable grounds to suspect that the consignment is destined for the cultivation or manufacture of a controlled drug in contravention of this Act.
(1) This section applies to every person who manufactures, imports, exports, trades or distributes whether in wholesale or retail any precursor.
(2) A person referred to in subsection (1) shall enter in a register any acquisition or transfer of any precursor at the time of acquisition or transfer, without leaving any blank space or erasing or overwriting any previous entry, indicating the date of the acquisition or transfer, the name and the quantity of the precursor acquired or transferred, and the name, address and occupation of both the purchaser and vendor, provided that a retailer need not enter the name or details of the purchaser.
(3) The register maintained under subsection (2) shall be kept for at least 5 years after the last entry, for presentation whenever required by the chief officer of NDEA or the Commissioner of Police or upon an order of Court.
(4) A person referred to in subsection (1) shall immediately notify the chief officer of NDEA or the Commissioner of Police of—
(a) any order or transaction that appears suspicious, particularly as regards the quantity of the precursor ordered or purchased, the repetition of such orders and purchases, modes of payment or transport used in connection with the order or purchase or any loss or theft; and
(b) any proposed export of a precursor, which notification shall in any event be no later than 7 working days prior to the export.
(1) Every person licensed to manufacture, import, export, transport, trade or distribute whether in wholesale or retail any precursor or any equipment or material designed or known to be used in the cultivation or manufacture of controlled drugs shall be subject to, and shall provide all reasonable assistance to facilitate, inspections carried out at least every 2 years in such manner as may be prescribed.
(2) Where there are reasonable grounds to suspect that any precursor, equipment or material, including seeds, is to be used in the cultivation or manufacture of a controlled drug in contravention of this Act, an officer may, without a warrant, seize that precursor, equipment or material and detain it in accordance with this Act.
(3) The nature and quantity of any precursor seized under subsection (2) shall be recorded and reported to the chief officer of NDEA or the Commissioner of Police.
(1) A person who—
(a) aids, abets, counsels, incites or procures another person to commit an offence under this Act;
(b) does or omits to do any act for the purpose of enabling another person to commit an offence under this Act; or
(c) attempts to commit or does any act preparatory to or in furtherance of the commission of an offence under this Act,
commits an offence and is liable to the punishment provided for the offence.
(2) A person who aids, abets, counsels, incites or procures the commission in any place outside Seychelles of an act which would if done in Seychelles constitute an offence under this Act, and which is punishable under a corresponding law in that place, commits an offence and is liable on conviction to the penalty specified in the Second Schedule.
A person who agrees with another person or persons that a course of conduct shall be pursued which, if pursued—
(a) will necessarily amount to or involve the commission of an offence under this Act by one or more of the parties to the agreement; or
(b) would necessarily amount to or involve the commission of an offence under this Act by one or more of the parties to the agreement but for the existence of facts which renders the commission of the offence impossible,
commits an offence and is liable to the punishment provided for the offence.
(1) The Minister may, for the purposes of this Act appoint a forensic analyst or other expert person for examining, testing and certifying suspected controlled drug, plant, seed and other articles seized under this Act.
(2) A certificate purporting to be signed by a forensic analyst or other expert person appointed by the Minister under subsection (1) and purporting to relate to a controlled drug, plant or seed or to a sample thereof, shall be admitted in evidence in any proceedings for an offence under this Act, on its production by the prosecution without proof of signature and, until the contrary is proved, the certificate shall be prima facie evidence of all matters contained therein.
(3) A forensic analyst or other expert person signing a certificate under subsection (2) shall not be required to attend Court or give evidence unless a notice for attendance is filed in Court and served on the Attorney General at least 21 days before the date fixed for trial, which notice shall specify the grounds on which the person's attendance is required.
(4) Where a notice has been served under subsection (3), but the Court is of the view that the grounds specified in the notice do not raise a genuine issue about the evidential value of the certificate, the Court may direct that the attendance of the person is not required.
A certificate purporting to be issued by or on behalf of the government of a country other than Seychelles and purporting to state the terms of a corresponding law in that country shall be admitted in evidence, in any proceedings for an offence under this Act, on its production by the prosecution without proof of signature and the certificate shall be conclusive evidence—
(a) that it is issued by or on behalf of the government of that country;
(b) that the terms of the law are as stated in the certificate; and
(c) that any facts stated in the certificate as constituting an offence under the corresponding law do constitute the offence.
(1) A person who is proved or presumed to have had in his or her possession or custody or under his or her control—
(a) 100 grammes or more of opium;
(b) 3 grammes or more of morphine;
(c) 2 grammes or more of diamorphine (heroin) or cocaine; or
(d) 25 grammes or more of—
(i) cannabis; or
(ii) cannabis resin,
shall be presumed, until the person proves the contrary, to have had the controlled drug in his or her possession with intent to traffic in contravention of section 9 of this Act.
(2) Where the presumption in subsection (1) is not engaged, it shall be a question of fact whether a person possessed any controlled drug with intent to traffic.
(3) In determining whether a controlled drug was possessed with intent to traffic under subsection (1) or subsection (2), the Court shall have regard to all relevant circumstances, including where applicable any evidence that the person has engaged in a deliberate pattern of activity whereby amounts in his or her possession at any time are maintained at a level below a threshold specified in subsection (1).
(1) A person who is proved to have had in his or her possession or custody or under his or her control—
(a) anything containing a controlled drug;
(b) the key of anything containing a controlled drug;
(c) the key of any place or premises or any part thereof in which a controlled drug is found, or
(d) a document of title relating to a controlled drug, or any other document intended for the delivery, or which would require the delivery to the person, of a controlled drug,
shall be presumed, until the person proves the contrary, to have possessed the controlled drug.
(2) The fact that a person never had physical possession of a controlled drug shall not be sufficient to rebut the presumption in subsection (1).
(3) Where one of two or more persons with the knowledge and consent of the other person or persons has any controlled drug in that person's possession, all of the persons shall be deemed to be in possession of the controlled drug.
(1) Where a pipe, syringe, utensil, apparatus or other article intended for the use of a controlled drug is found in any place or premises, it shall be presumed, until the contrary is proved, that the place or premises is used for the purpose of the use of a controlled drug.
(2) A person found in or escaping from any place or premises which is proved or presumed to be used for the purpose of the use of a controlled drug shall, until the person proves the contrary, be presumed to have used a controlled drug in the place or premises.
(3) A person found in or escaping from any place or premises in which plants are being cultivated in contravention of section 6 shall be presumed, until the person proves the contrary, to have been cultivating the plants.
(4) A person found in or escaping from any place or premises in which a controlled drug is being manufactured in contravention of section 6 shall be presumed, until the person proves the contrary, to have been manufacturing the controlled drug.
Where a controlled drug is found in any vessel or aircraft arriving from any place outside Seychelles, it shall be presumed, until the contrary is proved, that the controlled drug has been imported with the knowledge of the master or captain.
Where a controlled drug is found in a vehicle, vessel or aircraft, other than a vessel or aircraft referred to in section 22, it shall be presumed, until the contrary is proved, that the controlled drug is in the possession of the owner of the vehicle, vessel or aircraft and of the person in charge of the vehicle, vessel or aircraft for the time being.
Where a controlled drug is found in the urine or blood of a person as a result of a test carried out under this Act, the Road Transport Act, or the Criminal Procedure Code, the person shall be presumed, until the person proves the contrary, to have used the controlled drug.
(1) An officer may at any time, without a warrant—
(a) stop and search any person whom the officer reasonably suspects of having in his or her possession a controlled drug or an article liable to seizure;
(b) enter and search any place or premises in which the officer reasonably suspects that there is to be found a controlled drug or an article liable to seizure; and
(c) search any person found in the place or premises referred to in paragraph (b).
(2) An officer or an officer of customs may at any time, without a warrant—
(a) stop, board and search any vessel, aircraft or vehicle if the officer reasonably suspects that there is to be found in the vessel, aircraft or vehicle a controlled drug or an article liable to seizure under this Act;
(b) search any person found in a vessel, aircraft or vehicle referred to in paragraph (a); and
(c) stop and search any person entering or leaving Seychelles whom the officer reasonably suspects to have committed an offence under this Act.
(3) An officer or an officer of customs exercising functions under subsection (1) or subsection (2)—
(a) may, with such assistance as the officer deems necessary in the circumstances, use such force as is reasonably necessary in the circumstances;
(b) shall ensure that any woman searched is searched by a female officer;
(c) shall seize and detain any controlled drug; and
(d) may seize and detain any article liable to seizure and any vessel, aircraft or vehicle in which a controlled drug or article liable to seizure has been found.
(1) An officer may arrest without warrant a person who has committed, or whom the officer reasonably suspects to have committed, an offence under this Act and may search the person arrested.
(2) An officer exercising functions under subsection (1), when making an arrest—
(a) shall ensure that any woman searched is searched by a female officer;
Industry 4.0, industrial robots & 3D printing are on the cusp of changing how manufacturing is done
The future's so bright
By Mike Bacidore
We love predictions. We love the way they create awe-inspiring futures that we either welcome with great anticipation or fear with crippling horror. The bigger and bolder, the better. If it's the future, it undoubtedly will be spectacular.
I still remember sitting in Mr. Carls' fifth-grade class at St. Patrick Grade School when he informed us all that we would be alive in the year 2000. We all gasped and marveled at a time so distant, when summers would surely last forever and bubble gum would be free. Oh, how lucky we would be to live in the year 2000.
Hannover Extra
Dr. Irene Petrick, market innovation director, Internet-of-Things Group at Intel, will present details about the "Industry 4.0 Demands the Co-Evolution of Workers and Manufacturing Operations" study and how factories and manufacturing plants can better handle the transition to intelligent facilities at Hannover Messe on April 27 at 10 am in Hall 8, Stand D17.
Now that we've chugged past Y2K, not to mention the uneventful slide right through the end of the Mayan calendar in 2012, we seem to have steered clear of destruction and are headed straight toward a future of self-flying cars (finally!) and underwater coastal cities, thanks to the rising oceans as the polar ice caps melt. The future has never loomed larger.
Data and communication are sure to play a huge role in that future. There's no limit to the power of information and the algorithms and artificial intelligence that can turn it into actionable heroics. A recent report, "Industry 4.0 Demands the Co-Evolution of Workers and Manufacturing Operations," penned by Dr. Irene Petrick, market innovation director, and Dr. Faith McCreary, principal engineer, Internet-of-Things Group at Intel, indicates that Industry 4.0 and the Industrial Internet of Things (IIoT) are quickly altering how products are manufactured. But, according to the authors, who collected information from 145 manufacturing professionals, "When we envision intelligent factories of the future, we often put technology in a starring role, but technology alone will not ensure a successful transition to an intelligent factory." The success of that transition is in the hands of factory personnel.
"Fully 98% of the workers who participated believed that they had direct or indirect influence over technology adoption and implementation decisions. These individuals are potential allies in the path to the future, if only we can harness their interest in change," according to the report.
While many employees fear the impact of technology, futurists see a different path. The London School of Economics (LSE) published a study entitled "Robots at Work" on the use of industrial robots. "Productivity has improved by around 15% due to industrial robots," said Guy Michaels, LSE's head of research. "At the same time, the proportion of low-skilled labor dropped, and pay increased slightly. Industrial robots don't have any significant impact on the number of employees overall."
A recent study by the Centre for European Economic Research on behalf of the German Federal Ministry for Education and Research revealed similar findings in the country with the world's third-densest industrial robot workforce. The number of people employed in Germany reached 44 million in 2017, the highest figure since reunification (Figure 1). And the rapid spread of industrial robots hasn't made a dent in employment figures.
Figure 1: Workers use robots to carry out machine tending mainly on CNC machines.
(Source: Universal Robots)
"The modernization of production shifts hazardous, unhealthy and monotonous work to the machines," explains Junji Tsuda, president of the International Federation of Robotics. "In the vast majority of cases, only certain activities of a job are automated and not the entire spectrum of an employee's work."
As much as big data and industrial robots may change manufacturing, 3D printing could make an even bigger ripple, with its localized production and batch-of-one capabilities. No one knows what lies in store, but, like death and taxes, technology is inevitable. Buckle up. Here comes the future.
About the author: Mike Bacidore
Mike Bacidore is the editor in chief for Control Design magazine. He is an award-winning columnist, earning a Gold Regional Award and a Silver National Award from the American Society of Business Publication Editors. Email him at mbacidore@putman.net.
The history of the names of the days of the week is a tangled one. The Greeks named the days of the week after their gods, but when the Romans were supreme, they substituted the names of their favorite gods for the original Greek names. However, with English being a Germanic language, it's perhaps no surprise that our current week has several days named after Germanic gods.
Sunday was the day of the sun, whether you were Latin, Greek or Germanic, while Monday was the day of the moon. Tuesday is named after the God of War (who was Mars in Latin and Ares in Greek). However, the English form comes from Tiu/Tiwa, the Germanic/English name of the god of war and the sky.
Wednesday is named after Woden, the chief Teutonic god, who is similar to the Norse god Odin. In Latinate languages such as French and Spanish, this day is named after the messenger of the gods, Mercury. Thursday is named after Thor, the Norse god of thunder. In Latinate languages, this day is named after the chief Roman god, Jupiter, who created thunder and lightning.
Friday is named after the Teutonic goddess of love, fertility and beauty, Freya. In Latinate languages, this day is named after the Roman goddess Venus, who had similar responsibilities. Finally, Saturday is named after the planet Saturn.
The interesting thing is that the pattern is pretty much the same across different languages. For instance, Friday in Italian is Venerdì, which comes from Venere (Venus), the same goddess.
Yes, I found that interesting too, Daniel. It just shows how much the Romans influenced many modern languages.
Thanks, Ravi. That's really interesting. Makes you wonder where the Romans took their names from.
I knew that the weekdays were named in honor of pagan gods, but where the names originated was something I didn't know.
so our day off is Shabbat (Saturday) instead of Sunday.
Daniel Scocco and Ravi, it makes me wonder too. Perhaps someone has looked into the cultural chain between the Norse (and other northern) cultures and the Hindu culture. I mean, there are points where they match each other, such as the same or nearly the same names for their gods and goddesses.
Roshawn and Merav, it seems to be the same, or almost the same, in Islamic culture. But in Indonesia, we name the days of the week by ordinal number, or by the name each country uses for them. Friday is day 6 (6 = sittah in Arabic), but we call it JUM'AH or JUM'AT because it is God's gift to Muslims.
Biography
He represented at three consecutive editions of the Summer Olympic Games from to , winning two bronze medals in the 100 m backstroke and the 200 m backstroke.
After retiring, he became a swimming coach at Grenoble Alp'38 in France.
Career highlights
Olympics
Beijing 2008: bronze in the 100 m backstroke and the 200 m backstroke
World Championships
Barcelona 2003: silver in the 100 m backstroke and the 4×100 m medley relay.
Montreal 2005: silver in the 4×100 m medley relay.
Melbourne 2007: bronze in the 4×100 m medley relay.
World Championships (short course)
Indianapolis 2004: bronze in the 200 m backstroke and the 4×100 m medley relay.
European Championships
Budapest 2006: gold in the 100 m backstroke, the 200 m backstroke, and the 4×100 m medley relay.
Eindhoven 2008: gold in the 4×100 m medley relay, silver in the 200 m backstroke, and bronze in the 100 m backstroke.
European Championships (short course)
Trieste 2005: silver in the 50 m backstroke, the 100 m backstroke, and the 200 m backstroke.
Helsinki 2006: gold in the 100 m backstroke and the 200 m backstroke.
Istanbul 2009: gold in the 100 m backstroke.
European Junior Championships
Malta 2001: silver in the 4×100 m medley relay and bronze in the 200 m backstroke.
Linz 2002: gold in the 50 m backstroke and the 200 m backstroke; silver in the 100 m backstroke and the 4×100 m medley relay.
Q: Detecting when the iPhone's modem mode (personal hotspot) is turned on or off I need a way to detect whether the iPhone currently has modem mode (Wi-Fi hotspot) enabled.
When it's on, the system draws a large blue bar at the top of the screen, which pushes the content area down.
Sure, I could constantly poll the content area's size to detect changes, but that's not a good solution. Is there an event my app can receive so that I can react at that moment?
Thanks.
A: To be clear, do you need to detect whether personal hotspot is on, or are you simply trying to adjust your interface to an enlarged status bar?
To detect personal hotspot, there's a solution here that checks the device's network interfaces:
https://stackoverflow.com/a/16856241/2763891
As far as I know, the status bar doubles its regular size when:

* a phone call is going on
* personal hotspot is on
* an app is using the microphone in the background
* ...
Usually, view autosizing fits your view to the new size automatically. If you're positioning your views manually, UIApplicationDelegate provides a notification when the status bar's frame changes:
- (void)application:(UIApplication *)application didChangeStatusBarFrame:(CGRect)oldStatusBarFrame
Come out to Bethel CRC on Wednesday, November 28th for the big annual combined Youth Group fundraiser. Enjoy a savory potato bar, and take home some delicious baked goods from the Bake Sale. Food will be served from 5:00-7:00 p.m.
If you are bringing baked goods for the Bake Sale, please drop them off at Bethel CRC by 5:30 p.m. on Wednesday.
Funds raised this evening go toward Youth Group events, particularly the summer SERVE projects.
Worship: 10:00 a.m. & 6:00 p.m.
This update of tantra for the 21st century manages to be snappy and cutting-edge at the same time that it remains faithful to the profound truths of an infamously renegade spiritual tradition. For over a thousand years, Tantra has shocked and scandalized, and yet continually infiltrated and revitalized, the most ancient philosophies and religions of the world. In this little volume, you will learn why, and discover how it is even more relevant than ever in this period of drastic transformation and change. The principles of tantra lie at the heart of yoga, alchemy, Buddhism, holistic medicine, and sexuality, and they clarify the central dynamic that faces humanity as we attempt to attain a new planetary consciousness: one that promises to move us beyond linear time, three-dimensional space, and the confines of causality! Step into a new and electric way of living... Rudolph Ballentine, MD, is the author of Diet and Nutrition and Radical Healing and former president of the Himalayan Institute. He worked closely with and studied under the guidance of Swami Rama for 20 years.
The Slow Down Diet
_"Marc David is the most important voice to have established the close link between stress, digestion, metabolism, weight, and health. He reminds us that our relationship with food is as important as the food itself. In a world of 'high-frivolity' diets, he steps away from the crowd and shows us the way to nourishment, pleasure, and healing."_
MARK HYMAN, M.D.,
coauthor of _UltraMetabolism: The Simple Plan for Automatic Weight Loss_
_"The Slow Down Diet gives us the missing link of metabolism, and this frees us to enjoy food more than ever while losing weight and regaining our health. A million thanks, Marc David."_
CHRISTIANE NORTHRUP, M.D.,
author of _Mother-Daughter Wisdom: Creating a Legacy of Physical and Emotional Health_
The Slow Down Diet
Eating for pleasure, energy, and weight loss
MARC DAVID
Translation by Ramón Soto
Inner Traditions en Español
Rochester, Vermont
Acknowledgments
Although a book is a very personal creation, everyone who has contributed to my development or influenced my life will in some way notice their own glow in these pages. To Mark Hyman: thank you for being a great friend, powerful ally, soul brother, and partner in intellectual conspiracy. To Kathy Jackman: you have been the angel of my projects, a soul sister, and a cosmic cheerleader for reaching the truth. Douglas Brady has never wavered in his role as soul companion and warrior of the heart. Thank you for your presence. David Cohen has more than played the part of the brother I never had, while Rusty Cohen has magically defined himself as the special cousin I always wanted. Dharani Burnham has been a loving friend, sweet inspiration, and first-rate homemaker. Joan Berry has been a truly wise woman and an elegant healer. You have been wonderful.
Thanks to James "Kimo" Nelson, my older brother on the journey, ahu'i ho!; to Pier Paolo de Angelis, my Italian "lobello"; to Mark Kelso, magical masculine muse: I always keep you in mind, even when you are with me; to Kathie Swift and Jim Conzo, my favorite nutritionists in the world; to Tom Jackman, treasure-hunting companion for life.
I also thank other friends: Michael "Magic" Johnson, Arti Ross-Kelso, Jordan Blank, Toinette Lippe, Tara and Francis Grace, Gudni Gunarsson, Lisa "Dove" Settli, Brooke Loenig, Alexis Miles, Christopher Brinton, David Piver, Jonathan Kalman, Carolyn "Sudha" Lundeen, Lori Davis, Lorna Sass, Stephen and Sandy Muss, Dan and Deborah Howard, Carl Bendix, John Dekadt, Greg Zelonka, Alex Souri, Alex Bloomstein, Doug McKenzie, Esther Cohen, and Rivka Zahler. Special thanks to Stefanie Clements for being a project muse and an extraordinary graphic designer.
Much love to my family tribe, who have remained faithful and present: the Goldstones (Rhonda, Tony, Jonathan, and Andrew); together we have shared so many wonderful times. Brad Cohen always inspires me simply by being Brad, that is, by his generosity and warmth. Jeffrey Cohen always makes me smile and plot my next move. Rick Cohen is my favorite leader and leaves me speechless with his big heart. Hello there, Brendan and Courtney!
Other Cohens who deserve thanks: Jason, Mitchell, Jodi, Matthew, Ben, Pilar, Aunt Bunny, and Uncle David. I also carry in my heart Ceil Sherry, Arnie and Shelly Bengis, David Bengis, Reeva and Dennis Goldblum, and Gabriella Bengis, who has been for my son the best mother a man could ask for.
My deep gratitude to the staff, guests, and extended families of the Kripalu Center for Yoga and Health and of Canyon Ranch in the Berkshires: two wonderfully radiant lights in the world of healing and transformation. Sonoma State University has been a special place for educating mind and soul. A heartfelt shout-out to Joshua Rosenthal, author, friend, and extraordinary leader in nutrition, and to the incredible students and staff of the Institute for Integrative Nutrition. Big thanks to some of my favorite nutrition authors and experts: Jeff Bland, Ann Louise Gittleman, Sally Fallon, John Robbins, Ward Dean, Annmarie Colbin, Sid Baker, Leo Galland, Linda Page, and Andrew Weil. There is room for everyone under this roof.
Some of the most memorable thought leaders who have influenced me are James Hillman, Larry Dossey, Ken Wilber, Robert Bly, Lao Tzu, Oscar Wilde, and Martin Luther King. Michael Marcus, of the Japanese restaurant Bizen, has kept me well fed and in constant awe of the mystical side of sushi. Mana Foods, the Paia fish market, and the Haiku Delicatessen have also served me memorable meals. And Jacob's Pillow and its festival have allowed me to dance unforgettably.
Thanks to the Rock People, the Whale and Dolphin Nation, the Mets, the Patriots, the students and teachers of the Great Barrington Waldorf School and the Haleakala Waldorf School, the Jamaicans, Hera Dura, Apple Computer, Disney, the Peapod team at Johnson & Johnson, the Bad Dogs, the Venetians, and the Brazilians.
For me, places have often been as special and necessary as people. I thank the Green River, Monument Mountain, Kennedy Park, Big Beach, Little Beach, and Baldwin Beach, La Perouse, the Big Island, the "Gunks," Hawk Meadow, Belize, Boulder, Buena Vista, and Brooklyn.
I am very grateful to all the wonderful staff of Inner Traditions/Healing Arts Press, including Jon Graham, Susan Davidson, and Jeanie Levitan. Thank you for the excellent work you do.
My respect and honor to all my ancestors and relatives who are no longer with us: my grandparents Charles and Molly Cohen and Jack and Esther Weinstein; my uncles Sid, Jerry, and Eddy; and, of course, my dear parents, Sid and Rachel Cohen, who gave me a foundation of great love and devotion.
Finally, I thank the Creator of all that exists, for a beautiful life and for sending me Skye, my special son, rock-collecting companion, sports fan, and my favorite for life.
Contents
Acknowledgments
Preface
Introduction
WEEK 1: The Metabolic Power of Relaxation
WEEK 2: The Metabolic Power of Quality
WEEK 3: The Metabolic Power of Awareness
WEEK 4: The Metabolic Power of Rhythm
WEEK 5: The Metabolic Power of Pleasure
WEEK 6: The Metabolic Power of Thought
WEEK 7: The Metabolic Power of Story
WEEK 8: The Metabolic Power of the Sacred
POSTSCRIPT: Your Metabolic Journey
Notes
Bibliography
Preface
Perhaps you have never read, and never will read, another diet book like this one: an unprecedented book, capable of changing your life. The information you will find here will be at odds with much of what you have been told before about healthy eating and how to lose weight. It will call into question some of the most cherished advice the experts have offered. It will not impose yet another diet formula on you, or tell you precisely what to eat, when to eat it, or in what quantity. Nor will it seduce you with a system that is hard to follow and therefore doomed to be abandoned.
Instead, this book will show you how to optimize your metabolism no matter what you decide to eat. It will teach you how to access the wisdom of the greatest dietary authority on Earth: the nutritionist we all carry inside. In doing so, it will put you in touch with the information that matters most for your health, energy, and weight.
If you have tried to lighten your load by following every fad diet without any lasting results, this book will show you why they failed and what to do about it. If you feel frustrated and confused by the flood of contradictory nutrition systems filling the airwaves, these pages will give you the new perspective and the help you have been waiting for and definitely deserve. _The Slow Down Diet_ presents an eight-week program unlike any you have undertaken before: a program that is easy to follow and will produce significant, lasting, and profound changes in your body and your being. Ultimately, it will help you bring the gifts of the soul into your eating and, in doing so, awaken an inner flame that is the true source of your energy.
Enjoy it, for you are about to be reborn with a new metabolism.
Introduction
_Life cannot wait until the sciences have explained the universe scientifically... Life is fired at us point-blank._
JOSÉ ORTEGA Y GASSET
In Polynesian folklore, Maui is a demigod without equal; the beautiful Hawaiian island was named in his honor. Maui was a clever trickster of superhuman strength, and his most extraordinary and memorable feat was capturing the sun.
The trouble began shortly after Maui lifted the sky so that human beings could walk upright, and to make room for the sun to take an elevated position above the lower worlds. The sun selfishly proceeded to race across the sky instead of tracing an unhurried arc. As a result, humans had very little time to fish, farm, or dry their traditional tapa cloth. They grew sick and unhappy.
Following his grandmother's wise advice, Maui devised a plan to relieve human suffering. For many days he hid at the eastern edge of the tallest volcano, Haleakala, and charted the sun's daily course. Then he returned home, where he braided sixteen strong ropes from his sister's hair, intending to use his legendary strength to lasso the sun.
The next morning, when the sun rose over Haleakala and began its capricious flight across the sky, Maui caught the first ray that appeared and tied it to a sturdy wiliwili tree. Soon he had tied down all sixteen of the sun's rays.
Immobilized, the sun found itself at Maui's mercy and wisely agreed to strike a deal. In exchange for its life, the sun promised to cross the sky slowly and deliberately, so that human beings would have what they needed to feed themselves and thrive. Everyone was happy, and the sun felt so honored that it has kept its word to this day.
It is no coincidence that the sun has become a symbol of metabolism. It is the ultimate source of energy on planet Earth. We acknowledge its importance when we refer to the middle of the human torso (the center of metabolic activity in the body) as the solar plexus, Latin for "place where the sun is received." And, in recognition of the supreme importance of metabolism, we have been taught to go to great lengths to keep it efficient. Both metabolism and the sun benefit us most when we receive them in the right measure. If we overdo either one, we burn up or burn out.
_Slowing Down_
If you have decided to read this book, you are probably doing so because you want to speed up your metabolism: you want more metabolic energy to lose weight, look lean, be healthier, and have more energy. Yet even with all the latest diets, remedies, and weight-loss gadgets, most people do not get the results they want.
If you have tried to strengthen your metabolism without success, the main reason is that your life is moving at too fast a pace.
The dizzying pace of our culture is at odds with a happy, healthy life. We suffer an avalanche of bodily ailments and afflictions of the soul whose origin is very simple: the pace of life. I mean the accelerated tempo that carries us unconsciously through the day, pushing us beyond the body's natural capacity and leaving us unsatisfied and exhausted by day's end.
When we move through life at excessive speed, we inevitably eat quickly, which destroys the metabolism and causes digestive disorders. We then eat under the physiological stress response, which reduces our capacity to burn calories. Food gives us little pleasure, which diminishes energy production at the cellular level and drives us to eat more. It leaves us short of breath, which in turn reduces oxygen absorption and contributes to greater fat accumulation. Moreover, it drives us to abandon our true character and purpose for coming into the world, leaving us with toxic thoughts and disturbing emotions that age the body and harden the heart.
The strange thing about all this is that, despite our best intentions, we often try to remedy these ills with strategies that make us feel worse. It is ironic how mistaken we are in believing that the ills caused by our fast pace of life can be cured with quick fixes. We take digestive medications and painkillers that produce debilitating side effects. We punish ourselves with excessive exercise for the crime of gluttony. We abuse ourselves with difficult diets and deprive ourselves of the pleasure of food. And we submit to medical therapies that never truly address the real reasons our bodies malfunction.
Maui taught us a great lesson. To harness the sun's energy, he did not make it go faster but slower. He brought the sun into alignment with its natural course and rhythm and, in doing so, mastered a great metabolic force.
Are you ready to master your own metabolic power with this same wisdom?
Fortunately, there is an effective remedy for this sickness of hurry. It is called slowing down. We must work less to gain more. We must stop fighting food and start accepting it. We must stop punishing our bodies and start meeting their needs. We must slow down and enjoy ourselves in order to get the results we have been seeking... and we will get them much sooner than we expect.
The inescapable truth is that we can achieve and maintain optimal metabolism only when we eat, exercise, and live in an optimal emotional state. Our state of mind acts so directly on the metabolism that what we think and do profoundly influences how we digest food. Metabolic power is not only about what you eat but about who you are while you eat. And it is not only about how many calories you burn but about how much life inspires you.
Imagine, then, a relationship with food and with your body that nourishes and satisfies you every day. Imagine having the confidence to relax and enjoy the foods you choose to eat. Imagine how good you would feel if food were pure pleasure and exercise a delight. Imagine caring for yourself with healthy lifelong habits not because you have to, but because they actually make you feel good. If you are willing to choose that life, you are willing to choose "slowing down."
The slow down diet is about making life calmer in order to speed up the metabolism. By "slowing down" I mean having more awareness: being open. Centered. Present. Balanced. Create this experience for yourself, for your body and mind, and your breathing will naturally align in a state of synergy. Immediate changes will occur in the nervous, endocrine, and immune systems and in the network of neuropeptides throughout the body. As a result, you will burn calories at an optimal rate. You will digest and absorb nutrients with maximum efficiency. You will circulate oxygen and make it part of a combustion process that releases as much energy as possible. Your immune function will be strengthened. You will be able to step out of the paradigm of stress and tension and take on your own natural rhythm. The result is that you will feel more life, energy, and abundance. Add to this the choice of quality foods, and you will begin to create the kind of metabolism you know, deep down, is rightfully yours.
_A New Perspective on Nutrition_
As you prepare to see food and metabolism in a completely new way, you will leave behind several nutrition myths, among them the following:
**Myth #1: The best way to lose weight is to eat less and exercise more.**
Intuition tells us this formula is correct, but it is woefully incomplete. Most people find that this method fails again and again. If it worked in the long run, it would have worked long ago. As you will see, insufficient nourishment can slow the metabolism, and so can excessive exercise. Punishment gets us nowhere. True nourishment and joyful movement of the body will take you wherever you want to go.
**Myth #2: We overeat because we lack willpower.**
Fortunately, the experts are wrong here too. As you will discover, you have more willpower than you have ever imagined. We overeat not because we are weaklings but because we are physiologically driven to do so when our meals are deficient in relaxation, time, pleasure, awareness, and high-quality nutrients.
**Myth #3: As long as you eat the right foods in the right amounts, you will enjoy good health and lose weight.**
This principle seems scientifically grounded but has done more harm than good. As you will see, we can eat the healthiest foods in the universe in precisely the right amounts, but if we ingest them with anxiety and haste, the physiological stress response will sharply increase nutrient excretion and profoundly reduce our capacity to burn calories. What you eat is only half the equation of good nutrition. The other half lies in how you eat.
**Myth #4: The experts are the best source of reliable, scientifically accurate information on nutrition.**
If only this were true. We experts do occupy a high perch, but we love to disagree with one another and constantly change our minds. In reality, the most definitive nutritional knowledge lies literally inside your body, in what is called the enteric nervous system, or ENS: the brain in the gut. That is your most faithful and accurate daily dietary guide. The enteric nervous system has its own metabolic rules, which are the rules your body lives by. This inner expert will help you choose which outer experts to follow.
The principles you will learn in this book took shape over a lifetime of exploring food and healing. I have been fortunate to accumulate diverse experience in the world of nutrition. I was a lecturer and nutrition counselor for more than ten years at a wonderful, world-famous health resort, Canyon Ranch in the Berkshires. For more than fifteen years I was a counselor, workshop leader, and administrator at the Kripalu Center for Yoga and Health, one of the largest holistic health retreats in the United States and another incredible laboratory of healing and transformation. I did undergraduate and graduate studies in nutrition, earned my master's degree in the psychology of eating at Sonoma State University in California, received clinical training at Harvard in mind-body medicine, interned with numerous clinicians and healers using cutting-edge nutritional therapies, assisted in nutrition-related cancer research in the laboratories of Memorial Sloan-Kettering Cancer Center, and began an extensive professional career as a business consultant for food, vitamin, and health companies, which gave me experience in product development, branding, communications, and corporate health; I worked closely with renowned organizations such as Johnson & Johnson and the Walt Disney Corporation. As a clinician specializing in nutrition, I have had good results with children, the elderly, the rich and the poor, the healthy and the sick, prisoners and athletes. I have counseled people with biochemical and eating disorders, and a great many people who wanted to lose weight.
Each of our missions in the world is often defined by the personal path we must take. From birth I suffered from asthma and severe allergies and came close to dying several times. I was taken from one doctor to another, and none could give me relief. I could never run around like a normal child. My health was desperate. At age five I heard that fruits and vegetables were good for one's health. Until then my diet consisted basically of cocoa cereal for breakfast, Kool-Aid and marshmallow cream for lunch, and french fries and salami sandwiches for dinner. I asked my mother to buy apples and canned peas and carrots because, in my limited understanding, that was what I knew as fruits and vegetables.
Miraculously, my health began to improve and, as my mother helped me make other small changes in my diet, it recovered even more. That is how, from an early age, I made the connection that the foods I ate affected my health. Around that time my father, who became a chiropractor in 1965, was learning about vitamins and homeopathy and brought home piles of product samples. Trying all those pills was one of the high points of my childhood: it catapulted me to the next level of well-being. I became convinced that good nutrition was the key to well-being, and so began my lifelong fascination with food, healing, personal transformation, and metabolism.
_Una nueva definición de metabolismo_
Many people use the term _metabolism_, but few know what it means. In fact, if you asked a hundred doctors and nutritionists gathered in a room, "What is the definition of metabolism?" you would most likely get a hundred different answers. It is no surprise, then, that the average person is confused on the subject.

Let's go back to basics and examine the classic definition of _metabolism:_ Metabolism is the sum of all the chemical reactions that take place in the body.

Surprised that it's that simple? Of course, we can speak of the metabolism of specific tissues such as the liver and the thyroid. We can speak of the metabolism of specific substances such as cholesterol. We can also speak of the metabolism of different organ systems, such as digestive metabolism. But people who say "I want to speed up my metabolism" are really referring to calorie-burning metabolism, also known as thermic efficiency.

With that understanding, if we wanted to raise metabolism we had to work on jump-starting the body's chemical efficiency through exercise, medications, new supplements, or magical food combinations. These methods have certainly had their uses, but they no longer adequately match metabolic reality.

That is because metabolism does not take place only in the body. It operates equally and simultaneously in body, mind, emotions, and spirit. Remarkable research in the mind-body sciences has conclusively demonstrated the connection between what we think and feel and the chemistry of the body. Science has revealed the profound effects of the chemistry of stress, relaxation, pleasure, and depression, and the effects that prayer, pets, and other human beings can have on our lives.

Indeed, everything that happens in our world from birth to death is an integral part of metabolism. Everything taken in by our senses that has an effect on the nervous system undergoes some form of digestion, assimilation, and elimination. At this very moment we are metabolizing elements of our last meal, of the words printed on this page, and of important details of pivotal events that took place this week or even earlier in our lives. We metabolize our dreams, fears, and fantasies, our joys and sorrows, our jealousies and delights, the beauty that surrounds us, the betrayals we have suffered, and our lucky and unlucky moments. Add to all this the frozen yogurt, chicken sandwiches, and sushi we eat. No wonder we take so many digestive remedies.

Taking all these elements into account, we arrive at a new definition of metabolism:

**Metabolism is the sum of all the chemical reactions that take place in the body plus the sum of all our thoughts, feelings, beliefs, and experiences.**

This definition is not only more scientifically accurate and complete; you may also recognize that it is intuitively correct. If you can, it means you are in sync with disciplines such as Ayurveda and Chinese medicine, which for thousands of years have pointed to the inseparability of mind, body, and cosmos. You have most likely had many moments when your metabolism was transformed by something other than food, drugs, or exercise. Can you recall a time when you were sitting at home feeling down and sorry for yourself, a moment when, had someone asked how your metabolism was doing, you would have answered "sluggish and slow"? But suddenly the phone rings, and the caller is a love interest or someone bringing good news about money. Your mood instantly soars. You feel alive and optimistic. At that moment, if someone asked again how your metabolism was doing, you would answer "full speed ahead."

What happened? You experienced a huge surge of energy without drinking coffee or taking any drug or stimulant. What revved up your body was a shift in your emotional world. That is how quickly metabolism can change.

In essence, this book is about reclaiming metabolic power. It is about examining the circumstances in which we have given our power away, squandered it, and been left with less. Many of us have grown used to the mistaken idea that we have somehow been cheated out of the metabolic power that should be ours. We believe we don't have enough energy to do what we need to do because somewhere along the line the system failed. We believe that all it takes to repair it is a vitamin, a drug, a diet, or an exercise program. If only we could find the right expert with the right answer, everything would be fine.

The truth is that we are born with great metabolic power. If you have made it to this day, the miraculous body you inhabit has done its job very well. The forces of the universe bring us into the world with a life-sustaining momentum that lets us gather along the way everything we need to keep gaining altitude. But we get caught in the survival physiology of the fight-or-flight reflex and lose energy in great quantities to chronic low-level stress, to speed, to insufficient breathing, awareness, and pleasure, and to discordant rhythms and a negative personal story. We also squander energy when we lose our dignity and inner authority by capitulating to a job, to money, to "experts," to toxic emotions, and to an excessively fast pace of life, to name just a few factors.

Reclaim your personal energy in these areas and you will reclaim a wealth of metabolic force. It is that simple. And it is that profound. Personal power and metabolic power are one and the same.
_The Eight Universal Metabolizers_
The essence of the slow-down diet lies in the eight universal metabolizers. I consider them some of the most important missing pieces of our collective metabolic puzzle, the next generation of powerful biological rejuvenators that will be essential to our health at the deepest level of medical reality.

Although they have existed for a long time, the eight universal metabolizers have been overlooked until now for several fundamental reasons. First, we move too fast to notice them, since their chemical power activates only once the right degree of "slowness" has been reached. Second, we have believed that metabolic boosters must be some kind of food, drug, or physical exercise, but the eight universal metabolizers belong to a different category. Call these metabolic boosters _transubstantial_, meaning "far above the realm of matter." You cannot touch them, see them, bottle them, or sell them on the Internet, but they are as fundamental to metabolism as vitamins, minerals, water, and exercise, and perhaps more so. Without them, we could never become the vital, expressive creatures we are meant to be.

The eight universal metabolizers are:
• Relaxation| • Quality
---|---
• Awareness| • Rhythm
• Pleasure| • Thought
• Story | • The Sacred
As you will see in the chapters that follow, each of these metabolizers is a key that opens the door to an entirely new way of transforming your nutritional metabolism, often so easily that it comes as an unexpected surprise. As a nutritional psychologist, I have seen too many people grow frustrated with low-calorie, low-fat, low-vitality diets. I watched those same people exercise month after month or year after year and still complain about their sluggish metabolisms. Others forced themselves to lose weight on restrictive diets but lived in a culinary prison with no room for pleasure or parole. Clearly, they needed something more.

I found the mystery ingredient (the universal metabolizers) when I discovered yoga. While taking lessons in breathing and body awareness, something extraordinary happened. Unexpectedly, I had more energy and clarity than ever before in my life. My digestion suddenly grew stronger. I visibly slimmed down, my cravings for sweets disappeared, my appetite normalized, and I gained an entirely new awareness of food and its enjoyment... all as a result of taking in more oxygen when breathing and paying more attention. I did not adopt a system of self-flagellation, I did not pursue it obsessively, and I did not fight against food.

As I incorporated simple yoga-based breathing and body-awareness techniques into my practice, my clients began to experience nutritional breakthroughs. I marveled as those with chronic ailments quickly found progress and relief. Many digestive problems disappeared within days once clients learned the techniques of stress-free eating. Those who embraced the sacred, tuned in to the wisdom of their gut (the messages of the enteric nervous system), and allowed themselves to feel more pleasure finally managed to lose weight. Others said goodbye to food addictions and overindulgence, raising their energy levels and mental sharpness and discovering a new relationship with food.

In short, these people achieved more by doing less. They stopped fighting food and began to embrace it. They relaxed while eating and thereby raised their metabolism. They chose healthy pleasure over pain. They worked with natural rhythms instead of against them. They stopped being victims of food, of their bodies, and of other people's rules, and instead took responsibility for making simple but profound changes that empowered their metabolism. They slowed down and trusted life.

I have had the great satisfaction of witnessing many metabolic transformations in my work at Canyon Ranch and the Kripalu Center, with corporate clients, and in workshops with thousands of students. Remarkably, these changes are quite common and within anyone's reach.
_Slowing Down Works_
Sandy had been dieting for six years without lasting results. She went from one system to another but always quickly regained all the weight she lost. She complained of constant acid reflux (heartburn) and of episodes of overeating. She lived in a relentless battle with food that consumed a significant part of her vital energy. Although her doctors had told her she was in perfect health, Sandy was convinced that her problem was a sluggish metabolism. She was tired of battling with food and exercise but did not know where to turn.

Within six weeks of working with me, Sandy lost 15 pounds, and within four months she had lost a total of 45 pounds while eating more fatty foods and exercising less. Her war with food was over, and she had finally gotten what she wanted. Here is what we did.

We began by focusing on quality. When I met Sandy, her diet included very little fresh or home-cooked food. She consumed many artificially sweetened, mass-produced products made with low-quality fat. She ate hardly any low-toxicity, nutrient-rich food. Using the guidelines you will find in chapter 2, we improved the quality of Sandy's diet. As we did, the amount of food she ate naturally began to decrease. When the body does not receive the quality nourishment it wants, it does not always have a mechanism sophisticated enough to ask for better-quality food; instead, it cries out for more quantity.

Next, we looked at rhythm. Sandy was in the habit of skipping breakfast, eating a small, hurried lunch, and serving herself a large dinner after work at around eight in the evening. Like Sandy, most people do not realize that the body metabolizes food most efficiently at midday, specifically when the sun is at its zenith. Research shows that calories are burned best at lunch. Late at night and early in the morning are the least efficient times to metabolize food. Sumo wrestlers do not get fat by eating tons of ice cream; they eat the same rice, vegetables, and sushi as their compatriots. The difference is that they consume these foods in large quantities and late at night.

Sandy did not realize she was following the sumo wrestler's diet. I recommended that she eat a good breakfast, have a substantial lunch, and eat lightly at dinner. She would consume more calories, but she would concentrate them at the hour of greatest metabolic efficiency. By taking more time to eat, she would literally be mixing more oxygen into her meal, gaining greater calorie-burning capacity and more robust digestion.

Then, since Sandy herself had said she was a fast eater, I asked her to relax and breathe. There is a phenomenon scientists call the cephalic phase digestive response. Cephalic means "of the head." The cephalic phase digestive response is a complicated term for how the body experiences the taste, aroma, satisfaction, visual stimulation, and overall pleasure of a meal. Depending on which research study you consult, between 20 and 80 percent of our calorie-burning capacity, our digestive capacity, and our assimilation of specific nutrients come directly from the cephalic phase digestive response, that is, the phase of digestion that takes place in the head. By eating in a hurry, Sandy was significantly lowering her metabolism. Her rushed way of eating forced her body into a stress reaction, drastically reducing her digestion and her ability to burn calories. After she incorporated simple deep-breathing exercises, the increased oxygenation and blood flow to her digestive system stimulated her thermic efficiency, that is, her calorie-burning capacity. Breathing and relaxation also reversed her stress-induced digestive shutdown, and her chronic acid reflux disappeared completely.

After these successful results, I asked Sandy to do something that at first seemed crazy. I suggested that she enjoy her food, allow herself to feel nourished, and feel no guilt no matter what she ate. This was especially difficult for Sandy, who had spent much of her adult life fighting food. For the first time, Sandy was truly considering the possibility of giving herself pleasure rather than pain. Indeed, pleasure is a potent metabolizer that increases oxygenation and blood circulation and reduces the production of cortisol and insulin, which helps burn fat and build muscle tissue. It also brings the parasympathetic nervous system into dominance, which fully activates digestive metabolism and calorie-burning capacity.

Finally, we addressed Sandy's biggest problem: overeating. To her surprise, I explained that she had never managed to master her overeating problem for a simple reason: the problem did not actually exist. I have found that about nine out of ten people who say they overeat actually have a different problem: they are not eating when they're eating. Because of a deficiency of a crucially important universal metabolizer (awareness), many of us eat as if we were asleep. By not noticing what we take in, we bypass the body's satiety mechanism entirely. The result is that we stay hungry.

As you may recall from high school biology, every organism on the planet (whether amoeba, lizard, lion, or human being) is programmed for two functions in common: to seek pleasure and to avoid pain. When we eat, we are seeking the pleasure of food and avoiding the pain of hunger. If we pay no attention to our food, the brain interprets that missed experience as hunger and signals us to eat more. We mistakenly believe our problem is a lack of willpower, when all we really need is to be more present when we eat.

Sandy found the net result of her work astonishing. She catalyzed a permanent change in her weight and, for the first time since adolescence, felt energized by food. Slowing down and working with the body's wisdom allowed her to raise her metabolic rate.

Are you beginning to see the possibilities for your own metabolic breakthroughs?

Each of the eight universal metabolizers listed above is covered in its own chapter of this book. Each chapter represents one week of the eight-week slow-down program. Each chapter begins with reflections, research findings, and conclusions to familiarize you with the principles of that week's metabolic booster, and ends with practical tools and techniques to help you focus on applying those principles. You will feel like one of my personal clients and experience immediate, lasting, and gratifying benefits.

The eight-week slow-down program lays the groundwork for something special to happen in your metabolic world. Because when you give yourself the chance to explore your unique relationship with food, let go of fear and guilt, and treat your body with dignity and love, you will also raise your metabolism. The chemistry of the human body is that simple and that elegant. Make this program fun, see it as an opportunity to explore, grow ever more interested in your own nutritional nuances, and success is guaranteed. Keep a journal and write down your activities and reflections at the end of each day. Note what you ate and how you felt afterward. Record your insights, focus on the positive, and acknowledge your progress, whether it comes in small steps or great leaps.
Are you ready to let go of the habits that don't work and embrace the ones that do? Are you ready to open yourself to the full range of abundant metabolic boosters that can kindle the flame of body and soul? Because science has confined itself to a narrow view of what health can really mean, our fitness experts are satisfied once we have burned enough calories or reached our target heart rate. Our diet gurus are satisfied when we drink milk with enough vitamin D and juice with enough vitamin C. We have barely noticed that our collective diet has been deficient in some important nutrients that have been around for a long time but that we have somehow overlooked: "vitamin L" (love), "vitamin H" (happiness), and "vitamin S" (spirit). You will not find these essential nutrients listed on your cereal box, but don't let that fool you. If something truly nourishes the soul, then it literally nourishes the body. And these nutrients are the fuel of metabolism.
WEEK 1
The Metabolic Power of Relaxation
If time, so fleeting,
must bear witness to our death,
let us fill it with good food
and good talk, and then perfume it with conviviality.
M. F. K. FISHER
Gandhi once said, "There is more to life than increasing its speed." You would not think so, judging by the way many of us eat. Eating under stress is not only common; it is socially acceptable and often a prerequisite for holding down a job, a family, or a life. An office worker named Eva describes a very typical situation.

"I'm always overloaded with work, so I have to fit meals around my work schedule, which means grabbing a bite at my desk whenever I get the chance. I usually eat two meals a day at the office, but they aren't real meals. I work, take a bite, answer the phone, take a bite, write something, take a bite, run around the office, take a bite. I know I should eat more slowly, but my schedule would never allow it."

If you are a fast and furious eater, it is time to shift gears. Because the more slowly you eat, the faster your metabolism will be.

Can you recall how your body feels when you eat in a state of anxiety or stress? Most people report symptoms such as heartburn, cramps, gas, digestive pain, belching, and intense hunger. Under stress, the body automatically assumes the classic fight-or-flight response. This function of the central nervous system evolved over millions of years into an admirable safety mechanism that protects us in life-threatening situations: facing an attacker, surviving natural disasters, quickly evading anyone or anything, or overpowering them by force.

In the moments when the stress response is triggered, the heart rate accelerates, blood pressure rises, breathing quickens, and hormones that provide immediate energy, such as adrenaline, noradrenaline, and cortisol, are released into the circulatory system. Blood flow is diverted from the core of the body to the head, for quick thinking, and to the arms and legs, for the energy needed to fight or flee. Most important, the digestive system shuts down. It makes perfect sense: if you are trying to hold off an angry gorilla, you should not waste energy digesting breakfast. All the body's metabolic functions must be geared expressly toward survival.

Picture yourself rushing anxiously from your apartment to your office while nibbling a muffin, or eating a hurried lunch while swamped with work and thinking about anything but your food, or eating lunch or dinner while feeling angry that the universe is not cooperating with your modest demands. In those moments the body has no idea that what it is experiencing is not a matter of life or death; it is genetically programmed to initiate the fight-or-flight response the instant the brain perceives stress. This means that, depending on the intensity of the stress you are experiencing, the various physiological changes of the fight-or-flight response are activated, including some degree of digestive shutdown.

So if you have ever eaten while anxious and then felt as if the food were sitting paralyzed in your stomach, that is exactly what is happening. The food is waiting, for minutes or hours, until the body returns to normal digestive function.

The famous capitalist Malcolm Forbes once declared in defense of fast food that "just because fast food can be had in a hurry doesn't mean it's junk." Perhaps that is true. But what he failed to acknowledge is that simply eating a food in a hurry does not mean the body will assimilate it any faster. You can eat the healthiest meal in the solar system, but if you eat it in a state of anxiety, your digestion will be drastically impaired; your mood will have affected your food. Salivary enzyme content in the mouth is reduced, the breakdown of protein, fat, and carbohydrate macromolecules in the stomach is impaired, and blood flow to the small intestine drops by as much as fourfold, which translates into reduced assimilation of vitamins, minerals, and other nutrients. So it is not only what we eat that matters, but also the state of mind we are in when we eat.
_The Stress-Metabolism Connection_
The key to understanding the deep link between metabolism and stress is the central nervous system (CNS). The part of the CNS that exerts the greatest influence on gastrointestinal function is called the autonomic nervous system (ANS). This aspect of the nervous system is responsible for getting the stomach going, making enzymatic secretions flow during digestion, and keeping the dynamic process of nutrient absorption into the bloodstream in motion. The ANS also tells the body when it should not be in digestion mode, for example, when there is no food in the stomach or when the fight-or-flight response is activated.

The ANS has two subdivisions that help it fulfill its dual task of stimulating and inhibiting digestion: the sympathetic and parasympathetic branches. The sympathetic branch activates the stress response and suppresses digestive activity. The parasympathetic branch relaxes the body and activates digestion. It can be helpful to think of these two parts of the nervous system as on and off switches.
When the parasympathetic nervous system is active | DIGESTION "ON" | Stress response off: relaxation
---|---|---
When the sympathetic nervous system is active | DIGESTION "OFF" | Stress response on: fight-or-flight mode
Put simply, the same part of the brain that turns on stress turns off digestion. Conversely, the part of the brain that turns on the relaxation response fully activates the capacity for healthy digestion. Eating healthy food is only half the equation of good nutrition. The other half is being in the ideal state to digest and assimilate that food.
Chen, a charismatic forty-six-year-old expert in Chinese medicine, suffered from perennial digestive complaints despite excellent overall health and a vast knowledge of natural healing. He thought it might be time to examine his diet and asked me for help. When I asked him some basic questions about his eating habits, his answers stunned me. Chen habitually stopped at McDonald's on his way to work and ate two Egg McMuffins in his car while weaving through city traffic. At lunchtime he ducked into the same McDonald's and ate two Big Macs in the car on the way back to the office. After work, he ate two slices of pizza. Chen confessed that he wanted to feel better but was not willing to cook, bring his lunch to work, eat vegetables, or give up McDonald's. Imagine that!

I told him I thought I could actually help him despite the impossible constraints he was imposing on me. Here is the simple strategy he reluctantly agreed to follow: he had to park the car to eat his Big Macs, and he had to spend twenty minutes enjoying them slowly and sensually. He had to do the same with his Egg McMuffins at breakfast. He had to take time to slow down with his food and with his life. And he was to breathe deeply before, during, and after his meals.

Two weeks later Chen called me, excited, with great news. First, his digestive symptoms had disappeared. Then he said, "You won't believe this, but I actually _hate_ Big Macs. I've been eating them for fifteen years and I can't stand them. Have you ever tried to savor a Big Mac? It can't be done. You have to eat it fast and drown it in ketchup to kill the taste."
Chen was not a relaxed eater. He had many patients to see throughout the day and, apparently, very little time to feed himself. The simple act of taking time to slow down and eat allowed him to shift into parasympathetic rather than sympathetic dominance, and his digestive complaints quickly disappeared. Once that happened, his body's wisdom could finally give him feedback on his food choices, and Chen was subsequently able to give up Big Macs naturally and effortlessly. He did not have to use willpower to resist a favorite food or make a mental effort to choose better. All he had to do was try to savor a Big Mac.
_The Biochemical Burden of Stress_

Consider the information in the following chart as inspiration to try the benefits of slow, relaxed, civilized eating.

The stress response raises or lowers the following factors:
↓ **Nutrient absorption:** mainly due to reduced gastrointestinal oxygenation and blood flow; reduced enzyme production in the stomach, pancreas, and liver; and reduced bile flow from the gallbladder.
↑ **Nutrient excretion:** loss of calcium, magnesium, potassium, zinc, chromium, selenium, and other trace minerals through the urine.
↑ **Nutrient deficiencies:** particularly vitamin C, B vitamins, iron, zinc, and selenium.
↑ **Blood cholesterol:** stress itself raises LDL cholesterol levels.
↑ **Serum triglycerides:** rise instantly during the stress response.
↑ **Blood platelet aggregation:** a major risk factor in heart disease.
↑ **Salt retention:** can lead to high blood pressure.
↑ **Cortisol:** linked to weight gain, abdominal obesity, and the inability to lose weight or build muscle. Its excess production contributes to premature aging of the body.
↓ **Gut flora density:** stress destroys beneficial intestinal bacteria. This can lead to immune and skin problems, nutrient deficiencies, and digestive disorders.
↓ **Oxygen supply:** affects every aspect of metabolism.
↓ **Thermic efficiency:** your ability to burn calories is diminished.
↑ **Hydrochloric acid production:** increases the likelihood of ulcers.
↓ **Growth hormone:** an important hormone for the growth, healing, and repair of body tissues. Helps burn fat and build muscle.
↓ **Salivary secretions:** reduced capacity to digest starches and diminished oral immune factors.
↓ **Thyroid hormone:** can decrease metabolic activity throughout the body.
↑ **Swallowing rate:** rapid swallowing is a factor that increases the likelihood of digestive disorders.
↓ **Gut transit time:** can lead to diarrhea and to food macroparticles entering the small intestine prematurely; this is one factor that can contribute to food allergies, food sensitivities, and various diseases.
↑ **Gut transit time:** can lead to constipation. Also a risk factor in diseases of the colon.
↑ **Food allergies and sensitivities:** abundant anecdotal evidence; probably due to reduced immunity and problems with intestinal permeability.
↑ **Erratic function of the lower esophageal sphincter:** the lower esophageal sphincter opens when it shouldn't, causing acid reflux (also known as heartburn).
↑ **Insulin resistance:** chronic low-level stress can make affected cells unresponsive to insulin, a factor that contributes to diabetes, weight gain, heart disease, and aging.
↓ **Eicosanoids:** this important group of regulatory hormones includes prostaglandins, thromboxanes, and leukotrienes. They influence energy levels and numerous metabolic functions.
↑ **Risk of osteoporosis:** bone density has been shown to decrease in stressed and depressed women. Stress increases urinary excretion of calcium, magnesium, and boron.
↑ **Oxidative stress:** causes premature aging. A precursor to numerous diseases.
↓ **Muscle mass:** means greater accumulation of fat tissue and a slower metabolism.
↓ **Sex hormones:** can mean reduced libido, energy, and muscle mass.
↑ **Inflammation:** the basis of many major ailments, including diseases of the brain and heart.
↓ **Mitochondria:** these are the energy producers within our cells. When the total number of these tiny cellular organelles decreases, we literally produce less energy. Can lead to chronic fatigue.
↓ **Kidney function:** means greater toxicity, electrolyte imbalance, water retention, and heart disease.
Are you beginning to grasp the metabolic power of relaxation? Do you see how eating your meals in the natural and necessary state of parasympathetic dominance can yield excellent results with food and metabolism?
_Lessons from the French_
One of the most fascinating examples I know of the profound difference between relaxed eating and hurried eating comes from European culture. Have you ever been to France? Do you know how the French approach eating? Asked this question, most people familiar with that culture point out that the French take several hours for lunch, drink generous amounts of red wine with their meals, eat plenty of cheese and high-fat foods, usually serve themselves small portions, make midday their largest meal of the day, are fanatical about fresh food and high-quality ingredients, exercise less than Americans do, smoke heavily, are thinner, and linger over dinner and celebrate their meals rather than eating and running. Until recently, the French didn't even have a term for "fast food."
Compare this with Americans, who typically spend mere minutes on breakfast and lunch rather than an hour or more; who eat their largest meal of the day at dinner rather than at lunch; and who, in general, don't drink wine with meals, don't insist on high-quality ingredients, and don't make each meal a cultural celebration worth remembering. Americans also serve themselves larger portions, are more inclined to exercise, and have bulkier bodies.
Several years ago, researchers began comparing health in the United States and France, and their results were quite surprising. They found that the French consume a much higher per capita percentage of dietary fat than Americans do. The average French man and woman eat far more fatty foods over the course of a year. That should mean higher blood cholesterol levels and a higher rate of heart disease, yet it turns out that both rates are significantly lower among the French than among Americans. For the scientific community, this revelation was as earth-shaking as a UFO landing, because heart disease and blood cholesterol are supposed to rise as people eat more fat, not the other way around.
Our best medical minds set out to resolve this dilemma and examined every possible explanation. Scientists reasoned that there must be a mystery ingredient in the French diet that gave them their health advantage. They guessed it was the red wine. So they isolated the active chemical components of red wine: the polyphenols. This, supposedly, was the X factor protecting French hearts.
Days later, the front pages of the country's major newspapers told us to "Drink more wine." Naturally, this caused quite a stir, because it contradicted many other studies telling us that alcohol kills brain cells, suppresses the immune system, damages the liver, harms developing fetuses and, as if that weren't enough, wreaks havoc on society through drunkenness and addiction. Whom to believe?
Our solution to the controversy was to isolate the wine's polyphenols, press them into capsules, bottle them, and sell them in health food stores across the country. Problem solved: researchers could sleep soundly knowing that little red-wine polyphenol pills were all that separated Americans from the French and their enviable rates of heart health.
But let's look at the phenomenon more closely. None of the researchers on this project really considered the big picture. First of all, when the French eat, they almost always do so in parasympathetic dominance (the physiological state of relaxation and maximum digestive function). Even if they are stressed, taking plenty of time to eat their meal, savor it, and socialize with their compatriots probably helps them relax. And if that doesn't calm their tensions, the red wine surely does.
Although fast-food culture is gradually taking hold among them, the French (and other Europeans) generally don't use lunch for business meetings the way Americans do. The context in which they eat is not business but pleasure. As a culture, they place a high value not only on food but on nourishment. They don't regard eating as an annoying biological demand to be satisfied quickly so they can get on with things. They take time during the day to relax, celebrate, and honor the deep human need to dine. What keeps their cholesterol levels and heart disease in check is not the polyphenols in red wine but their parasympathetic nervous system: the optimal state of digestion and assimilation in which, thanks to their frame of mind, they almost always eat.
stressed eating = sympathetic dominance = digestive shutdown
---
relaxed eating = parasympathetic dominance = full digestion
Here is one last lesson from the French. An American geologist I know was sent to France to supervise a three-week field excavation project. With very little time allotted to complete the job, she quickly grew frustrated when, every day at noon, her French crew disappeared into town for a two-and-a-half-hour lunch. After a week, she explained to the men that the company had given them a very tight deadline and that they would have to eat lunch at the work site. They discussed it among themselves and cheerfully agreed.
The next day, the crew arrived with a mysterious truck that sat motionless in the parking area all morning. When lunchtime came, they opened it up and unloaded tables, tablecloths, cutlery, dishes, flowers, a portable stove, and a splendid stock of provisions. They ate for two and a half hours amid the rocks and rubble, enjoyed their wine, and felt they had fully honored their American boss's request that they eat at work.
_Relax and Burn Calories_
Have you ever gone on vacation, eaten far more than usual, and still lost weight? About one in five people I've surveyed answer yes to this question. Others say they ate considerably more food yet stayed at the same weight. Under the old nutrition paradigm this is, at the very least, impossible (or else a miracle). With our new understanding of digestion and metabolism, however, the reason for this weight loss is easy to see. On vacation, many of us do something highly unusual: we relax. We shift from chronic sympathetic dominance to the parasympathetic state. Our frame of mind changes our metabolism to such a degree that we can eat more and still lose weight.
Yvonne, a graduate student, told me: "I went to Italy for a semester and really didn't hold back on food at all. I forgot my diet and lived the good life. I ate bread, cheese, desserts, gelato, all kinds of creamy foods, and loads of pasta. I could hardly believe it myself, but I lost eight pounds while I was there."
Arthur, a contractor by trade, said this: "I spent a couple of weeks at a resort in Jamaica. I was exhausted after a work project and needed a break. I ate a lot, drank a lot, slept on the beach, and I think I went for a walk exactly once. My wife still tells people how I lost seven pounds on the 'hedonist diet.'"
A woman named Ella works on a sailboat half the year in Nantucket and the other half in the Virgin Islands. She noticed that every time she arrived in the Virgin Islands, she lost about fifteen pounds within a month, with no change in her diet and no exercise. Can you guess the difference? Not only did she enjoy being in the Virgin Islands more than Nantucket; she also realized she felt more attractive on the islands. "The men born in the Virgin Islands don't care what you weigh. They actually prefer bigger women. In Nantucket the men don't pay much attention to me. When I get to the islands, the men find me very attractive. There I never worry about calories; I eat whatever I feel like and enjoy it. I'm a completely different person, and my metabolism changes completely."
The point, of course, is not to eat absolutely everything you want or to take more vacations in the Virgin Islands. The point is that many of us need to loosen up and live a little, because we will relax more and metabolize better.
The scientifically documented connection between weight gain and stress is quite compelling. Numerous clinical studies have shown that conditions involving high cortisol output are closely linked to fat accumulation. That's because one of cortisol's chemical functions is to signal the body to store fat and not build muscle tissue.
Remember that cortisol is the main hormone released in significant amounts during acute and chronic stress. Rats and monkeys stressed in experiments initially show elevated cortisol levels, followed by weight gain. This happens even though they eat normal amounts of food. Indeed, many people complain that even though they're following a low-calorie diet and exercising more, they still can't lose weight. In most of these cases, stress is the reason. This applies especially to those who gain weight around the middle, since excess cortisol production has the odd effect of fattening the belly.
_The Gift of Time_
During week 1, you can help yourself slow down and relax, and thereby boost your nutritional metabolism, with this simple exercise: commit to giving yourself the gift of more time at every meal. Realize that the world can wait while you take a few extra minutes with your food and reclaim your right to lunch.
• If breakfast takes you five minutes, make it 10. If it normally takes 10 minutes, stretch it to 15 or 20.
• Give yourself at least 30 minutes for lunch and dinner. See if you can stretch that to an hour.
• Rearrange your schedule at home and at work as best you can to give yourself more time. Think seriously about how to find those extra moments.
• As far as possible, get your family, coworkers, and bosses to seek out more time and relaxation at meals too. Find yourself a "slow-down diet buddy" so you can help each other.
• Eat only while seated. Don't answer your cell phone, home phone, work phone, pager, or e-mail, and don't do any kind of work while you eat.
So if you're one of those people who believe they're doing everything right to lose weight yet can't break out of the same plateau, ask yourself about stress. Do you lead a hurried life? Do you eat at the speed of light? Does your job demand that you live in a state of fight-or-flight? If so, no matter how hard you work at counting calories or grinding away on the treadmill, you won't get the result you want. What you need to do is the hardest thing of all: relax. Stop producing so much cortisol. Breathe deeply and give yourself life, find a little more peace, and give your calories a chance to burn.
Chronic stress can also increase the production of insulin, another hormone closely tied to weight gain. The pancreas produces insulin whenever blood glucose rises rapidly. One way insulin lowers blood glucose is by signaling the body to aggressively store excess carbohydrates as fat. Insulin also signals the body not to release stored fat. Chronic stress and the insulin output that accompanies it are especially problematic in a condition known as insulin resistance, in which blood glucose stays elevated even as insulin production rises, because the cells this hormone is supposed to act on stop responding to it. Add the typical carbohydrate-rich snacks we reach for when we feel anxious and unloved, and we pave the way for quick and easy weight gain. When it comes to losing weight, then, relaxing and counting our blessings is as important as counting our calories.
Picture yourself worrying about your weight, forcing yourself through a joyless diet, convinced you wouldn't deserve to exist unless you could shrink your body to perfect dimensions. These self-perpetuating messages will literally keep you in a state of chronic low-level stress. Even though you're dieting and consuming fewer calories, you'll be producing more cortisol and insulin, which signal your body to gain weight. In medical terms, chronic stress lowers thermic efficiency, that is, your capacity to burn calories and metabolize stored fat.
To sum up:
**Worrying about fat accumulation makes it increase. Anxiety about losing weight makes your body store fat and hold on to it.**
Many people use anxiety and stress to motivate themselves to lose weight. For example: "I won't go to the party unless I lose eight pounds first," or "I'll only look good once I lose the weight." This self-imposed stress seems to energize us because it stimulates the production of adrenaline and noradrenaline, hormones that heighten alertness. Over time, however, these fight-or-flight hormones can undermine the metabolism.
Although I've seen many extraordinary examples over the years, it still feels like magic when people tell me how relaxation transformed their bodies. Terry, a 55-year-old teacher, lost nine pounds in four weeks without changing anything in her diet. She simply decided to stop worrying about everything she ate. Jody, a 31-year-old writer, lost five pounds in a week (the same "last five pounds" she had been trying to lose for years) when she finally decided to stop obsessing over five little pounds. Esther, 48, had been dieting for ages without ever losing a pound. After several months on the "non-diet" diet, with no guilt and no self-imposed rules, she still hadn't lost weight, but at least she lost the guilt, the fear, and the dietary unhappiness.
The bottom line: you don't need to worry anymore or punish yourself over food. Stressing about weight loss is completely counterproductive, because the stress itself makes you gain weight.
_Relax and Build Bone_
We all know that calcium supplements can help build bone tissue, but have you heard how good inner peace is for your bones? Research has demonstrated an unequivocal effect of stress on bone density. Mice forced to live for three weeks in overcrowded cages showed significant bone demineralization: loss of calcium, phosphorus, magnesium, and iron. (Take note, city dwellers.) They also showed significant losses in the concentrations of trace minerals linked to bone health: zinc, boron, chromium, and cobalt.
The glucocorticoids, a class of stress-related hormones that includes cortisol, are largely responsible for stress's bone-weakening effects. These hormones actually block calcium absorption in the intestines, severely limiting the amount of calcium available for bone growth. Excess glucocorticoid secretion, from chronic stress for example, causes calcium loss in the urine, interferes with the growth and division of the specialized precursor cells at the ends of the bones, and even accelerates the breakdown of bone tissue. These bone-destroying effects of stress have been observed very clearly in experiments with female monkeys subjected to daily stress. The same can be seen in people with Cushing's syndrome (a disorder involving hypersecretion of cortisol) and in patients given large doses of glucocorticoids to treat disease.
So if you think all you need to keep your bones from failing is a calcium supplement, think again. The United States has one of the highest rates of calcium consumption in the world, and yet it also has one of the highest rates of osteoporosis. Something must be wrong. The most successful equation for maintaining bone density is not about consuming more calcium; it's about excreting less of it. We literally lose calcium in our urine within minutes of feeling stress, the same calcium that moments earlier was in our bones. A study by the National Institutes of Health published in the _New England Journal of Medicine_ confirmed that past or current depression in women is closely linked to bone loss. Depressed women had bone density readings up to 13 percent lower than those of non-depressed women. This study demonstrated conclusively that bone health and mental health go hand in hand.
It's worth noting that stress isn't the only factor that makes us excrete calcium in the urine. Other contributors include caffeine, alcohol, air pollution, cigarette smoke, excess sugar, excess animal protein, and phosphoric acid (found in many carbonated cola drinks). Combine all these factors, add them to the lifestyle of the typical office worker, and you'll find that every trip to the bathroom flushes away a precious mineral at an alarming rate. If you're someone who has received the "get more calcium" message a thousand times, it's time to look at the big picture.
A woman who attended one of my workshops approached me afterward, eager to tell her story. Despite a lifetime of exercise, a healthy, calcium-rich diet, and no family history of osteoporosis, Arlene had recently been diagnosed with the disease at age 42. Her doctors were baffled, and she was devastated by the diagnosis because it made no sense. After hearing me speak about the connection between stress and bone loss, Arlene had a personal revelation. Her job had made her sick. For more than sixteen years she had held a demanding, stressful publishing job, and it had consumed most of her life. For the first time she realized she could switch to more satisfying work, a change that would matter as much as taking her medication.
True to her word, Arlene soon left her job at the publishing house where she had spent most of her career. She found a magazine that offered a saner work environment, a healthier schedule, and a reasonable lunch hour. Her new job wasn't stress-free, but she did manage to build more positive relationships with her coworkers and to carve out some precious moments of relaxation during the day. When we spoke again a year later, Arlene told me the following.
"I always accepted a high-stress work life because I believed there was no other option. At some point it came to seem normal. For years I was a diet and exercise fanatic, but now I realize I was eating under chronic stress, and I know I paid a price for it... Now, for the first time in my life, I'm happy at work and I make sure to eat when I'm relaxed... The proof that it's all working is that I've managed to reverse my osteoporosis after trying several approaches without success. I know it isn't just about the calcium. It's about me." Once Arlene had her "eureka" about the link between stress and osteoporosis, she did something she hadn't done enough before: she tuned in to her body's wisdom and listened to her inner intelligence. She granted herself the luxury of believing she had a say in the world she herself had created.
Do you see the incredible metabolic changes we can achieve when we give ourselves the power to shape every choice we make in life, large or small? Do you see how bone health depends not only on the calcium in your food but on the feelings you carry in your heart and the thoughts you carry in your head? Nutrition alone isn't enough for your bones. They need nourishment.
Clearly, stress is a normal part of life and serves a healthy function. But the physiological stress response was designed to operate for a few minutes at a time, and only in life-threatening situations. In fact, in the first minutes of a full stress response, our metabolism reaches peak efficiency. It's when the response drags on, day after day, that this very survival mechanism meant to save us begins to wear us down bit by bit.
_Week 1: Your Primary Task_
Would you describe yourself as a fast, moderate, or slow eater? If you answered "slow," congratulations. If not, here is your primary task for week 1 and well beyond: become a slow eater. Whenever you eat with other people, compete secretly with yourself: you win if you're the last one to finish!
Week 1 is your chance to graduate from the fast-eating habit and lift your body, at mealtimes, into parasympathetic dominance, the optimal state for digestion, assimilation, and calorie burning. It's time to engrave indelibly on your awareness that you've left the fast-food era behind and entered the era of the "slow meal." This is your new lifestyle. If you've tried various diets and nutritional systems but haven't been able to stay satisfied with any one approach, it's time to change strategy. Let's put first things first. Instead of focusing on _what_ to eat, it's time to get clear about _how_ to eat.
**Exercise: Lifestyle Inventory**
During week 1 of the Slow Down Diet, eat the same foods you always do. But now relax and slow down as you never have before. Let go of the need to know "which foods are good for you." We'll take a closer look at that subject in week 2 and beyond.
For now, find a good journal where you can record the exercises and activities I'll assign throughout this book. Then take a moment to examine your own eating styles, stressed or otherwise, by answering the following questions:
• Do you tend to eat more when you feel anxious? Or less? Or sometimes more and sometimes less, depending on the situation?
• What kinds of circumstances trigger that way of eating: a certain time of day, a certain place, specific days of the week? Is your anxious eating related to work or to family?
• Roughly how often do you eat under stress? Can you express it as a percentage of your total mealtime? (For example, some people eat under stress only 5 percent of the time; others, 85 percent of the time.)
• Do you tend to eat certain foods when you feel stressed? If so, write down which ones. Which do you eat most often?
• After eating under stress, do you feel full or hungry? Have you noticed any common physical symptoms at those times or afterward?
• How much time do you spend eating during episodes of stressed eating? Do you savor your food? Do you chew it thoroughly, or wolf it down in a hurry?
Now think of the times you've eaten relaxed, enjoying the occasion and the food, perhaps in good company, when you felt satisfied once the meal was over. How often does that happen? What percentage of the time? Do you eat any particular foods at relaxed meals? How much time do you give them? Where do you eat these meals? With whom? When? How do you feel after a stress-free meal? What sensations do you notice?
Vitamin T ("time" for meals) is a fundamental nutritional requirement, and one missing from the diets of many of us in the "civilized" world. By giving your meals more time, you can rise from the category of a feeding mammal to that of a dining human being. The result is that you'll be nourished by your food instead of chaotically dumping nutrients into your digestive tract. In doing so, you'll maximize your metabolism.
Even if you feel pressed for time, the good news is that it takes less than a minute to free the body of stress and shift into peak nutritional metabolism. Fortunately, you won't have to sell your house and move to France to pull this off. You can eat stress-free anywhere, anytime, and reap the rewards almost instantly.
**The royal road to short-circuiting the stress response and eating in a relaxed, unhurried way is conscious breathing.**
Here's why.
Every emotional state has a corresponding brain-wave frequency and breathing pattern. Imagine you're driving through an intersection and suddenly have to slam on the brakes because someone has run a red light. If, in the middle of that near-death experience, you could notice how you were breathing, you'd find you were holding your breath. Think about it. The breathing pattern of stress and anxiety is shallow, arrhythmic, and infrequent. Once you realize you've avoided the accident and your life is safe, you'll probably exhale with a deep sigh. The body's automatic intelligence has us breathe deeply the instant it registers that the life-threatening circumstance has passed.
When we're in a state of stress and consciously adopt the deep, rhythmic breathing pattern characteristic of the relaxed state, we trick the central nervous system. The brain thinks something like: "Hey, I thought I was a bundle of nerves, but my breathing is relaxed, so I must be relaxed!" A signal travels from the cerebral cortex (the thinking center) to the nerves of the spinal cord and from there to the various organs of the body. The endocrine system also kicks in to switch off the stress response. The result is a shift from a state of low digestive activity to full digestive force.
I've watched many people heal or markedly reduce the symptoms of irritable bowel syndrome, heartburn, constipation, chronic stomach trouble, post-meal fatigue, and a host of digestive complaints through the habitual use of the two simple breathing techniques that follow.
**Exercise: Be Present and Breathe**
At every meal or snack, and any time you're about to put food in your mouth, ask yourself: "Am I about to eat under stress? Is my mind racing?" If the answer is yes, pause. Then take ten long, slow, deep breaths. Ideally, your deep-breathing exercise will follow this sequence:
• Sit in a comfortable position with your spine straight and your feet flat on the floor.
• You may keep your eyes open or closed.
• Inhale deeply, filling your lungs to about two-thirds of their capacity.
• Hold your breath for several seconds.
• Exhale completely.
• Repeat this cycle ten times.
This simple practice can short-circuit the stress response in as little as a minute, depending on how intense your fight-or-flight reflex is. You can use this technique even in situations where conspicuous breathing isn't acceptable to those around you, say, at a business lunch with stubborn people who don't appreciate oxygen. Simply stay centered on your breath while continuing to look at your fellow diners and follow the conversation. They'll think you're paying close attention, but what you'll secretly be doing is stimulating parasympathetic dominance. It truly is invigorating.
When you hold your breath for several seconds, the carotid bodies (tiny masses of nerve tissue containing specialized chemical receptors, located along the carotid arteries) misread this as a rise in blood pressure. The carotid bodies then signal the blood vessels to dilate, which produces an overall drop in blood pressure and, in turn, dampens the stress response.
By inhaling to only about two-thirds of lung capacity, you avoid the rise in blood pressure caused by the sheer effort of expanding the lungs to their maximum. By exhaling with more force than you used to inhale, you help push stale, spent air out of the lungs. Slow, deep breathing has also been shown to increase the body's release of endorphins, producing a sense of relaxation and well-being.
At the basic level of deep breathing, it's preferable to inhale and exhale through the nose. Air entering through the nasal passages is quickly warmed to body temperature, which matters because the lungs work more efficiently with warm air. You can easily verify this yourself by stepping outside on a cold winter day and breathing through your mouth: the cold air makes the lungs tense up. Nasal breathing also has a powerful effect on the central nervous system, because the nerve receptors in the nose lead directly to the brain. If sinus congestion makes nasal breathing difficult, you can breathe well enough through the mouth.
A useful variation of this technique is to place one hand on your abdomen, or even both hands, one on top of the other. This may help you focus more clearly on your belly and relax more deeply.
Many people talk about burning calories, but few realize that a calorie is simply a unit for measuring the heat released when something burns. To determine a food's caloric value, food scientists place it in a special device that essentially incinerates it and measures the heat given off. It's no surprise, then, that almost everything has a measurable caloric value. A fortune cookie contains about 30 calories. The page you're reading has at least 60 calories. The chair you're sitting on has 100,000 calories or more. And all calories require oxygen in order to burn.
If you want to maximize your metabolism, breathing is one of the most effective means of doing so, because the greater your capacity to take in oxygen, the greater your metabolic "burning" power.
**Inhale more oxygen and you'll burn your food better.**
It really is that simple. The digestive system is hungry for oxygen. Parts of the stomach lining consume more oxygen than any other tissue in the body. The intestinal villi, which bear the main responsibility for nutrient absorption, are charged with extracting large quantities of oxygen from the blood during the breakdown of a meal. When there isn't enough oxygen in the blood for the villi to extract, absorption drops.
The more we eat, the more breathing the body demands of us. After a meal, the parasympathetic nervous system triggers synchronized changes in respiration, blood circulation, and oxygen consumption. In other words, the brain automatically increases aerobic capacity in response to the need for more oxygen. Breathing more when you eat a lot is the equivalent of exercising more when you eat a lot. If anxiety or overstimulation interferes with the body's natural switch for deep breathing, your calorie-burning capacity will be limited. The simplest rule to follow here: if you eat more, breathe more.
Let's look further at the relationship between oxygen and weight loss. Have you ever started a low-calorie diet and lost no weight at all, or begun a diet, lost weight the first week, and then stayed at the same weight even while continuing to eat low-calorie foods? Many people are baffled by this mysterious phenomenon, but the reason is simple. Your metabolism has changed. The body has learned to tolerate the meager portions by reducing its oxygen uptake: less oxygen consumed means a lower metabolism. In many cases, what a weight-loss diet does is signal the body that it needs less oxygen. So when you put yourself on a low-calorie diet, you may think you're doing the right thing to shed more pounds, but you're actually working against yourself.
Another way to look at this phenomenon is to consider that the act of eating places a "demand" on the metabolism. Just as lifting weights demands that your muscles grow bigger and stronger, eating food demands that your metabolism grow stronger and more efficient. Food is literally like a weight your body lifts. What determines the nutritional and metabolic value of a meal is not only the volume of nutrients in the food; it's also determined by the process your body goes through to break that food down.
In fact, the simple act of eating raises metabolism all by itself. If we looked at one of the most common markers of metabolism, body temperature, we'd see that every time we eat, body temperature automatically rises. This is the reality behind the traditional medical adage "starve a fever": if your body temperature is already high, avoid eating so you don't drive it even higher.
Do you see how undereating, or eating foods of very low caloric value, can be counterproductive for weight loss?
Not surprisingly, if eating less can make us take in less oxygen and thereby lower metabolism, then eating more should raise metabolism. In fact, many people I've worked with who genuinely needed to lose weight, and who had followed a low-calorie diet for a long time without satisfactory results, lost weight when they began to eat more. Do you know anyone who has had this unusual experience? The increased food intake literally created a demand for metabolic strength, and therefore for oxygen uptake. The resulting increase in calorie-burning capacity more than offset the additional food.
Certainly, many of us gain weight simply because we eat too much. But when we swing to the opposite extreme, eating too little, we most likely reduce our calorie-burning capacity. Right now roughly 60 million Americans are on a diet. If low-calorie diets (that is, 1,400 calories or fewer per day) were truly effective over the long term, we'd see more success stories and fewer dieters. On the other hand, you shouldn't overeat and expect to lose weight that way either. The truth is that neither extreme, too much food or too little, will take you where you want to go.
If you truly want to reach your optimal weight and metabolism, you can't get there by denying your needs and working against your biology. Losing weight means gaining life. Eat relaxed and breathe generously, and you'll tap into nature's plan for greater health and inner satisfaction with food.
That's not all there is to oxygen. Greater oxygen uptake not only helps us burn our food; it's also required to burn the body's own internal fuel: fat. The "training effect" of any regular exercise delivers two fundamental benefits. First, exercise simply helps your body absorb more oxygen. Second, your body learns how to use that oxygen better. And the body's strategy for using oxygen more efficiently is to draw on fat as fuel. Most remarkably, you can gain at least some of the benefits of aerobic exercise simply by training yourself to breathe more fully while sitting and eating. You'll also thrive as a fat burner if you constantly remind yourself to breathe more deeply throughout the day. Breathing is literally a fat-burning exercise!
What you need to know about "vitamin O"
Let me reveal one of the best-kept secrets in the nutrition business: there is exactly one miracle nutrient, one with profound metabolic power, available to everyone and almost never used to the full. That miracle nutrient is "vitamin O," otherwise known as oxygen. And we need it in large quantities. When it comes to food, quality is what matters. When it comes to oxygen, quantity is what matters. Many people restrict their food intake, but nobody goes on an oxygen diet. Try it and see what happens. A human being can survive four weeks without food and four days without water, but only about four minutes without oxygen. Now that's an essential nutrient! Whether you knew it or not, oxygen has been and always will be your number-one nutritional priority.
In short, if you don't breathe, it's as if you hadn't eaten. The entire process of digestion consists of breaking food down into microscopic fragments that can be delivered to your cells and burned with oxygen to release energy. More than 95 percent of the energy generated in the body comes from the simple combination of oxygen plus food. Without oxygen, your food is literally useless. When you want to light a fire in the fireplace, what matters most is good wood (for fuel) and proper air circulation. Without oxygen, the fire couldn't exist and the fuel wouldn't burn. The same goes for your body. The body itself is literally a biological heat-producing machine. Nearly every chemical reaction inside us generates heat as a by-product. At the cellular level, food is the fuel and oxygen literally fans the flames of our metabolism. That's why the most commonly used measure of metabolic rate is oxygen utilization. Metabolism depends completely on oxygen. And oxygen comes through breathing.
It's surprising that oxygen, a vital part of our diet, doesn't receive the attention it deserves. Most of us learned in high school biology that oxygen combined with carbohydrates (food) releases energy. Yet few of us learned the secret that the more we breathe, the more calories we burn.
**Exercise: Breathe while you eat**
Breathing during meals is an excellent way to make yourself eat slowly and in a relaxed way. If you eat while distracted by work tasks or during a tense conversation, or if you're a habitual fast eater, your breathing will be shallower. By reminding yourself to breathe more deeply during meals, you'll naturally eat more slowly, be more present, and have a more powerful metabolism.
To increase your aerobic capacity while eating, ask yourself at least three times during a meal: "How is my breathing?" Then consciously deepen your breath with as little effort as possible. Focus on breathing to a depth that, while new to you, still feels natural and comfortable.
Use softer, deeper breathing as a natural pause during meals. Savor the oxygen as you would savor the meal itself. Take three deep breaths at each pause.
Consider the oxygen you inhale to be as fundamental to your meal as a salad or a pickle. Each deep breath delivers more oxygen to your bloodstream and your cells, where it instantly generates calorie-burning power. After several weeks, you'll notice that breathing while you eat has become a new habit. You won't need to remind yourself so often, because breathing will be a natural, automatic part of eating. By slowing down and breathing better, you'll speed up your metabolism.
Interestingly, another simple way to increase your oxygen intake, and therefore your metabolism, is to open a window. Outdoor air has a higher percentage of oxygen than indoor air. The lack of oxygen in the stale air of an indoor space or a windowless room (a typical situation in workspaces and office buildings) is a physiological stressor for the body. When the oxygen level in an indoor space is very low, heart rate and blood pressure rise slightly and blood glucose drops. We feel drowsy, irritable, low on energy, and in need of a pick-me-up. And what is the typical strategy we reach for when we feel this way? We look for something to eat, or grab a cup of coffee to wake us up. Many of us are starved for oxygen but mistakenly believe we're hungry for food.
As usual, the people with the most money are the ones who make the best use of these medical facts. The next time you're in a Las Vegas casino wondering how people manage to keep gambling for hours on end with such energy, enthusiasm, and abandon, you can thank oxygen. Casinos pump extraordinary amounts of O2 into their climate-controlled rooms to keep gamblers alert. They use one precious vital resource to get you to part with another. Can you imagine what our work lives would be like if every business and employer paid as much attention to maximum oxygenation as our friends in Las Vegas do?
More tips for relaxed eating and deep breathing
• Light a candle at the table, put on soft music, decorate your eating space with beautiful, cheerful objects.
• If you eat out, choose a restaurant with a relaxed atmosphere that's conducive to nourishment. If the food is takeout, find a peaceful or festive spot outdoors where you can enjoy your meal.
• Dine in good company, with people whose presence nourishes and inspires you.
• Keep your conversation uplifting and free of negativity and gossip.
• Mind your posture while eating. An upright posture lets you breathe more deeply and fully.
• During your workday, take a breathing break. Step outside for fresh air, even if only for a few minutes.
• Use mealtime to set aside all your worries and stop thinking about work. Use positive thoughts to help you assimilate nutrients and burn calories. If you insist on worrying and working, you can do so after you've relaxed and finished your nourishing meal.
As you experiment with these breathing techniques, notice any changes in your energy levels after meals, in your satisfaction quotient, and any improvement in digestive complaints. It would also be worthwhile to pay attention to your breathing through the rest of the day, because deliberate, slow, deep breathing will further increase the body's oxygenation, which translates into even greater metabolic force.
It would also help to notice any resistance you may feel to the idea of relaxing and slowing down at meals. This transition can often stir up conflict. It can awaken qualities in us that seem beyond our control. The way we eat is the way we live. So slowing down with our meals symbolically helps us live at ease with our bodies, our careers, our fears and desires, and whatever life brings. It's a matter of granting ourselves the right to share in the simplicity of our happy moments on Earth. It's a matter of reclaiming our time, our dignity, and the sanctity of self-care. If you've been wolfing down your food, it's time to relax your way to the metabolism you were meant to have.
_Key lessons_
• When the stress response is activated, digestion shuts down. When the relaxation response is activated, digestion works at full power.
• Chronic low-level stress, by stimulating the production of cortisol and insulin, reduces calorie-burning capacity. One possible result is weight gain.
• Worry and anxiety generate a stress response. One possible result is weight gain.
• A state of stress can create the metabolic conditions for loss of bone density. A state of relaxation supports healthy bones.
• Conscious breathing dissolves the stress response and promotes full digestive power.
• Oxygen is the body's most fundamental and necessary metabolic nutrient. The more we breathe, the better we digest and assimilate food, and the more calories we burn.
• Time is an essential nutrient.
• Before focusing on what to eat, learn how to eat.
WEEK 2
The metabolic power of quality
The discovery of a new dish does more
for human happiness than the
discovery of a new star.
JEAN BRILLAT-SAVARIN
It so happens that the most important and urgent nutritional question of our time ("What should I eat?") meets with a whole range of confusing and contradictory answers. Fortunately, I have a very practical suggestion for becoming your own dietitian and making sure you always choose foods of very good to excellent nutritional caliber. If you grant me the honor of being your personal nutritionist just once, if you ask me, "What is the simplest nutritional strategy that, more than any other change, could give me the greatest metabolic return, improve my health and my weight, and have a positive effect on the lives of others and on the very planet we live on?", I would tell you to follow this guideline:
**Raise the quality of your food.**
Quality is everything. In every major nutrition study that has compared the diets of the industrialized countries (consisting mostly of refined, mass-produced, low-quality foods) with those of traditional cultures (fresh, whole, locally grown foods full of vitality), the people eating traditional diets fare far better in every important category of health. Raise the quality of your food and you raise your metabolism.
The term _quality_ refers to any or all of the following attributes: real, fresh, organic, epicurean, made with love, homemade, locally produced, heirloom varieties, nutrient-dense, low in man-made toxins, grown and marketed with honesty and integrity, tasteful, full of true flavor rather than virtual flavors that mask an absence of nutrients and vitality. _Quality_ means a food has received care and attention, and that it has an interesting story.
As with automobiles and other durable goods, with food you get what you pay for. Would you even consider a car built from the cheapest parts, assembled in a rush, and designed with no attention to the driver's comfort? Science has no way to measure the value and effects of food quality on the human body, because we're still in our infancy when it comes to nutrition and can speak only in terms of nutrient values. When we nutrition experts lay out how a food's value is gloriously revealed in its nutrient profile, it all sounds very scientific. And it is. Except that these measures of a meal's true value are, scientifically speaking, quite limited and incomplete.
When the art of food is finally elevated to the place it deserves, scientists will be able to speak with greater wisdom and clarity. I'm speaking not so much of a different way of seeing food and nutrition as of an entirely new outlook on the world and our place in it.
When science studies a food, a nutrient, or a supplement, it rarely examines quality. That is one of the hidden reasons why food studies so often conflict with one another, and why you invariably receive contradictory messages about eating. Recall the famous "French paradox" discussed in the previous chapter, and how Europeans can eat more fat without the corresponding rise in cholesterol levels and heart disease that Americans suffer. This isn't only because of the metabolic power Europeans gain from relaxation, breathing, and unhurried eating; it's also a matter of quality. Much of European cuisine is at a level we should aspire to. That higher quality means a healthier metabolism. The only "paradox" here is why the researchers couldn't see the big picture.
There is yet another very important reason to choose high-quality food, one that most experts tend to overlook and that certainly deserves your attention if you're concerned about weight: the worse the quality of the food, the more of it we eat.
The problem of overeating in this country is not that we suffer from a collective disorder of willpower. True, many of us overeat. But we do so, in large part, because our food is nutrient-deficient. It lacks the vitamins, minerals, enzymes, and all the factors and energies we need, including those not yet discovered. The brain registers these deficiencies and responds wisely to this absence of vital chemistry by ordering the most sensible survival strategy: eat more food. If you went to a movie or a party and found the experience insubstantial, you'd leave feeling unsatisfied and wanting more. The same is true of food.
When you choose organic foods, your diet becomes more nutrient-dense. That's because, pound for pound, organic foods contain more vitamins and minerals than their non-organic, mass-produced counterparts. They also carry fewer xenotoxins, man-made substances such as pesticides and herbicides that act as anti-nutrients and disease-promoting agents. Organic simply means "real."
Of course, it's easy to become apathetic when you constantly hear messages about carcinogens in our food, the evils of carbohydrates, or mercury in fish. I've heard many people lament, "Everything is bad for you." But now you have a powerful tool to help you cut through all that nutritional confusion:
**Whatever you eat, choose the highest-quality version of that food you can.**
This maximizes the odds that your food will be healthy, whether it's bacon, bananas, bread, or a birthday cake. Yes, quality food is definitely more expensive. But that is true health insurance. What's at stake is your life and the lives of the loved ones who depend on you for their nourishment.
_Food is..._
Before we look at specific suggestions for bringing quality food into our diet, let's better understand the true metabolic power of quality by taking a closer look at what food really is.
Most people would say food is a collection of vitamins, minerals, macronutrients, and other chemicals. To determine a meal's value, we would measure the amount of nutrients it contains: read the label on any product and you'll see this philosophy in action. But it's time to catch up with the new millennium. This view of food no longer adequately describes nutritional reality. Food is much more than a pile of chemicals. It is energy and information.
This definition applies to any substance we consume, whether water, a vegetable or herb, a supplement, a medication, caviar, or cotton candy. Whatever metabolic effect the body receives from these substances occurs because they have communicated a specific message to our cells. The caffeine in coffee literally tells the heart to speed up, blood pressure to rise, and the nervous system to accelerate. The fiber in oatmeal actually converses with the intestines, telling them to contract, while communicating with the liver, pancreas, and bloodstream, instructing them to lower LDL cholesterol and stabilize blood glucose. The bioflavonoids found in berries tell the body to keep blood vessels strong and elastic, to reduce cellular inflammation, and to slow the aging of specific tissues, such as the macula of the eye. Food speaks to your body, and your body answers.
This is no outlandish notion about metabolism; it is scientific reality. With his simple formula E = mc², Einstein demonstrated that matter and energy are the same thing, each convertible into the other. And both are charged with information. In fact, every particle in creation, from a humble speck of dust to a galactic sun, contains immense quantities of information, also called memory. The mere fact that we can't always perceive with our five senses this library hidden within every component of matter doesn't mean it isn't there.
Take the tomato. If the soil it grows in is depleted, the tomato will prove low in minerals, with fewer natural sugars and more acids, meaning it will be tough, flavorless, and nutritionally inferior. If it's sprayed with pesticides and herbicides, it will carry messages to your body that can be carcinogenic, mutagenic, and neurotoxic. If it's grown on an impersonal factory farm, the tomato will lack vitality and charm. If it's picked by an underpaid migrant worker with no benefits and almost no labor rights, then the tomato carries hypocrisy and a lack of integrity. If it's sliced by a machine along with thousands of other tomatoes, then shipped to a fast-food restaurant and served on a bun with meat from a cow that has suffered even worse traumas, then our tomato is by now suicidal or even homicidal, because it has lost its soul and has no reason left to live. I think you get the idea.
Ancient healing systems such as Ayurveda and Chinese medicine have long recognized the energetic character of food. Rather than describing the chemical elements of a meal, these systems rely on "archetypal" elements. They describe them as earth, water, wood, fire, and metal; kapha, pitta, and vata; yin and yang. None of these elements can be seen under a microscope, yet they are plainly observed in action, much as everyone can sense our character even though it can't be pinpointed anywhere. Yin and yang are as real to the Chinese as protein and fat are to us.
For all these reasons, the true value of a food can never be discerned from a label. Its true value lies in all the energy and information it contains. And yes, that includes vitamin, mineral, protein, fiber, and fat content. But it also has to do with how the food has been grown, handled, transported, manufactured, advertised, cooked, served, and eaten. All this information lives inside a food the way you live inside your body.
So if we truly want to counter the rise in heart disease through diet, then it's time to put more heart into the way we create food, eat it, and share it with the hungry. If we want to check the runaway growth of cancer cells in the human family and limit the carcinogens in our food, then it's time to slow the world down, take stock of our own runaway growth, and rethink the frenzied way we produce our food.
Many people want food to give them health, happiness, and all the blessings of beauty. Well, the only way food could grant us such a treasure is if we created it in that image. That is what we will reap when we grow our food with the energies of love and beauty.
_Limit the anti-nutrients in your diet_
To get the most from what we eat, reducing our intake of nutrient-poor foods is as important as increasing our intake of healthy ones. Anti-nutrients literally sabotage the body's metabolic machinery at the cellular level. The most potent anti-nutrients to limit are:
• Low-quality fats
• Low-quality sugar
• Low-quality white flour
• Low-quality dairy products
• Low-quality meats
**_Low-quality fats_** means any food containing hydrogenated oils, partially hydrogenated oils, hydrogenated palm kernel oil, margarine and any similar product containing hydrogenated oil, cottonseed oil, Olestra (a synthetic fat), and most commercial cooking oils bought in supermarkets. Low-quality fats also include most fried foods: french fries, fried chicken, and so on.
Read the labels on everything you buy. Hydrogenated oils are found in many mass-produced foods, including potato chips, corn chips, crackers, cookies, processed, frozen, and baked goods, candy, and more. Most oils found in a supermarket are highly processed, meaning they have been heated to high temperatures and stripped of their most delicate essential fats and other nutrients.
**As far as you're able, replace products containing low-quality fats with quality oils and foods containing quality fats.**
These include olive oil, sesame oil, and coconut oil, all of which are excellent for cooking. Other oils to use in dressings and sauces include sunflower, flaxseed, hazelnut, pistachio, hempseed, and macadamia. Always use organic, unrefined, expertly processed oils. You'll generally find them in natural-food stores. (As a side note, I'm not a big fan of canola oil. It isn't very heat-stable, and most brands are overprocessed. The same goes for soybean oil.)
Use real butter instead of margarine. Ideally it should be hormone-free and farm-fresh or organic. The best butter is made from raw, unpasteurized milk. You can also use clarified butter, also known as "ghee." A traditional food long used in India, it is very heat-stable and is therefore suitable for browning and light frying.
Other recommended sources of health-supporting fats are:
Avocados — ideally organic and fresh
Olives — enjoy their wide variety
Fresh fish — especially ocean- or river-caught fish rather than farm-raised; seek out variety
Nuts and seeds — organic is always better
Nut butters — peanut butter, almond butter, sesame butter
Free-range eggs — from a real chicken that runs around a real farm and eats real food
Organic dairy products — including yogurt, cheese, and milk, especially when made from raw, unpasteurized milk from animals fed on fresh pasture
By the way, the healthy fats in your food do not turn into fat in your body. If you deprive yourself of essential fats in order to lose weight, you will get the opposite result. And even if you do lose weight, you will probably suffer some of the symptoms of clinical fat deficiency: irritability, fatigue, dull and brittle hair, dry skin, redness around the eyes, digestive problems, constipation, inability to lose weight, and mood disorders. So I would like to offer you an apology on behalf of all the nutritionists, dietitians, and doctors who have been giving you the wrong information since the late 1960s. It is not your fault that we pointed you down the wrong path. Remember, this is the same group of well-intentioned experts who invented hospital food. The intentions are good, but they do not always hit the mark.
Fat is so essential to life that if we could suck all the fat out of our bodies (the ultimate liposuction), we would die instantly. Fat serves as a source of energy for the heart and brain. It is an essential component of many of the hormones and chemicals that keep us alive. It is a nutritional source for the central nervous system, and it coats and protects every organ. For these and other reasons, and because the body cannot produce on its own all the specific fats it needs, we have classified these important dietary components as "essential fats," also known as "essential fatty acids." You may also have heard them referred to as "omega-3" and "omega-6" fatty acids.
Fat plays a vitally important structural role: it is one of the cornerstones of the wall that encloses every cell in your body. Your cell walls are nothing like the walls of your house. An architectural wall is inflexible, solid, devoid of intelligence, and impermeable to the elements, and it can be made of any material that keeps the outside out and the inside in.
Our cell walls are exactly the opposite. They are supple, permeable, highly complex, and extremely intelligent, for they must precisely control the traffic of thousands of different kinds of biomolecules across their surface every millisecond. As far as the body is concerned, the cell wall is a decisive element, and beneficial fats are an indispensable part of the process.
Perhaps fat's greatest claim to fame is that it makes up roughly 40 to 60 percent of the brain. How is that for an unglamorous statistic? So the next time you think, thank your fats.
Of course, low-quality fats are harmless in small amounts for most people. But when low-quality fats become part of our daily diet, our health suffers the consequences sooner or later. These fats, which are chemically different from quality fats, literally become the building blocks of our cell walls. The result is a cell wall that is more rigid, more susceptible to oxidation (aging), and less intelligent, gradually losing its ability to make smart decisions about what should come in and what should go out. This is of special concern when it comes to the brain, which is composed largely of essential omega-3 fat. When low-quality fat is incorporated into its structure, brain tissue oxidizes more easily and becomes rigid (and therefore "stupid"). This helps make you less interesting at parties and raises your odds of Alzheimer's disease, senile dementia, and other brain diseases. So you don't have to be a "brain" to recognize the need to eat more healthy fats and fewer dysfunctional ones.
**_Low-quality sugar_** refers to any food containing high-fructose corn syrup, fructose corn syrup, corn syrup, white sugar, glucose, "Florida Crystals," or any artificial sweetener. Read product labels to check for these ingredients. The various forms of corn syrup are commonly found in sodas, juices, candy, packaged sweets and cookies, and even in protein bars considered healthy. Wherever you can, remove products containing these ingredients from your home. Let them be an occasional exception on your menu rather than the norm.
Replace commercial soft drinks with organic juices, iced herbal teas, or water. Keeping variety in mind, use organic jams; fresh fruit; organic or freshly made cookies, cakes, and muffins; organic candy; and organic ice creams and sorbets.
**Replace all your "treat foods" with higher-quality versions from a health food store, made with quality sweeteners such as the following:**
Pure honey: The label should say "raw," "natural," or "unheated." Raw honey is high in enzymes and in phytochemicals related to plants and their pollen. It is traditionally used as both food and medicine. Raw honey should not be given to infants.
Maple syrup: High in minerals and phytochemicals. Organic varieties of maple syrup contain no formaldehyde, which is used in most mass-produced varieties.
Barley malt: Less sweet than other sweeteners; good for baking.
Stevia: A calorie-free, completely natural herbal sweetener with medicinal properties. A small amount can sweeten your drink or tea.
Sucanat: An authentic form of brown sugar, made from dehydrated organic cane juice. More nutrient-dense than white sugar, free of chemical residues, and good for baking and for drinks. Rapadura is a similar product.
The official line of the scientific community is that all sugars (whether white sugar, corn syrup, honey, maple syrup, and so on) are essentially the same chemically. Unfortunately, no scientific model currently exists that is subtle and precise enough to reveal the true distinctions among these very different carriers of energy and information. For that reason, our collective diet is awash in low-quality sweeteners, and many of us suffer the consequences: obesity, heart disease, and diabetes.
By the way, you might think that diet sodas and artificial sweeteners would help us lose weight because they contain no calories and no insulin-raising sugar. But that is not really so. After 40 years of commercial use of artificial chemical sweeteners, not a single study has ever demonstrated even a moderately convincing link between sugar substitutes and weight loss.
Instead, researchers are now discovering that artificial sweeteners (which deliver a false pleasure) may actually contribute to weight gain. By a trick of fate, the artificial-sweetener molecule is so clever that it convinces the brain it is real sugar, so the body releases insulin to help metabolize the artificial sugar. Finding no real sugar and having nothing to do, the surplus insulin performs its other evolutionary task, which is to signal the body to store fat. In addition, there is growing and compelling evidence that aspartame is a significant neurotoxin. For all these reasons, my professional advice is that if you have any artificially sweetened food in your house, make sure to "kill" it with a good stomp and get rid of it.
**_Low-quality white flour_** refers to mass-produced foods such as pastas, breads, cookies, muffins, and bagels; crackers; cold breakfast cereals; sugar-sweetened oat products; commercially produced granola or muesli bars; and pretzels, pastries, and doughnuts.
Only in the past century has our diet included such a large amount of refined, highly processed carbohydrates: white-flour products, breads, cookies, doughnuts, chips, pretzels, cereals, crackers, pastas, sweets, and so on. Our ancestors ate foods containing unprocessed carbohydrates. When we eat these foods, from which most of the vitamins and minerals have been stripped, our insulin levels spike, which signals the body to gain weight and store fat. In addition, excess insulin makes the body crave still more sugar and more carbohydrate foods. All of this can lead to diabetes, heart disease, and many kinds of degenerative illness.
There are countless diet books, such as those promoting the Atkins diet, the "Zone Diet," "Sugar Busters," the Paleolithic diet, high-protein diets, and others, that share one very useful piece of wisdom: the excess of refined carbohydrates in our diets is a problem. So instead of worrying about determining the precise amount of carbohydrates your body needs according to the scientists (warning: nobody knows that figure anyway), begin your exploration of the metabolic power of quality by including carbohydrates that are carriers of quality and by limiting or eliminating, as far as possible, those that are not. By doing just this, you will begin to control carbohydrate cravings and to discover your body's natural intelligence for determining portions, percentages, and amounts.
**Go through your pantry and begin replacing low-quality white-flour products with foods that contain quality carbohydrates.**
These foods include organic varieties of brown rice, beans, quinoa, barley, corn, amaranth, oats, steel-cut oats, lentils, chickpeas, and millet (grains and beans give the best results when soaked before cooking); organic and/or freshly made pastas; freshly baked sourdough, sprouted-wheat, or whole-wheat breads; rye crackers; crackers without hydrogenated oils; breads, crackers, and other products made with spelt flour; organic chips (corn, potato, and rice, made without oil or with olive oil); organic vegetables, including squash, sweet potato, yam, root vegetables, and potato; and organic fruits, always keeping variety in mind.
If you are trying to cut back on carbohydrates, you should focus on refined, mass-produced carbohydrates. Vegetables are allowed, including starchy ones; just don't eat them to excess. Fruits are excellent too. Simply be sure to keep variety in mind and not limit yourself to pineapple, grapes, bananas, and dried fruit, which can be high in natural sugars. Whole grains, such as brown rice, are preferable to their refined counterparts, but as a nutritionist I can tell you the world will not end if you eat white rice or white bread now and then. As long as these are not the main components of your diet, I have never heard of anyone dying from eating them occasionally.
**_Low-quality dairy_** refers to mass-produced, non-organic foods made with hormones, such as cheese, milk, yogurt, cream cheese, cottage cheese, flavored milk, and snacks containing cheese by-products.
The days of milk and cheese as exalted foods in our diet are numbered. Growing evidence suggests that milk's importance as a source of absorbable calcium has been greatly exaggerated and may even be false. Leafy greens and nuts and seeds are excellent sources of naturally available calcium. In addition to lactose intolerance (the inability to metabolize milk sugar), many people are sensitive or highly allergic to the protein component of milk without realizing it. When milk protein is heated to high temperatures, as happens during pasteurization, the complex milk-protein molecule (called casein) undergoes a radical change that can make it cytotoxic and neurotoxic. If you have experienced any combination of chronic sinus problems, nasal or lung congestion, postnasal drip, digestive sensitivity, headaches, multiple allergies, or dry skin, you are an excellent candidate for experimenting with a diet free of all milk and dairy products during week 2. Even if you have none of these symptoms, I would strongly encourage you to try going dairy-free this week to see what effect the change has on you.
Nutrition experts constantly disagree about the merits of milk and dairy products. That is because most of the commercially available products in this category are of extremely low quality.
**In general, I suggest keeping your dairy consumption to a minimum. When you do use these products, choose the ones listed below instead of mass-produced, low-quality versions.**
Milk: Raw, organic, unpasteurized milk is best. It would be wonderful if you could get it locally produced and hormone-free.
Cheese: Organic, or any high-quality local or imported variety made with raw, unpasteurized milk.
Yogurt: Full fat, organic, or locally produced when possible.
Cottage cheese: Full fat, organic, and fresh is best.
Butter: The highest quality is generally found in local, organic, raw-milk varieties or in butter imported from Europe.
Soy cheese: A useful substitute for many people. Most brands contain casein (the milk protein), but the product is usually well tolerated.
You can also use rice milk, almond milk, soy yogurt, rice yogurt, and soy and rice ice creams as substitutes for their dairy equivalents.
**_Low-quality meat_** refers to all meats used in fast food; processed meats such as packaged cold cuts and commercially produced hot dogs; meats used in prepared, frozen meals; any fresh or frozen meat from animals raised in cages, treated with hormones, and fed processed feed; and any meat from animals that were not raised and slaughtered cleanly and humanely.
**Wherever you can, replace these low-quality meats with chicken, turkey, beef, pork, lamb, or any other animal or poultry raised free-range, without hormones, and fed grass and organic feed.**
Many of these products can be found fresh at your supermarket or deli. You can also find organic versions of hot dogs, hamburgers, chicken broths, sausages, and other popular frozen and prepared meat products at a well-stocked health food store. Free-range or cage-free eggs (sometimes called "omega eggs") are the preferred quality option. You can also replace all or part of the meat in your diet with fresh or smoked fish, or with plant proteins such as tofu, tempeh, and nut butters.
It is time we faced the reality of our meat-eating habit. Humanity owes its survival, in large part, to the animals whose flesh we have eaten. To argue that eating meat is bad is to deny the sustenance that has brought us to where we are now; otherwise we would not even be here to argue the point. Nevertheless, our overdependence on animal foods represents a clear imbalance, and our relationship with the animal kingdom is killing us. Our insistence on mass-producing meat is seriously polluting our environment and stripping developing countries of precious water and land resources. On top of that, it has created the "mad cow" phenomenon. The inescapable truth is that eating a creature raised and slaughtered without respect or care is a way of calling down disease upon the human family.
You may be interested to know that the most expensive and sought-after meat on the planet comes from Kobe cows. They are quality cattle, and their appeal is due entirely to their lifestyle, so much so that if you ever had to envy a group of cows, it would be these. Kobe cows live in Hawaii. They enjoy a perfect, sunny climate; eat the healthiest, tastiest grass, grown in nutrient-rich volcanic soil; breathe fresh island air; have a beautiful ocean view; and get plenty of time to socialize and reflect in peace. They live a dream life. Is it any wonder they taste so good? You would taste good too if you could live like that.
In short, these cows give the people who eat their meat exactly what was given to them: life, harmony, nourishment, and nutrition. Of course, when we compare this with the horrific existence of factory-farmed animals, it becomes clear why experts get such different results and draw so many contradictory conclusions about the relative health benefits of eating meat.
The results vary because the quality of the meat varies. That is why some experts approve of eating meat while others conclude the opposite. In my judgment, the most respectable research shows that countries with a high per capita consumption of commercial meat products, combined with an excess of refined carbohydrates, hydrogenated fats, and low-quality vegetable oils, have the highest incidence of meat-related cancer, while in traditional societies that consume no sugar, white-flour products, or low-quality oils, and that do consume high-quality meats, there is no link between cancer and meat consumption. Do you see the implications?
_Week 2: Your primary task_
Week 2 is your opportunity to do your best to clear your home of low-quality, mass-produced, uninspired food products and replace them with high-quality equivalents. It is time to focus on foods that are fresh or homemade, organic, locally produced, and of the best quality you can find, given whatever limits you face in terms of time, convenience, money, or availability. During this time, let go of your need to know exactly which foods to eat and in what amounts.
Your primary task for week 2, then, is this: wherever you eat and whatever food you eat, make sure it is quality food at least 80 percent of the time. This guarantees you the nutrients you need for good health while eliminating the toxic substances that contaminate the food chain and suppress metabolic power. Think of week 2 as a fresh start in how you value nourishing your body. Celebrate this new opportunity, knowing that you are setting a new standard for how you recognize the miracle of the sustenance that connects us all. Say goodbye to foods that do not reflect the quality, taste, and vitality you deserve, and welcome the foods that do. This does not mean you can never again eat a treat made with refined products. It simply means that your overall diet is on the path of quality, that you have chosen to bring a higher level of nourishment into the sanctuary of your home, and that anything falling short of quality is to be the exception rather than the rule.
Let's be realistic. Most of us are not always going to eat foods considered "healthy." At some point we will eat cake, cookies, pastries, and other junk food. At some point we will drink alcohol. So be it. It is better to acknowledge that this is part of our nutrition program than to pretend it is not. Let's take the middle path, which is the honest path and, for many people, the practical one. And in the times we live in, it may well be the healthiest. Really. So don't waste your energy trying to act like a saint only to treat yourself like a devil when you inevitably fall short of sainthood. If at least 80 percent of the food you eat is high quality, you will do fine. If you manage better than that, consider it a bonus.
**Exercise: Quality shopping**
During week 2, find out where the best-stocked natural food markets or food co-ops in your area are. Some of the best-known chains in the United States are Wild Oats, Whole Foods, Wild by Nature, and Trader Joe's. Many of these stores have a fresh natural-foods section designed for those of us who don't have much time to cook.
Spend several hours exploring the store. Browse, read labels, ask questions, and see which foods catch your attention. Do your best to choose organic foods.
If you have young children, take them shopping with you and include them in the process. Explain that your nutritionist said that if they want to grow up big and strong, or to have good skin and shiny hair, they should eat good-quality food. Give them choices from the treats or prepared-foods sections and let them take part in shaping your new lifestyle.
Also, find the best places to buy good-quality meats, fresh fish, freshly baked whole-grain or sourdough bread, and locally grown organic vegetables. Set up a food-shopping schedule that guarantees you have enough healthy, high-quality food at home all week. In other words, don't let shopping become a haphazard activity. Make it a priority and turn it into a special ritual.
Remember, you are trying to switch to the higher-quality versions of the foods you are going to eat anyway. If you must eat chips, buy an organic variety that is baked with olive oil. If you drink coffee, make it organic and skip the artificial sweeteners. If bagels are unavoidable, buy the freshest ones. If you drink juice, make it yourself, buy a juicer, or buy it ready-made from organic brands, especially for your children. If you use canned or frozen foods, use organic brands. It is all about making the best choices possible within the choices you are going to make anyway. Apply this philosophy to everything you eat, and watch yourself move up the nutritional health curve.
_But please, just tell me what to eat_
As you may have noticed, this is a diet program that does not tell you exactly which foods to eat or in what amounts. That is the greatest favor I can do you as a nutritionist. Empowering you to have a deeper relationship with food and with your body's own intelligence is the surest way to raise your metabolism to its highest level.
If you absolutely insist on knowing the precise, eternally correct answer to the question "What should I eat?", I have only one important piece of advice: stop worrying about it. Keep your sanity. The search for the perfect diet that will make us forever happy, healthy, and glamorous has created some heavy burdens we no longer need to carry. Many people jump from one diet to another and from one expert to another, often feeling like victims of the conflicting messages delivered by our food gurus, with no idea what to do. It is time to understand the terrain.
The field of nutrition is largely unexplored territory. It is the Wild West, no more and no less. Many of the claims we experts prize hold up for only a short time before being replaced by something newer and more interesting. That is because the science of food is always changing, just as you and I are. We are still discovering who we are and what sustains us. Perhaps it will always be that way.
So instead of feeling discouraged by the endless stream of conflicting, hard-to-follow nutrition information the experts offer us, it is better to relax and find a middle ground. Let quality be your most reliable guide. Yes, there are certain kinds of foods, in certain amounts, that are the best and most beneficial things you can eat. But that information is not found in any book, nor revealed by any expert. It is found within you. It is a form of inner attunement that takes practice and comes with time. That is what this program is about.
In the next chapter you will learn more about discovering the foods and amounts that are right for you. For now, quality nutrition is your first and most important step. Raising the quality of your food is the most practical and foolproof nutritional improvement you can make. And it has beautiful, far-reaching repercussions, because health is not an individual matter concerning only your metabolism. It extends beyond the body, as far as the eye can see and farther still. Neglecting the planet, its soil, and the food web, and failing to extend the courtesy of sharing our food with others on the planet, has pathological consequences that are recorded in our food as energy and information and come straight back to us.
What goes around comes around. This is not an imaginary concept but a literal nutritional reality. You will get whatever a food has to give you, provided those gifts were granted to the food itself. Perhaps this is the greatest nutritional secret of our times.
_Key lessons_
• Eating quality food is perhaps the most useful and foolproof strategy we can choose.
• Eating higher-quality food means greater nutritional value. When we consistently eat low-quality food, the brain registers a nutrient deficit and signals us to eat more.
• Many people who think they have willpower problems are actually suffering a deficit of nutrient-rich foods.
• Whatever you eat, choose the highest-quality version of that food.
• Above all, food is energy and information.
• Every experience in a food's history is recorded within it as energy and information. This is an important factor in determining its nutritional value.
More tips for eating and living with quality
At the start of the week, list in your journal everything that could get in the way of your intention to bring quality food into your life: "I don't have the time," "It's too expensive," "My spouse [or partner, or kids] won't go for it," "I don't know where to buy those foods," "The food won't taste the same."
Then set about, methodically and creatively, finding ways around these concerns. Make a note of all the quality restaurants and quality take-out places near your home and your work. If you like Japanese food, who has the best and freshest? What about Mexican food? Chinese food? Who has the best salads? The best homemade soups? If you are going to eat pizza, find the best pizzeria in town. If you have food delivered to work, make a point of choosing the vendors that are best in terms of quality and freshness.
Quality counts with water, too. Buy a water filter for your kitchen and use the filtered water for drinking and cooking.
Almost anything you put on your skin eventually makes its way into your body. That is why I recommend using skin-care products that are almost good enough to eat. Consider switching the following products to natural, more environmentally friendly versions: soap, shampoo, conditioner, moisturizers, cosmetics, deodorant, shaving gel, toothpaste, and mouthwash. The best places to find natural brands are health food stores and food co-ops.
Quality household products also benefit your health and reduce the toxic burden human beings have been subjected to over the past hundred years. Wherever you can, replace dish detergent, all cleaning agents, laundry detergents, bleach, drain cleaners, and similar products with more environmentally friendly alternatives. Once again, the perfect places to find these products are health food stores and food co-ops.
WEEK 3
The metabolic power of awareness
_Awareness heals._
FRITZ PERLS
One of the most remarkable scientific revelations of the past hundred years is the mathematical demonstration that the act of observing any phenomenon in the universe (whether the flight of a bird or the rotation of a planet) exerts a direct influence on that phenomenon. According to the laws of physics, we inevitably influence the bird's flight or the planet's rotation simply by focusing our attention on them. So if we have the power to nudge the orbit of a celestial body, it should come as no surprise that "vitamin A" (for "attention") also has a profound impact on the human body.
Have you ever looked in a mirror, liked what you saw, and suddenly felt your mood lift and your energy rise? That is awareness setting the chemistry of metabolism in motion. Have you ever been in a natural setting, taking in the beauty of the landscape around you, and felt an immediate, profound relaxation? That, too, is awareness activating the body's physiology. Or have you ever noticed that when someone is watching you, you act and express yourself with greater energy and attention? In that case, it is other people's awareness that affects your body's biochemistry.
Awareness is presence. It is our capacity to pay attention to what is, to experience what life is doing in the present moment. And when we bring awareness, or attention, to our experience of eating, it becomes an extraordinary metabolic force.
_La digestión comienza en la mente_
The power of awareness as a catalyst for nutrient assimilation, digestion, and calorie burning is best illustrated by a phenomenon scientists call the "cephalic phase digestive response." "Cephalic" means "of the head." The cephalic phase digestive response is simply a fancy term for the pleasures of taste, aroma, satisfaction, and the visual stimulation of a meal. In other words, it is the phase of digestion that happens in the head. Remarkably, researchers have estimated that as much as 30 to 40 percent of the total digestive response to any meal comes from the cephalic phase digestive response, that is, from our full awareness of what we are eating.
Can you recall a time when the sight of your favorite food made your mouth water or your stomach start to rumble? That is the cephalic phase digestive response. Digestion literally begins in the head as chemical and mechanical receptors on the tongue and in the oral and nasal cavities are stimulated by smelling the food, tasting it, chewing it, and noticing it. Full awareness of our food initiates the secretion of saliva, gastric acids, enzymes, and gut-associated neuropeptides, and stimulates the production of the entire range of pancreatic enzymes, including trypsin, chymotrypsin, pancreatic amylase, and lipase. It also sends blood flowing to the digestive organs, sets the stomach and intestines contracting rhythmically, and adjusts electrolyte concentrations throughout the digestive tract to receive the food.
**Awareness is metabolism.**
Let's do the math. If scientists say that 30 to 40 percent of our total digestive response to any food comes from the cephalic phase digestive response, and we choose not to pay attention to our meal, that is, we "fall asleep at the plate" and register no sensation of taste, smell, satisfaction, or visual interest, then we are metabolizing our meal at only 60 to 70 percent efficiency.
Lack of attention translates into reduced blood flow to the digestive organs, which, as we have seen, means less oxygenation and therefore a weakened metabolic force. With lower enzyme production in the gut, we become susceptible to digestive problems, intestinal disorders, lowered immunity, and fatigue.
Are you beginning to see why sleepwalking through your meals is a poor nutritional choice?
_When You Eat, Eat_
What follows is the essence of some of my favorite research illustrating the nutritional power of awareness.
The first case concerns a phenomenon called "dichotic listening." Experimental subjects are asked to concentrate while two people speak to them simultaneously: one person talks into the left ear about intergalactic space travel while the other talks into the right ear about the advantages of financial planning. If you have ever tried to listen to someone on the telephone while somebody nearby in the kitchen starts talking to you, as if you possessed the superhuman ability to take part in two conversations at once, you know what this feels like.
While relaxed, the subjects drank mineral water. The absorption of two substances, sodium and chloride, was measured in the small intestine. They assimilated them at 100 percent. When the same people were exposed to dichotic listening and then given the mineral water to drink, they showed a complete block in the absorption of sodium and chloride that lasted for up to an hour afterward. In other words, they had 0 percent absorption. The simple act of attending to two stimuli at the same time decisively altered their metabolism.
In an Italian study of digestion and mental stimulation, university students were shown a short film. Using electrogastrography, the researchers could measure each student's digestive activity before and during the film. Given a snack before the movie, the students showed normal digestive contractions. But when they ate a snack during the movie, their electrogastrography readings dropped. That is, gut motility decreased, which translates into lower enzyme production and inefficient digestion. With reduced gut motility, food takes longer to leave the body, which can lead to autotoxicity: the production of irritating, poisonous substances that are released into the bloodstream.
So if watching a movie or listening to several people at once can undermine your metabolism, what do you think happens when you eat while watching television? Or while driving? Or while working at your desk? Metabolizing a meal is like taking in a conversation. If you are talking with a friend and he isn't paying attention, you are left feeling the exchange was incomplete and wanting to continue it. At best, the essence of your conversation would have been only minimally assimilated. The same goes for food.
_More Awareness, Less Appetite_
Have you ever eaten a large meal without paying much attention to it and, after finishing, noticed that your stomach felt full but your mouth still wanted more? Have you ever wondered why the body would behave so strangely and send you this mixed message?
Well, the cephalic phase digestive response is not merely a response; it is a full-fledged nutritional requirement. The brain must experience taste, pleasure, aroma, and satisfaction in order to assess a meal properly and catalyze digestive force at maximum efficiency. When we eat too quickly or fail to notice what we are eating, the brain interprets this missed experience as hunger. It is not sensitive enough to tell us: "You devoured breakfast, inhaled lunch like a maniac, and gulped down your snack like a starving beast. You don't need any more food." The brain simply says: "I don't remember eating anything. I felt no satisfaction. Nothing happened. I'm hungry."
And so we go looking for more food.
That is why roughly nine out of ten people I have spoken with who say they have an overeating problem actually have a different problem. What is happening is that they don't eat when they eat. They pay little attention to their meals, so they fail to meet their cephalic phase digestive response requirement, which keeps them craving food. The irony is that people in this category believe they have a willpower problem. They don't. In fact, lack of willpower plays only a minor role in their overeating. Pharmaceutical companies spend millions of dollars researching and developing new appetite-suppressing compounds, and many people work hard to control their desire to eat, all of which is a monumental waste of energy. So if you have been beating yourself up for supposedly lacking willpower, it's time to stop mistreating yourself.
Put simply, the less attention you pay at the table, the more you will need to eat and the more weight you will gain.
It is clear, then, that our appetite is genetically designed to be satisfied, not suppressed. What if we abandoned a war that can never be won (the war against the need to eat) and achieved a metabolic victory by doing the opposite of what we have been taught? Give your body and soul exactly what they want, an eating experience rich in attention and awareness, and you will never have to fight yourself.
_Maybe Your Thoughts Are Making You Fat_
Have you ever heard someone say, "I gain weight just thinking about food"? Surprisingly, this may well be true. Scientists have described an interesting component of the cephalic phase digestive response, which they call the cephalic phase insulin response. As we have seen, insulin is a hormone we produce to help metabolize the carbohydrates, or sugars, in our food. When we eat foods such as pasta, bread, muffins, cookies, cake, cereal, crackers, juice, or candy, the body produces insulin right away. Insulin has another interesting function as well. In excessive amounts, it signals the body to store fat and to inhibit muscle growth.
The cephalic phase insulin response is a measurable phenomenon in which the body produces insulin merely from looking at a piece of cake or fantasizing about a bowl of pasta. Carbohydrate digestion literally begins in the mind. It is the body's way of preparing to digest food before it even touches your mouth.
Now think of the typical dieter who denies herself nourishing or satisfying food, who fails to meet her cephalic phase digestive response requirement, and who therefore fantasizes constantly about forbidden foods like desserts and sweets. That person will remain in a constant cephalic phase insulin response, producing insulin even when there are no carbohydrates or sugar to process. That means insulin levels will be artificially high and the insulin will simply sit there with nothing to do. Automatically, this chemical performs its secondary function: storing fat and inhibiting muscle growth. Add to this the stress of dieting and of denying oneself food and satisfaction, and we see the production of more cortisol, another hormone that contributes to fat storage. So by constantly fantasizing about carbohydrate-rich foods and living a stressful life, the dieter will have put in place the precise pieces of the puzzle for chronically elevated insulin and cortisol, the precursors of "non-caloric" weight gain.
The point is not to stop fantasizing about wafers and ice cream. The point is to eat them, to be aware that you are eating them, to obtain the satisfaction (the cephalic phase digestive response) your brain demands, and then to move on to another life experience. If you get what you want, you won't need to think constantly about what you don't have. It's that simple.
For many people, satisfaction is a radical concept. We have been conditioned to believe that to lose weight we must deprive ourselves of food, deny ourselves pleasure, and wage war against our appetite with every weapon at our disposal. Yet by fighting the body's biology, we create the very condition we are battling so hard to avoid. Being present and attentive when we eat, rather than absent, stimulates the metabolism and satisfies the body's innate need to be nourished.
_More Awareness, Less Weight_
Lisa, a 37-year-old attorney at a banking firm, came to me to lose weight. As a single woman, her work was her life, and she put in long hours. Lisa was lively and outgoing but, as she confessed to me, the eight pounds she had gained a few years earlier were driving her to distraction. She wanted to fall in love and start a family, but her excess weight was, to her, an unacceptable obstacle.
Lisa was frustrated because, although she was apparently doing everything right to lose weight, she had not shed a single pound in two years. Her diet went like this: she had a cup of coffee in the morning and no other breakfast, or grabbed a yogurt on her way out the door. She also skipped lunch several times a week; on the few occasions she did eat lunch, she had a small salad or half a turkey sandwich. By three in the afternoon she was famished and often had a headache. At that point she would eat some chips, cookies, or candy with a diet soda. She finished work around seven in the evening, exercised intensely at the gym for an hour, and sat down to dinner around 8:30 or nine at night. Dinner usually consisted of some Chinese or Italian food, eaten in front of the television. A few hours later she was hungry again and snacked on popcorn, chips, or frozen yogurt until bedtime.
So although Lisa consumed few total calories and exercised hard every day, she still carried eight extra pounds. She felt that her dinner and her after-dinner snacking were her only sin, but by nightfall she was so hungry that she could find no way to eat less. She thought she might have a willpower problem and that I could help her with it, that perhaps I could teach her some magic trick to take away her hunger after work.
I asked Lisa whether she ate quickly, moderately, or slowly. Her answer: "very fast." Then I asked whether she enjoyed food. She replied, with almost childlike enthusiasm, that she loved it. As you can imagine, my next question was: "If you love food so much, why don't you eat real food and give it the time it deserves? Why do you rush through every meal? Why do you wolf down your dinner in front of the television? Why not make sure that something that truly satisfies you lasts as long as it needs to, so you can fully savor the experience?"
I pointed out that, while her food choices were not the best, the root of her problem was not nutrition per se but awareness. Lisa was "never present" when she ate. And that lack of attention was what kept her on a diet severely lacking in quality. She thought her problem was at night, when she couldn't stop eating. Her real problem, however, was during the day. By skipping meals, consuming too few calories, depriving herself of micronutrients (vitamins, minerals) and macronutrients (proteins, fats), and keeping her diet boring, Lisa was herself creating the conditions for unleashing nighttime gluttony. At breakfast and lunch she was not meeting her cephalic phase needs (taste, pleasure, and awareness). Her brain was interpreting this deprivation of nutrients and attention as hunger. Finally, at night, Lisa could no longer fight the natural, innate desire to eat. At that point her "resistance" gave way and the floodgates swung open. Her body cried out for any food that would make up for its nutrient shortfall.
Do you see how Lisa's intense nighttime need to eat was nothing more than her body's attempt to compensate for an earlier imbalance? Unfortunately, when she finally did eat a substantial meal, she gulped it down at top speed in front of the television, which again produced a lack of cephalic phase satisfaction and the desire for yet another "meal," that is, one treat after another shortly after dinner.
The remedy I proposed to Lisa was to eat every meal and snack with attention. Never again to skip breakfast or lunch. To improve the quality of what she ate. To eat a heartier breakfast and lunch, which meant including quality proteins and fats: eggs, fish, nut butters, salads with beans or tofu, or quality meats. To slow down. To imagine that the race was over and she could relax at the finish line. To forget the television during dinner. To light a candle, put on music, invite a friend to eat with her, and feel nourished.
Lisa looked at me as if I had asked her to dance naked in front of her whole family. But read on for what she herself said about what happened when she followed my suggestions.
"I thought my worst problem would be eating more and enjoying it, because I can't help worrying about calories and fat grams. In fact, what cost me the most was paying attention to the food. I realized I have a sort of attention deficit when it comes to eating. I really wanted all this to work, because what I had been doing definitely wasn't working. It didn't take me long to discover, for the first time, that I could find within myself, in relation to food, a calm I never imagined existed."
By eating with awareness and satisfying her cephalic phase digestion, Lisa stopped being so hungry at three in the afternoon and after dinner. She didn't need more willpower; what she needed was more attention. No longer depriving herself of nutrients or starving herself during the day, Lisa watched her headaches quickly disappear. She now consumed more calories and more healthy fats, and she felt happier. In eight weeks, Lisa lost exactly eight pounds. A strategy of nutritional changes alone would never have gotten Lisa where she wanted to go. Awareness was the key to her results.
_The Gut Brain_
So far we have talked about the awareness experienced in the brain. But there is another kind of intelligence that is an equally potent metabolic force, and it resides in the belly. Have you ever felt "butterflies" in your stomach? Or a "lump" in your throat? Have you ever acted on a strong, undeniable "gut instinct" about a situation or person? Few people would claim to have a "joint instinct" or a "kidney instinct," but the expression "gut instinct" has equivalents in many cultures around the world as a source of intuitive knowledge. It turns out that gut "thoughts" or instincts are not a far-fetched notion but a physiological fact. Scientists have discovered that, rather than one brain, we have two: the other is located in the digestive tract.
The gut brain, known as the enteric nervous system (ENS), lies beneath the mucosal lining and between the muscle layers of the esophagus, the stomach, and the small and large intestines. The enteric nervous system is a dense, intricate network of neurons and neurochemicals that sense and control what happens in the digestive tract and, remarkably, can sense and respond to what is happening elsewhere in the body, including the brain. Astonishingly, when scientists finally counted the nerve cells in the gut brain, they found it contained more than one hundred million neurons, more than in the spinal cord. Most fascinating of all, researchers have observed significantly more neural traffic flowing from the ENS to the head brain than from the head brain to the ENS. In other words, rather than the head dictating to the digestive system what to eat and how to metabolize it, the command center sits in the belly.
In addition to an extensive network of neurons, the entire digestive tract is lined with cells that produce and receive a variety of neuropeptides and neurochemicals, the same ones we once thought were found only in the brain. These include serotonin, dopamine, norepinephrine, and glutamate. More surprising still, many hormones and chemicals once thought to exist only in the gut were later found to be active in the brain. Among them: insulin, cholecystokinin, vasoactive intestinal peptide, motilin, gastrin, somatostatin, thyrotropin-releasing hormone, neurotensin, secretin, substance P, glucagon, and bombesin.
The enteric nervous system (the gut brain) and the central nervous system (the head brain) share another fascinating similarity. During sleep, the head brain cycles through 90-minute periods of slow-wave sleep frequencies, each followed immediately by the rapid eye movement (REM) phase in which dreams occur. The gut brain likewise passes each night through 90-minute cycles of slow-wave muscular contractions followed by brief bursts of rapid muscle movement. Could it be that the gut dreams?
Another interesting discovery is that the entire digestive tract is lined with specialized cells that produce and receive endorphins and enkephalins, chemicals that give rise to many different sensations, such as joy, satisfaction, and pain relief. Most of the digestive sensations we are aware of tend to be negative, such as intestinal upset and discomfort. But the warm gut feelings we sometimes have after a satisfying meal or an exciting encounter occur, in part, when the enteric nervous system releases pleasure-producing chemicals that are picked up by cells near and far.
As many of us know, the gut often serves as a barometer of our emotional states and stress levels. People who suffer from peptic ulcers, irritable bowel syndrome, heartburn, upset stomachs, and other disorders can confirm this. So when we say we don't have the "stomach" for a situation, or that something "makes us sick," we are expressing real-life psychophysiological sensations originating in the enteric nervous system, the gut brain. Perhaps this is why the gut produces in abundance a class of chemicals known as benzodiazepines. These psychoactive substances are the active ingredients in medications like Valium and Xanax. That's right: your gut naturally produces these chemicals, in their exact chemical form, no prescription required and at no extra charge.
In Japan, the midsection of the torso is considered the seat of wisdom and our center of gravity, both physical and spiritual. This place of supreme balance, known as the _hara_, is located around a point just below the navel. The Japanese literally refer to the _hara_ as their seat of higher thought, just as we Westerners would point to the head as the location of "central command." In other words, when Westerners say "I know" in a convincing tone, we point a finger at the head. When the Japanese say "I know," they point to the belly. In doing so, the Japanese tap, in part, into the neurochemical potential of the gut brain. Westerners express this idea to a different degree when we say someone "has the stomach for it." We would not praise another person's courage by saying they "have the kidney" or "have the spleen" for it.
What all this means is that we carry an immense brain capacity in the belly, and in most cases that capacity goes underused. You have surely heard it said that we use less than 10 percent of our brain capacity. Well, the same applies to our use of the gut brain's potential.
So if you think you have a problem because your brain cannot process all the contradictory diet information the media and the experts bombard us with, think again. You don't actually have that problem. Your brain is not equipped to process all that information on its own. It was not designed to devise a "high-information" diet. When it comes to food, we are physiologically designed to listen to the gut brain. The head brain plays only a supporting role here.
You will never see a lion confused and anxious over whether zebra or buffalo would make the more nutritious dinner, or whether it should avoid eating hippopotamus because of its high fat content. Animals instinctively know what to eat. So do we. We simply don't realize that we know.
Usually when people decide to focus on their belly, it is because they want to shrink it or strengthen it. But let's get our priorities straight. Make your belly smarter before you make it stronger. The less intelligent your gut, the harder it will be for it to find its proper tone. Well-defined muscles are smart muscles. Americans' obsession with strengthening the abdomen is a misdirected desire to use the wisdom of the midsection more efficiently. When we have the "stomach" to trust our ability to access the knowledge of the gut, ego-fed fears dissolve and our true self-respect is revealed.
_Week 3: Your Primary Task_
This week is your opportunity to reap the metabolic rewards of attention. Your primary task will be to be present with your food and to access the enteric nervous system, the core of your gut wisdom. You will learn to use relaxed awareness and the gut brain to help you determine what foods to eat and in what quantities.
**Exercise: Stay Awake at the Plate**
This will be your most important exercise of the week: at every meal and every snack, choose to be present.
Notice your food. Look at it, touch it, and taste it with presence. Connect with it. Stay alert to your surroundings. Absorb all the nutrients of your meal: the colors and textures, the people you are eating with and their conversations, the whole atmosphere and every nuance of the eating experience. If you catch yourself dwelling on the past or scheming about the future, stop. If you are lost in fantasy, come back to Earth and treat your food well. Stay peacefully alert whatever you eat, wherever you eat, and whomever you eat with. Notice the moments when you slip into autopilot while eating. At those moments, simply remind yourself to wake up.
Even if you are eating some "forbidden" food or a mountain of ice cream, pay attention to it all the same. The more present you are in those moments, and the more you savor the experience, the sooner you will satisfy your cephalic phase digestive response requirement and the less you will need to eat. If you are used to watching television or reading while you eat, try spending the week without TV or the newspaper at meals and notice the difference in your body. If you constantly eat in a hurry, sit down calmly and spend some time with your food. I hope the idea of paying attention does not strike you as an annoying diet strategy or a form of restriction or punishment. It is good for you. It is personally satisfying. More than that, it inspires the metabolism.
During this week it would be worthwhile to ask yourself: "Why wouldn't I pay attention? What drives me to eat unconsciously?" Sometimes we choose not to be present because we want to escape reality, for example, a feeling or event we find uncomfortable. In most cases, though, inattentive eating is a habit acquired from a culture in love with speed. Most likely, after a week of experimenting with paying attention to your food, you will have created a new habit. You will naturally remind yourself to stay "awake at the plate."
**Exercise: Gut Wisdom Inventory**
As we age, the brain actually grows smarter. It has accumulated more life experience, information, and wisdom to draw on. The same is true of the gut brain. Over the years your gut brain accumulates an immense body of data about what works and what doesn't. It knows your nutrient and food needs. It understands what gives you energy and what drains it. It identifies the ingredients you are sensitive or allergic to. It remembers how much low-quality food you can get away with. It notices what is too much. It determines how much alcohol, sugar, and caffeine you can properly handle. It has taken note of every meal that agreed with you and every one that didn't. Your gut brain is your own in-house nutrition expert.
With this in mind, take an inventory of all the important, relevant lessons your gut brain has learned about food and nutrition over your lifetime. Be specific and make your list as complete as possible. What foods make you feel your best? What foods bring you down? Is there a food that once agreed with you but is now hard to tolerate? Are there "forbidden" foods you simply cannot live without? What times of day are best for you to eat? What food combinations give you digestive trouble? What foods do you know your body needs in greater quantities? Think of this exercise as writing your body's nutrition owner's manual.
As you consider what to include, breathe deeply into your abdomen. Relax your mind and ask your gut to do the talking. Let your gut brain express itself and acknowledge the wisdom of a lifetime's accumulated experience.
When you have finished, reread and absorb the wisdom and information you have revealed to yourself. Does any detail stand out? Do you find any new insight about yourself and your metabolism? Any particular truth about nourishing your body that you would do well to remember more often?
The first practical step in tapping the intelligence of the enteric nervous system is breathing with the abdomen. The yoga tradition has a saying that has helped practitioners reach ever higher levels of mastery over the body: "Where attention goes, energy flows." Decades of biofeedback research have definitively demonstrated this axiom: by concentrating on almost any area of the body, we can increase blood circulation there, alter its bioelectric potential, and influence the secretion of numerous biochemicals. Breathing with the abdomen brings more oxygen into the belly and activates the enteric nervous system. It makes the gut work smarter. Consider this: an oxygen deficit can damage the head brain, while abundant oxygen improves memory, performance, and creativity. The same applies to the gut brain. Deprive the enteric nervous system of oxygen and it grows sluggish. It then asks for more food in order to register sensations of pleasure, satisfaction, and fullness. Fill the abdomen with oxygen and you keep the gut brain active and alert.
_Access Your Gut Wisdom_
A Japanese multimillionaire once declared that he followed a simple formula for success. Before going ahead with any deal or project, he would first "swallow" it to see how it digested. If what he "swallowed" metabolized well, he closed the deal. But at the slightest hint of indigestion, he ended the negotiations. To you and me this may seem a superstitious way to do business, but it is a revealing example of how the genius of the enteric nervous system can be put to use.
Learning to use your gut brain is no different from discovering new ways to harness the innate capacity of your actual brain. It takes time, concentration, and a bit of trial and error. For many of us, complaints such as indigestion, bloating, heartburn, and gas are not so much digestive problems as problems of awareness, that is, a failure to attend to the constant feedback available from the enteric nervous system.
It is time to make full use of our different forms of intelligence when it comes to nutrition and health. The key to accessing the abundant intelligence of the enteric nervous system is respect. Allow for the possibility that your gut brain is a trustworthy adviser. Respect the built-in cosmic design, the "sage" you carry within, so it can teach you how to absorb and assimilate the world. Settle down and pay attention to the messages that await you, to the genius that springs from the core of your being.
Once you have centered yourself in the breath, the next step is simple: ask the gut brain for advice. Silently ask the enteric nervous system: "Will this food I want to eat do me good right now?" or "What food would serve my body best right now?" or "In what amount will this food sit best with me?" Keep your mind quiet and allow the answers to rise from the revealing source of your gut. Some people describe subtle intuitive sensations telling them whether or not to choose a particular food. Others receive clear, unmistakable feedback from the enteric nervous system that feels like an emphatic yes or no. Before actually "wolfing down" a meal or choosing a food, swallow it symbolically and observe the results. Notice what kind of gut feeling comes over you.
If you worry that asking your enteric nervous system what to eat might yield a wrong answer, relax. You have been wrong more than once in your life and nothing terrible happened. This is training. You may think: "But what if I ask my gut brain and it tells me to eat a lot of chocolate?" In such cases, it would be wise not to follow instructions that strike you as suspect, and to try to determine which of the two brains was really speaking. Or you could simply follow the order and note the results. Evolution works by trial and error. Learn to be responsible and to trust yourself. Personal empowerment equals metabolic empowerment.
To summarize:
1. Before any meal or snack, settle into your seat and take five deep abdominal breaths.
2. Let the breath flow naturally and fully in and out without holding or forcing it. Notice how different it feels to keep the digestive system oxygenated.
3. Let your mind rest and seek the wise counsel of your enteric nervous system. Ask yourself silently: "What foods would nourish me best right now?" "Is this particular food a good choice?" "Is this a good combination of foods?"
4. Let the answers come effortlessly, keeping the mind calm and uncensored.
5. Follow the instructions of your gut wisdom and note the results.
Remember that you are accessing a center of intelligence most of us are not used to using. As when learning another language or a new dance style, you will probably feel a bit clumsy and disoriented. Mistakes are really opportunities to learn, even if they don't seem so. The more you call on the intelligence of your enteric nervous system, the smarter it becomes. Consider this a lifelong practice, in which your gut brain becomes an important consultant in all matters of nutrition. So no matter what the experts say, and no matter how confusing their advice, you will always have your own expert who gets the final word. That is how we operate anyway: something within us decides whom we will listen to. By accessing gut wisdom, we learn to make thoughtful, free, and well-informed choices.
Another version of this technique is to ask the enteric nervous system for feedback. Once you have taken a bite of a particular food or dish, ask your gut: "How was that? Is this food good for me?" After the meal, ask yourself: "Is there any correlation between what I ate and how I feel now?" Many people report that requesting feedback has yielded very specific nutritional facts and insights. For example, we sometimes eat certain food combinations that irritate the digestive system and suppress metabolism, but we don't know it until we ask for feedback. There are many helpful books on food combining that tell us what to eat but, in my experience, each person's metabolism is unique. You have your own food-combining system that, ultimately, only you can discover.
**Exercise: Eat Until You Are Full of Energy**
Most people eat until their stomach is full. Whenever this happens, we must generate greater metabolic force to process such an abundant meal. More metabolic force means more oxygen and blood must be routed to the digestive organs. This extra blood flow has to be drawn away from the extremities, the arms and legs and, to a lesser degree, the head. And what happens when we reduce blood flow to the head? We feel tired and sluggish. Eating to fullness can also cause digestive upset, heartburn, stress-induced digestive shutdown, and impaired nutrient metabolism. So instead of eating until you are full of food, eat until you are full of energy.
The yogis of ancient India described a special moment in any meal at which, if you stop eating right then, you can leave the table with more prana (more energy or life force) than you had before eating. Finding this "energy point" takes some experimentation, but you will be well rewarded once you prove its existence to yourself.
This technique requires that we consult our gut wisdom constantly throughout the meal. Ask the enteric nervous system: "How do I feel? How is my energy level? Do I still feel light? Am I starting to feel heavy?" Gauge the point at which you feel full of energy but not full of food. Your stomach will feel light; you will feel rather "lively"; you will still be slightly hungry, but you will channel that hunger and desire for more food into some other activity after eating.
Conversely, with just one bite past the optimal point, you will begin to feel heavier.
Remember to address these questions to your gut, not your head. The more you involve your enteric nervous system, the more it learns. The key is simply to take enough interest in what your gut brain has to tell you. By accumulating the experience of every meal over a week, you will sharpen your instinct for finding the energy point. The reward will be a feeling of energetic lightness and the satisfaction of having been smarter, not tougher, about nourishing yourself in a positive way. The point is not to restrict calories, though you will very likely accomplish that too. The point is to access your gut wisdom and let it guide you.
This technique is especially useful if you know you will need your full mental power at a certain time of day: for a meeting, a negotiation, or an exam, say. In short, the meal before such occasions should be light. This guarantees greater blood flow to your brain and makes you sharper and more alert. Conversely, if you are in a business negotiation and want to come out ahead, serve an immense, delicious buffet to your unsuspecting counterparts. They will be most grateful.
To summarize:
1. At the start of your meal, set the intention to eat until you are full of energy.
2. Check your energy level at least four times while eating. Energy level means your sense of vitality, lightness, and mental agility.
3. Check your level of satisfaction while eating.
4. Check your level of fullness while eating.
5. Estimate the point at which you can stop eating and feel more energy than when you began. You will still feel slightly hungry, you will not be completely sated, you may feel the desire to eat more, and you will channel that "hunger" and "desire for more" into your next activity. You will know you have passed the optimal energy point if you begin to feel heavy, sluggish, tired, dense, or unfocused.
Eating until you are full of energy is a wonderful tool that will free you from confusion and fear around the question "How much should I eat?" For most people, eating precisely measured amounts has become a limitation and a drain on energy. How can we expect exact portions to work if we eat in a state of stress, without awareness or satisfaction, or with too little oxygen because of shallow breathing? As we have seen, the body will absolutely demand that we eat more under any of those circumstances, and the price will be reduced digestive efficiency and calorie-burning capacity. If the simple act of controlling portions allowed us to lose weight effectively, we would all have followed that instruction and it would have worked long ago. But something has been missing. Fortify your diet with "vitamin C" (for "consciousness") and your metabolic power will truly increase.
It's worth noting how most diet and nutrition books on the market have us counting grams of protein, grams of fat, grams of carbohydrate, portions, calories, servings, and points. You would think we were consuming a pile of numbers. It is as if eating were a sporting event in which we keep score, or a business transaction in which we record debits and credits. We are literally "digesting numbers."
Of course, this kind of bookkeeping is appropriate in some cases, but most of us are ready to move on to a new level of freedom. Visit any country on this planet whose inhabitants eat a traditional, quality diet (parts of Asia, Europe, Central America, Australia, New Zealand, Iceland, and the Pacific Rim) and you will find people who are lean, healthy, happy with their bodies, and completely astonished that anyone would bother to count calories.
It is time to learn from the true experts. Forget the numbers. Stop counting what you should eat. Live. Eat. Believe in yourself. Find your natural intelligence. Trust it. Respect it. Honor the marvelous process of nourishment taking place within us. Welcome the liveliness, succulence, and naturalness of meals. You and I have an instinctive, effortless appetite that will speak to us clearly once we let go of fear and pay attention. Take the leap.
_Key Lessons_
• The cephalic-phase digestive response is compelling evidence that our awareness of meals markedly influences their nutritional value.
• For optimal nutritional metabolism, when you eat, do nothing else.
• The less aware we are of our meals as we eat them, the more the brain signals us to overeat.
• The enteric nervous system (ENS) is a separate, though interconnected, brain located in the digestive tract.
• The enteric nervous system holds an immense store of wisdom and information about our nutritional and metabolic needs.
• We can access gut wisdom through mind-body awareness to determine which foods will serve us best, and in what amounts we should eat them.
WEEK 4
The Metabolic Power of Rhythm
The moon invented natural rhythm.
Civilization uninvented it.
TOM ROBBINS
Rhythm is everywhere. Every particle of our being moves and vibrates, dances and sings, keeping time with the most dazzling symphony ever conceived. All of our biology runs like fantastic clockwork, with precise chemical and hormonal rhythms whose timing is critical to our survival and well-being. The beating of the heart is a rhythm. The working of your lungs, inhaling and exhaling the breath of life, is a rhythm. The electrochemical pulses of the brain are a rhythm. So are the menstrual cycle, walking and sleeping, digestion and elimination, and the contraction and expansion of every cell, blood vessel, and organ in the body. Any interference with these rhythms can cause illness or even death.
Control the rhythm and you control the metabolism.
Indeed, many of our nutritional ills (weight gain, fatigue, digestive complaints, carbohydrate cravings, overeating) can be resolved by getting in sync with the rhythms that can regenerate us naturally and effortlessly. Let us look at how best to understand and harness this important metabolic force.
_Hot Rhythms_
One of the simplest and most reliable ways to measure the metabolic rate of the human body is to take its temperature. The higher the temperature, the stronger the metabolism. Recall that the Latin-derived term for the middle of the torso, the solar plexus, means "place that receives the sun." In other words, we have long known that the basic design of the human form includes a device for capturing the sun's energy. The more efficiently we harness the sun's heat, the better our digestion, assimilation, and calorie-burning capacity.
It is no accident that we use temperature metaphors to describe what excites us. We call an energetic person a "ball of fire" and an attractive person "hot"; we feel the "warmth" of some people and the "coldness" of others.
As a result of our evolution, body temperature follows a rhythm that is constant and predictable for most people. This daily rhythmic fluctuation reveals some important details that can help us unlock our metabolic potential. While we sleep through the night and early morning, body temperature drops. It makes sense that our bodies run cooler at those hours, because we are not busy hunting animals in the jungle or hunting bargains in the shops. Our muscles have very little to do; the body is in a state of rest, healing, and repairing all its tissues. We keep burning calories while we sleep, but never in the quantities we burn while awake. During sleep the body is in a fasting state, unless we ate a large meal just before bed.
From the moment we open our eyes in the morning, body temperature automatically begins to rise. That is the same as saying our metabolism wakes up with us. It makes biological sense, because now the sun is rising: it is time to find food, find a mate, fight, and perhaps do good deeds. Even if you stayed in bed without moving all day, your temperature and metabolism would rise anyway, because we are programmed to follow the rhythms of the sun.
Since body temperature naturally rises in the morning, it is smart to eat at that hour if you are trying to lose weight. Putting food into your digestive system raises the metabolic rate even further and supplies the body with the nutrients it is already preparing to process. Think of your body as a furnace: when you add fuel, the heat goes up.
Of course, every nutritional rule has its exception. Many people living in warm climates do fine skipping breakfast, or eating something light, or just fruit. You may notice that a substantial breakfast suits you in the colder months, while you prefer to eat lightly in the morning during summer. You may also go through phases in which your first meal of the day is lunch, and that can be enough, until your metabolism enters its next phase.
Body temperature keeps climbing slowly and steadily until it peaks around noon. It reaches its maximum at the precise moment the sun is highest in the sky: a little-known scientific fact that underscores our deep connection with the cosmos. Our digestive force is therefore at its hottest at lunchtime. That is why it makes sense to eat our largest meal then, when our capacity to process it is at its peak.
After our midday metabolic zenith, body temperature falls between roughly two and five in the afternoon. Not surprisingly, just as we feel most awake when body temperature rises, we feel sleepiest when it falls. So if you have ever thought something was wrong because your energy dips between two and five, relax: it is perfectly normal. Almost everyone feels tired at those hours. It is the rhythm of the human body. Lions love to lie down and rest while they assimilate what they have hunted. So do we.
The body's energy (metabolism), in the form of blood flow and oxygenation, is diverted toward digestion after the midday meal. The result is that we feel tired. In many countries of Europe and Latin America, people prefer to eat their largest meal at lunchtime, the ideal moment for digestion and calorie processing. Then they take a siesta. Shops close for a while, social activity winds down, and some people sleep for a bit. In this way they cooperate with the body's natural rhythms.
Entire cultures are designed around digestive rhythms.
Except ours.
Most of us Americans load up on caffeine or sugar during the two-to-five metabolic dip, forcing ourselves to push through fatigue in service of a lifestyle that overvalues going full speed. Can you imagine what life would be like if we could relax during that time and momentarily set aside the pursuit of achievement and conquest? Numerous studies have shown that one or two rest periods of 15 to 20 minutes during the day greatly enhance cognitive function, physical performance, mood, and energy. You don't even need to sleep during that time. It is simply a matter of resting, being still, tuning out external stimuli, and recharging your batteries.
Put simply, rest is a metabolic enhancer.
Between four and six in the evening, body temperature starts to rise again. That is when most people feel their energy return. It is also when the English take their tea break. It makes perfect sense to take caffeine at that moment, when metabolism is speeding up anyway. Toward nine at night, body temperature begins to fall again in preparation for sleep. In fact, sleep research shows that we can only fall soundly asleep when temperature is on the way down. Anything that raises body temperature late at night will therefore work against good sleep. Remember that eating raises body temperature: a large meal before bed undermines sleep. Here again we see how we Americans do things backwards. We tend to eat little or no breakfast, a moderate lunch, and in most cases a big dinner before going to sleep. And that is exactly what you would do if your goal were poor sleep and weight gain.
_It's All in the Timing_
As every musician, scientist, and mechanic knows, rhythm is measured in quantities per unit of time. Heart rate is counted in beats per minute. The speed of your car is measured in miles or kilometers per hour. Although there is no official way to gauge how well we metabolize and burn calories at different times of day, I can assure you that metabolism, too, is a matter of timing.
**When it comes to eating, when matters as much as what.**
In one typical study, researchers put a group of people on a 2,000-calorie diet. In the first part of the study, the subjects could consume their 2,000 calories only at breakfast. They ate nothing else for the rest of the day. On this single morning meal, all of them lost weight or maintained their weight. In the second phase, the same people followed the same 2,000-calorie diet, except this time they could eat it only at dinner. Going all day without food and then eating at night, every one of the subjects gained weight.
Do you see why counting calories in hopes of losing weight can be a waste of energy?
Timing is everything. Sumo wrestlers have known for centuries that large, late-night meals give them the physical advantage they most covet: fatness. Put simply, we burn calories less efficiently at night.
One important aspect of metabolism that popular science tends to overlook is that exercising your digestive function, especially at the right time, strengthens the metabolism. Nutritional value lies not only in the vitamins and nutrients our foods contain; it is also generated in the process the body goes through to pulverize, digest, assimilate, and eliminate a meal. Eating is like exercise, and foods are the weights your body must "lift" to build its metabolic strength. When we stop giving the gastrointestinal tract its proper workout, it loses its tone and our metabolism grows weak.
If you wanted to get the most out of physical exercise, you would not schedule your workout for the time of day when you are most tired. Likewise, if you want the greatest metabolic benefit from eating, do not eat your most substantial, nutrient-rich meal at night, when your digestion is winding down. Unless you are seriously considering a career as a sumo wrestler, I suggest you abandon that diet immediately. Eating little during the day and a lot at night will never get you where you want to go when it comes to optimizing energy and burning calories.
_The First Meal Sets the Rhythm_
Say you wake up in the morning and decide to skip breakfast. You tell yourself: "Well, I'm not hungry anyway; I'll just have some coffee, maybe a little cereal or a muffin or a bagel. If I eat no more than this until lunch, I'll finally manage to lose weight."
Not a chance.
Body temperature rises naturally in the morning to prepare you for the resurrection of metabolism. On receiving little or no food, the body worries. It says something like: "Hey, I thought I was gearing up to raise my metabolism with a morning meal. I thought food was plentiful. But none has arrived. I must be shipwrecked on a desert island. Or maybe there's a famine. I'd better slow the metabolism, store fat, and build no muscle tissue, because lean times are ahead."
This genetic survival programming is an excellent mechanism for keeping life going in hard times. When the brain senses trouble with the food supply, it launches the simplest, most effective metabolic reprogramming for conserving energy: store fat and forget about building muscle. Exactly the opposite of what you are trying to do by depriving yourself of food.
To make matters worse for would-be weight losers, many people eat a breakfast consisting of a single ingredient: coffee. Caffeine by itself raises cortisol levels. The promoters of coffee consumption don't want you to know this (I used to work for them), because it essentially means that coffee can chemically mimic the stress response and cause abdominal weight gain. This is not to say coffee is harmful. It only means that when you combine lack of food (survival response: elevated cortisol), anxiety (stress response: elevated cortisol), and caffeine (mimics the stress response: elevated cortisol), you have three factors whose combined effects send cortisol production through the roof, suppressing digestive metabolism and contributing to weight gain.
Again and again we see the importance of cortisol levels for health and weight. Cortisol is not a harmful substance. It is an integral component of the living human organism. Without it we could not exist. In the right amounts, it helps keep every major system of the body working properly. But when we produce an excess of cortisol, we age prematurely, wear out our weakest links, and accumulate weight around the abdomen.
Strange as it may seem, the chemicals that wreak the most havoc in our lives, and prove most toxic, are the ones we produce ourselves. That is why the world's big pharmaceutical companies have been working to perfect kits for testing your own cortisol levels at home. And of course, if you find your cortisol level is too high, you could buy whatever drug they have devised to lower it. But you need not wait for the next panacea to improve your metabolism and lower your weight. No drug has ever achieved that effect, and none ever will. Simply follow the body's innate rhythms and you will free yourself while helping put the diet-pill peddlers out of business for good.
Now let's say it's lunchtime. You have had your small or negligible breakfast and then, perhaps, a second cup of coffee midmorning. You have some energy and feel neither the need nor the desire for a big lunch. Maybe you think you are doing well by controlling your calorie intake; maybe you don't have much time for lunch anyway, so why not have half a sandwich, or a salad with oil-free dressing, or eat lunch later in the day, at two or three in the afternoon?
If this is your strategy, what seems sensible actually works against you. First, the body is designed to digest and burn calories optimally when the sun is at its highest point in the sky. By not putting fuel in the furnace at that moment, or simply by not eating enough, you have missed your best metabolic opportunity, roughly from 12:00 to 1:30 in the afternoon. Missing this window is like getting all dressed up with nowhere to go. Most likely, by three or four in the afternoon you will be very hungry, perhaps irritable or headachy, and will reach for a snack that does you no good. In other words, you will be suffering from arrhythmia: being out of sync with your natural circadian current.
Moreover, a tiny breakfast and a small or late lunch guarantee that you will be ravenous at night. Many people who follow this arrhythmic sequence of events find they must eat a substantial snack before dinner because they are too hungry to wait for the meal itself, or they simply have a voracious nighttime appetite and eat an enormous, sumo-style dinner.
_The Metabolic Gift of Sleep_
One of the drawbacks of eating a large volume of food before bed is that it costs us some of sleep's finest metabolic gifts. While you sleep at night, your metabolism focuses on the maintenance, detoxification, repair, and growth of your tissues and organs. When you build new muscle and bone tissue, you do it while you sleep. The liver, our chief organ of detoxification, does most of its work at night and in the early morning. Of all the factors that drive our metabolism, sleep gets neither the most publicity nor the most attention, but we pay the price if we fail to honor its rhythm.
When you eat a large meal before sleeping, much of the metabolic energy normally invested in maintenance, detoxification, and growth is necessarily diverted to digestion. That is simply how the body works. Short-term survival needs override long-term needs. So, with excess blood flow and a metabolism focused on processing your meal while you sleep, you will most likely wake up feeling congested and heavy because you did not fully detoxify overnight. The period between dinner and breakfast is nothing less than the fast that evolution intended, because the fasting state is the ideal biological setting for rebuilding the body. That is why the morning meal is called "break-fast": by eating in the morning, we bring this necessary fast to an end.
So if you get up feeling tired and toxic from a late, heavy dinner, eaten because you did not eat adequately and calmly during the rest of the day, it is natural to keep repeating this arrhythmic pattern. In the morning you won't feel hungry, because your body will still be in detox mode when it should be preparing for the metabolic stimulus of eating. The body will then interpret lunch as breakfast, and dinner as lunch, that is, as the time for the largest meal. Some time after the dinner your body read as lunch, you will feel like having "dinner" and end up snacking on treats late into the night.
You have surely heard nutritionists recommend eating dinner about four hours before going to sleep. Those four hours are enough for most people to metabolize the meal. That way you can go to bed without raising your body temperature through the metabolic effect of food, improving your chances of restorative sleep. You will also allow your body to do what it is programmed to do while you sleep (healing, detoxification, rebuilding, and so on) without having to spend its vital metabolic force on digestion.
To achieve this, you may need to retrain your body and reorient your lifestyle. Try eating an early, small dinner and a more substantial breakfast. When you take time for a relaxed, satisfying lunch, a light dinner comes more easily. If you know you will have no choice but to eat dinner late, because that is the schedule and there is no way around it, you can still use a trick that always works: have a good snack about two or three hours before dinner, then eat less at dinner. The snack will curb your evening appetite so that, in effect, you transfer some calories from dinner and "spend" them earlier, at a time when you can use them better and will burn them anyway. This strategy is also useful if you come home from work feeling too hungry to wait for dinner. By a substantial snack I mean any food with some protein or fat: nuts and seeds, trail mix, peanut or almond butter with crackers or fruit, yogurt, hummus, guacamole, or bean dip.
Because of our work style, many of us ignore food and nutrition as we hurl ourselves frantically through our daily occupations. But the moment of truth always arrives. The instant we get home from work, the brain finally has permission to attend to our needs. But instead of calmly informing us that we have neglected to feed the body and nourish the soul in keeping with the rhythms of the day, it starts bouncing around like a neglected dog, barking "I'm hungry!" The ravenous sensations we feel can be overwhelming and make us overeat. Then we feel guilty and try to make up for our lack of willpower and control with a stricter exercise regimen.
¿Se da cuenta de que muchas veces nuestras soluciones a los problemas nutricionales no tienen en realidad nada que ver con el verdadero problema? ¿Está clara la forma en que nos castigamos por razones equivocadas en lo que respecta a la comida y el ejercicio?
Al planear una merienda en las últimas horas de la tarde uno le asesta un golpe preventivo al hábito de comer en exceso y sin control después del trabajo. De este modo, uno opta conscientemente por ejercer su derecho universal de nutrirse, con lo que impone un cortocircuito al hábito de privarse de comida para luego devorarla. Esto también constituye una importante afirmación de que su trabajo no está por encima de su salud.
_Rhythm works_
Peter, a 52-year-old business consultant, splits his time among New York, London, and Florida. With family and business in each place, Peter's life consists of stretches of heavy travel and work interspersed with downtime that sometimes lasts for months. He came to see me because of chronic bloating after meals, weight gain around the belly, carbohydrate cravings, and fatigue. Peter is a go-getter, so he was constantly exploring ways to manage his symptoms: supplements, colon-cleansing programs, low-calorie diets, fruit-only breakfasts, and so on. He got some results from all of these approaches, but the weight gain, fatigue, and bloating inevitably returned. He was fed up with going back and forth; he couldn't understand why everything worked at first and then failed, and he was tired of feeling tired.
When I asked Peter about his diet, he described an interesting picture. He ate breakfast erratically, some days yes, some days no. When he did, it was a croissant and coffee. The same went for lunch: sometimes he ate it, sometimes not. He usually had a coffee if he skipped lunch, and perhaps a salad late in the afternoon, or some cheese or a sweet. When he did have lunch, it was a turkey sandwich or pasta. He tended to feel irritable and moody by mid-afternoon. By six in the evening he was starving. He would eat an enormous dinner and go to bed early, feeling stuffed and bloated. Surprisingly, even in the weeks when Peter wasn't working, he insisted on keeping this erratic schedule despite having plenty of time during the day to plan his meals.
Most nutritionists or doctors would follow certain fairly predictable strategies: plan low-calorie meals for weight loss, run allergy tests or gastrointestinal workups for the bloating, and prescribe a long list of supplements or medications for the fatigue and depression. In fact, Peter had already been through all of these treatments and more, and all were sensible, well-chosen approaches. But nothing produced a lasting result, because Peter's essential metabolic problem was never addressed. Peter's problem was simply this: his life lacked rhythm.
He had never found a way to nourish himself in a consistent, deliberate fashion. He didn't tend to body and soul with any coherence. Inwardly, he had never committed to truly sustaining himself. He planned his finances, but couldn't predict his next meal. He loved being busy, but didn't know how to relax. He was a prisoner of fight-or-flight biochemistry even when no one was chasing him and he had no one to attack. In that subtle biochemical state of inner fear and constant worry, who has time for a thoughtful nutritional experience?
I suggested to Peter that the best dietary and medical strategies would never do him any good until he chose to live by the law every living creature must follow: the law of rhythm. I told him to plan his breakfast, lunch, and dinner and to enjoy each one as if it were his first or last meal on Earth. He had to choose between two options: make his nourishment a daily priority, or find another planet where stressful, disordered, irritating meals can make people happy and healthy.
The simple strategy of committing to rhythm had a profound effect. Peter stopped trying to fix his problem and started creating a daily rhythm that left no room for his problems to exist. He chose to begin the day not by racing out the door but by slowly spreading his wings, taking in the world, and rising gradually. In other words, he set aside time to sit down for breakfast and enjoy it. Mid-morning, he checked in with his hunger to see whether he needed a snack. Whenever he could, he planned a celebratory lunch. He brought food to work in case he needed a quality snack. And dinner after coming home was no longer a "desperation meal" expected to erase all his tension and satisfy every unmet food desire. Dinner became a relaxed, light-meal experience he looked forward to with pleasure.
Rhythm is not about mechanically following a feeding schedule. It is about agreeing to be alive in a way that works. It is about respecting yourself enough to value the care of your body. It is about learning to use "vitamin T" (for "time") so the metabolism can truly unfold. When we live each day steeped in stress chemistry, our cortisol level stays chronically elevated. This chemical doesn't just make us more alert; it has a remarkable side effect: cortisol distorts our perception of time. In other words, it has the pharmacological effect of making us feel late and pressed for time. Of course, this is one of cortisol's ingenious functions, because when a pack of wolves is on our trail, we really are out of time. But when we automatically generate this chemical day after day because we don't know how to relax, breathe, and be aware, we function as if the wolves were always lurking.
The main rhythm Peter changed was an internal one, something deep inside that goes beyond the realm of nutrition and pills and meal planning. Peter found his way to a quiet, safe place within himself. Although he was the same man with the same life, some part of him had finally stopped running. For years Peter had enjoyed plenty of friends, family, and comforts, but he had never learned to fully appreciate them. By slowing down and choosing life, Peter's world was transformed. His digestive problems faded within weeks, and he no longer had bloating or fatigue after meals. He also felt happier with himself. And within five months, he lost 20 pounds.
Fortunately, though, Peter's problems never disappeared entirely, because whenever he slipped back into a frantic, anxious, arrhythmic life, his symptoms quickly returned. His digestive system became a barometer, alerting him whenever he started to lose his stride and fall back into his old patterns of self-neglect. Peter's relationship with health and weight never became perfect, but it did become authentic.
_An endless summer_
One final way we can work with rhythm to help transform weight and well-being is by changing the eating habits that shift us into a "pre-hibernation" metabolism. Let me explain.
Our distant ancestors evolved an exquisite mechanism for taking advantage of the abundance of food in summer and its scarcity in winter. They stayed awake longer through the long summer days, gorged on all the fruits and berries they could find, and their bodies stored that food as fat. With winter approaching and lean times ahead, it was best to eat as much as possible while food was available. This was evolution's way of stimulating our appetite to exceptional levels whenever carbohydrates were available and helping us store them in our bodies. Once again, insulin is the key substance we produce to accomplish this feat. Normally, insulin helps shuttle carbohydrates, in the form of sugar, into our cells to give us energy. That's good. When we overconsume carbohydrates and the body consequently produces too much insulin, we become insulin resistant; the body responds as if there were no insulin in it and stores those carbohydrates as body fat. That's also good. You don't want to keep pumping sugar into your cells when you overeat. The cells would burst.
So, as we evolved over millions of years, our body chemistry became radically different as we prepared in summer for the colder months. The availability of carbohydrates fueled an even more voracious desire to consume them. By the time winter arrived, we were well padded with body fat. We would also carry excess water weight from the high-carbohydrate diet, and our cholesterol would run high because the body also converts carbohydrates into cholesterol, to serve as an energy source and to patch leaks in the cardiovascular system. Blood glucose would be quite elevated (a diabetic state), since blood sugar literally works as antifreeze during the cold months. Incidentally, the antifreeze you use in your car also tastes sweet.
We see this physiological pattern in hibernating mammals. A bear gorges on fruit in summer, gains weight, and its cholesterol and blood pressure climb until it is essentially in a diabetic, high-blood-glucose state. And all these "diseases," which are genuinely useful and necessary in the short term, resolve naturally over the winter months as the bear burns off its fat reserves and its cholesterol, sheds its water weight, clears its antifreeze diabetic state, and emerges from hibernation lean, hungry, and ready for action.
But here's the problem: while modern humans don't set out to fatten up for summer, many of us eat sugar, candy, cookies, crackers, cake, pasta, bread, doughnuts, rice, potatoes, and wheat products in large quantities day after day, all year round. In doing so, we knock our system out of its natural rhythm. The body believes it is in an eternal summer, so we remain permanently in a pre-hibernation state. Add to this the lack of sleep, overexposure to artificial light, and mounting stress, and fat storage multiplies. So the sooner you stop preparing to hibernate, the better. That means eating fewer low-quality carbohydrates and sleeping better. I'm not saying that staying up late is bad, nor that sugar and carbohydrates are harmful. I'm simply alerting you that if these conditions dominate your lifestyle, you won't come anywhere near your true metabolic potential.
So, with all this information about the metabolic power of rhythm, here is what you need to ask yourself: Do I have rhythm? Does my day flow coherently? Does my lifestyle include set times for eating, rest, and nourishment? If your answer to these questions is no, then your first step should be:
**Prioritize rhythm.**
This means letting go of immediacy around meals, that is, not eating whatever falls into your hands whenever you can, and instead caring for yourself by making nourishment a constant priority.
Here is how to begin.
_Week 4: Your primary task_
This week is your opportunity to reap the rewards rhythm brings to the metabolism. Learn to make the most of the core principles of rhythmic eating and you will gain immediate benefits for body, mind, and soul. Your primary task for week 4 is to build these key rhythm strategies into your life: eat at regular intervals, balance the macronutrients in your meals, plan the timing and size of your meals, plan your daily meals and snacks, avoid excessive caffeine, and get the rest and recreation you need.
**Exercise: Eat at regular intervals**
Begin your week with the fundamental commitment to eat meals at regular intervals. Make eating a predictable part of your daily flow. That is the key to unlocking the metabolic power of rhythm. Stop skipping meals and stop placing vague conditions on when you eat: "I didn't have lunch because I was too busy," "Time got away from me," "I eat when I have time." As far as possible, plan your menus and mealtimes each evening for the following day. Know that you are going to eat breakfast, lunch, and dinner. Prioritize rhythm. Make your mealtimes matter. Look carefully at your schedule and determine what adjustments you need to make if you want time for three good meals a day. Do you need to wake up a little earlier so you can sit down to breakfast? What needs to happen at home or at work so you can eat a proper lunch? How can you enlist the help of the people around you? If you travel or keep irregular hours for work or parenting, commit to planning your meals well in advance. Carry food with you when necessary.
**Exercise: Balance the macronutrients in your meals**
If you have tried to lose weight without getting the results you want, or if you suffer from chronic fatigue, here are some excellent strategies involving macronutrients, that is, the proportions of protein, fat, and carbohydrate. As far as you can, eliminate carbohydrate-only breakfasts. This means the first meal of the morning should not consist solely of cereal, oatmeal, a doughnut, a bagel, a muffin, a granola or muesli bar, a croissant, toast with jam or margarine, and so on. These foods are not necessarily harmful. This is simply an experiment to see what happens to your metabolism with the change.
This week, at every breakfast, ask yourself: "Where can I find protein and healthy fats?" Make these two macronutrients the centerpiece of your first meal of the day. Try including one of these options at breakfast: organic peanut or almond butter, or other nut butters (with fruit or on whole-grain toast); whole free-range eggs; organic full-fat yogurt with some nuts and seeds; organic cottage cheese; fresh or smoked fish; free-range turkey sausage; or high-quality cheese. Add a slice of good whole-grain bread and/or fresh fruit as you like. If you want cereal, use organic oatmeal and eat it with nuts and seeds, or with nut butter or yogurt. Breakfast is not the time to count calories. Unless you eat ten pounds of cream cheese, your body will make burning what you ate in the morning its first order of business. It is a law of metabolism.
Notice that I'm not proposing you fanatically eliminate carbohydrates. We are simply making them a side dish when you want them, rather than the main course. As for portions, eat an amount that leaves you satisfied but not tired and full. Trust your choices.
For lunch, apply the same basic principle. Ask yourself again: where can I find protein and healthy fats? As far as possible, build your lunch around one of these foods: any fish (fresh or smoked, or canned as a third option); sushi; tofu; tempeh; beans; avocado; organic eggs; free-range chicken; or free-range turkey. Any of these can happily be part of a salad or eaten alongside one. Use a quality olive oil generously on your salad. Use bread, rice, or potatoes only when you have no other option. Treat them as side dishes.
As with breakfast, lunch is not the time to count calories. Simply aim to satisfy your natural appetite, and enjoy it.
Dinner is the meal where you have the most flexibility with macronutrient proportions. You don't need a meal loaded with hard-to-burn calories from fat or protein, because most of your energy needs for the day have already been met. The metabolic calorie-burning machinery is winding down. Moreover, many people find that a carbohydrate-centered dinner can be relaxing rather than stimulating. Listen to what your body asks for. Although this is the one meal where controlling portions may make sense, don't deprive yourself of healthy fats. If you're having a salad, use a first-rate olive oil. You often end up eating more food, or more carbohydrates, than you need when your body hasn't gotten the fat it wanted, and required, earlier in the day. If what appeals to you is a light dinner, or none at all, simply follow your body's wisdom. As with every meal, the key to dinner is quality food.
**Exercise: Plan the timing and size of your meals**
Once again, if weight loss and/or fatigue are the problems you'd like to solve, or if you simply want more energy, try this. Eat a more substantial breakfast, make lunch your biggest meal whenever possible, or at least beef it up if you normally eat lightly at that hour. If you tend to eat a large dinner, try shrinking that meal. Reduce it by roughly 10 to 20 percent. The goal is for dinner to end up smaller than lunch.
If you find yourself hungrier mid-morning once you start eating a bigger breakfast, don't be angry with me. That's a good sign: it means your metabolism has increased. Your "furnace" is running hotter and asking for more fuel, and you are recovering your natural appetite.
Eating at the right time matters greatly. Try to eat breakfast sometime between 6:30 and 9:00 a.m. Avoid eating breakfast later than that. Try to eat lunch between 12:00 and 1:30 p.m. That is when your metabolism works at its peak. Keep in mind that the later you eat breakfast, the more likely you are to miss the optimal lunch window.
Do everything you can to eat dinner about four hours before going to bed. If you're the type who eats dinner very late and goes straight to bed afterward, you'll do better eating at least an hour earlier. Take a walk after a late dinner to help digestion. If you know dinner will be late, don't come to the table ravenous. Have a substantial snack about two hours before dinner. It will take the edge off your appetite and let you eat less.
**Exercise: Plan your daily meals and snacks**
Some people do fine on five or six small meals a day. Some need only three meals a day. Others are content with two. Some feel the need to snack, while many consider snacks unnecessary.
Every metabolism is different. Experimentation is the only way to determine your particular needs when it comes to the number of meals. If you eat on an erratic schedule, you'll reap fabulous benefits from settling into an orderly routine of three meals a day. If you don't have time for a more relaxed, substantial lunch, you might do well with a relatively small lunch followed by a good late-afternoon snack. Think of it as two lunches. From the standpoint of losing weight and maintaining your energy, this beats skipping lunch and then eating when you're starving and exhausted at the end of the afternoon. If you like, you can also try a mid-morning and/or mid-afternoon snack. Use the "gut wisdom" exercises from week 3 to determine what works best for you.
Tune in to your particular nutritional character. Try something different this week and note the results. If you want a light snack, eat fresh fruit or vegetables. As far as possible, cut back on snacks made solely of highly processed, mass-produced carbohydrates, that is, boxed juices, sodas, candy, pretzels, chips, cookies, muffins, granola or muesli bars, and so on. Substantial, good-quality snacks include organic nuts and seeds, trail mix, nut butter with fruit or vegetables, organic yogurt, quality cheese with crackers or fruit, olives, bean dip, hummus, broths and soups, and smoothies.
**Exercise: Use caffeine wisely**
Letting go of caffeine is a powerful way to reclaim your natural rhythm, that is, your healthiest, most robust metabolism. If you experience mood swings and energy dips, or if you've had trouble losing weight, this is the week to transform your relationship with caffeine. This doesn't mean you must never consume caffeine again, or that it is inherently bad. It's about you mastering caffeine rather than the other way around. It's about finding your true energy. Ideally, you would eliminate coffee and other caffeinated drinks or, at the very least, cut back to no more than one cup a day. Caffeinated drinks include caffeinated sodas, diet sodas, sports drinks, and black tea. (While decaf coffee would serve your experiment better than regular coffee, decaf still contains some caffeine.) If you must have coffee in the morning, drink it with food. This helps moderate the rise in insulin and blood glucose, and the subsequent crash, that occur when you take caffeine on its own.
Many people are highly sensitive to caffeine without knowing it. That is, for most of us, a little caffeine is enough to produce a strong effect. For that reason, I suggest you forget about coffee for the whole week. If you feel you absolutely need a substitute, try green tea. It also contains caffeine and related chemicals, but in smaller proportion, and it affects the nervous system differently. Green tea also has a thermogenic effect, that is, it boosts calorie-burning capacity without raising heart rate or blood pressure. Yerba mate would also be an excellent choice as a healthy drink, low in caffeine and subtly invigorating. If you like, use a quality sweetener. You could also try herbal tea, or develop a taste for drinking more water.
**Exercise: Make time for rest and play**
This week, see whether you can build a regular period of relaxation into each mid-afternoon. Even fifteen minutes would help. As far as possible, withdraw from the outside world, close your eyes, breathe, and recharge your batteries. This is less a nap than an opportunity for meditative rest. (Pay attention to the lighting in your home in the evening. Make sure it is soft and relaxing.) In many European and Latin American countries, this rest period is built into the culture's way of life.
When we forget to set aside time each day for rest and play (play being any activity that makes you smile), the part of our existence devoted to meals can take on more weight than it should. We put extra pressure on meals and expect them to deliver something they really can't. Commit to experiencing some joy each day: playing with children, some form of fun or exercise, dancing, massage, chess, silly conversation, kissing. Any of these activities will benefit your biochemical well-being.
We have seen how nutrition and metabolism are intimately governed by natural rhythms, and how living a rhythmic life can restore our personal and emotional balance. When we bring the power of rhythm into our relationship with food, the body finds its proper place. Rhythm lets the soul find its connection to the world.
Following the natural rhythms means understanding that metabolism is not only about what you eat. It is about redesigning the "dance" you perform through the day. About finding a balance between activity and rest, work and play, giving and receiving, thoughts and feelings, head and heart. About choosing how you wish to live in the world.
_Key lessons_
• Aligning with the rhythms of life brings the metabolism to its fullest.
• Digestive metabolism and calorie-burning capacity peak when the sun is highest in the sky (lunchtime) and bottom out late at night.
• Skipping breakfast and lunch, or eating too little at those meals, lowers metabolism and inhibits weight loss.
• Eating at irregular, unpredictable times each day throws our digestive metabolism and calorie-burning capacity out of sync.
• Excessive consumption of refined carbohydrates collapses our natural rhythms, making the brain think it is summer and signal the body to store fat.
WEEK 5
The Metabolic Power of Pleasure
The only way to counter the universal madness
of the fast life is to mount a
firm defense of calm material pleasures.
FROM THE INTERNATIONAL
SLOW FOOD MANIFESTO
"Vitamin P" (for "pleasure") is a vital element that makes our meals nutritionally complete and makes life worth living. Like every organism on the planet, humans are genetically programmed to seek pleasure and avoid pain. A cat chasing a mouse is seeking pleasure; the unfortunate rodent is doing everything it can to avoid pain. Indeed, any behavior we can think of can be seen as an expression of one of these aims, or a mix of the two. This shows up with particular clarity in our eating habits. When we eat, we seek the pleasure of food and avoid the pain of hunger. That is because destiny has built our bodies for enjoyment.
The simple scientific equation for pleasure's profound biochemical effect is:
**The stimulus of food stimulates metabolism.**
In a study conducted at the University of Texas, participants with very high cholesterol levels were put on a low-fat diet but, every other day, were allowed the indulgence of a milkshake and a ham-and-cheese sandwich. According to conventional wisdom, they should have experienced a significant rise in cholesterol, but they didn't. The only thing that rose was their enjoyment. Despite the high fat content of the treats, their cholesterol-raising effect was somehow mitigated by the chemistry of pleasure. It's easy to see that those moments of indulgence were the only occasions for relaxation and celebration in an otherwise bland and stressful diet. And perhaps that softening of the fight-or-flight reflex was, by itself, enough to lower cholesterol.
In another unusual study, research teams from Sweden and Thailand joined forces to determine how cultural food preferences influence the absorption of iron from a meal. A group of women from each country was given a typical Thai meal: rice, vegetables, coconut, fish sauce, and hot chili paste. As you might imagine, Thai women like Thai food and Swedish women don't. This proved to have a significant metabolic effect: although all the meals contained exactly the same amount of iron, the Swedish women absorbed only half as much iron as the Thai women. To complete this phase of the study, both groups were given a typical Swedish meal of hamburger, mashed potatoes, and green beans with exactly the same iron content. Not surprisingly, the Thai women absorbed far less iron from the Swedish meal.
Next, the Thai women were divided into two groups. One group received the Thai meal described above, and the other received exactly the same meal, but run through a blender and reduced to mush. Just imagine your favorite dinner puréed into baby food. Although the nutrient content of each meal was precisely the same, the women who ate the blended food absorbed 70 percent less iron. Once again, the same results were obtained with the Swedish participants whose Swedish meal was turned into mush.
The inescapable conclusion is that a meal's nutritional value is determined not only by the nutrients it contains, but also by the synergistic factors that help us absorb those nutrients. Remove "vitamin P," pleasure, and the nutritional value of our meals plummets. Add it back, and your food is metabolized at its best. So if you're the kind of person who eats only foods "that are good for you" even though you dislike them, or if you think you can eat a terrible diet and make up for it with a strange-tasting, vitamin-fortified protein bar, or if you've simply banished pleasure because you have no time to cook or to find a delicious meal, then you are doing your nutrition no favors. You are slamming the door on a key metabolic mechanism.
In a fascinating animal study, scientists surgically destroyed the neural centers in the brains of a group of rats that allow them to register taste. This left one group of rats unable to taste their food. Normal, healthy, and far luckier rats, who could still enjoy their meals, served as the control group. Both groups received exactly the same food, in the same amounts, and the same respectful treatment from the researchers. After some time, all the taste-impaired rats died. The surprised scientists performed autopsies to see if they could determine the cause of death. They found that although these rats had consumed the same amount of food, they had nonetheless died of clinical malnutrition. Their organs had wasted away as if they had received no food at all.
La moraleja de este relato es que el sabor y el placer son esenciales para la vida, quizás mucho más de lo que podríamos imaginarnos.
_Chemical Clues to Pleasure_
Consider the chemical cholecystokinin (CCK). It is produced by the body in response to protein or fat in a meal and performs several versatile functions. First, it directly aids digestion by stimulating the small intestine, pancreas, gallbladder, and stomach. Second, when released in the hypothalamus, part of the limbic area of the brain, it suppresses appetite. Finally, CCK stimulates the sensation of pleasure in the cerebral cortex, the outermost part of the brain.
So if we put the pieces together, we realize that the same chemical that helps metabolize our meals also tells us when it is time to finish eating, and makes us feel good about the whole experience. It shows us how pleasure, metabolism, and a naturally regulated appetite are deeply intertwined. Most people think pleasure is completely separate from the nutritional process and serves no metabolic function. We believe that if a food makes us feel good, the body is automatically driven to consume more of it. CCK's effects on the brain tell us something quite different.
In the absence of pleasure-induced satiety, one of the chemicals that increases our appetite is neuropeptide Y. This substance drives us to seek out food. It is naturally elevated in the morning, which makes sense, because that is when the body is gearing up for action. Neuropeptide Y is also elevated when we deprive ourselves of food; its presence is particularly prominent after dieting. Whenever we slip into hypoglycemia (which usually means we are also in a bad mood), neuropeptide Y levels rise, prompting us to consume carbohydrates.
So if you deprive yourself of the pleasure of food through low calorie intake, or restrict yourself to an utterly boring diet, the body responds with chemicals that demand pleasure and satisfaction. The lesson of neuropeptide Y is that we cannot escape the biological imperative to celebrate and enjoy. No matter how stingy we are with food, the body will not be denied what it needs.
The kind of chemicals most people associate with pleasure are the endorphins. These substances are produced naturally in various parts of the body (above all in the brain and the digestive system) and exist, in part, to make us happy. The simple act of eating raises our endorphin levels. This tells us that eating is an inherently pleasurable experience, because the body's biochemistry makes it so. The most remarkable thing about endorphins is that they are not merely pleasure molecules; they also stimulate fat mobilization. In other words, the very chemical that makes us feel good also burns body fat. Moreover, the more endorphins released in your digestive tract, the more blood and oxygen are sent there. This translates into more complete digestion and assimilation and, ultimately, more efficient calorie burning.
Of course, this does not mean you can eat tons of desserts or junk food and all those calories will burn off as long as you feel pleasure. The point is that the chemistry of pleasure is intrinsically designed to enhance metabolism. When we intelligently harness this biological reality, our health can flourish. But if we fail to receive the pleasure that body and soul demand every day and at every meal, we will suffer. The Mahabharata, the ancient epic poem of India, tells us: "It is better to burn in flames, if only for a moment, than to lie forever upon the embers of unsatisfied desires."
Many of us say we love to eat, but when we eat too quickly, inattentively, or with a good dose of guilt, the central nervous system and the enteric nervous system register only a minimum of pleasurable sensation. The result is that we feel physiologically driven to eat more. We find ourselves chasing a pleasure we never fully received, even though it was within reach all along.
So if you are among those who believe you can control your appetite and lose weight by depriving yourself of pleasure, I suggest you reevaluate that way of thinking immediately. I have yet to meet anyone who has managed to lose weight and stay slim by overriding the natural, innate urge to enjoy and celebrate food. Losing weight by limiting pleasure is like giving up breathing to quit smoking. We can never increase the body's metabolic capacity by restricting what is essential to life.
_Pleasure Catalyzes the Relaxation Response_
The key to pleasure's powerful effect in balancing your appetite is that it promotes a physiological relaxation response. We overeat most when we are anxious, stressed, or inattentive. A person who eats in a relaxed way and enjoys the pleasure has natural self-control. A person who eats while stressed produces more cortisol, the stress hormone we have discussed several times. Surprisingly, cortisol desensitizes us to pleasure. This is another of this chemical's impressive functions. When you are caught in the fight-or-flight response, trying to escape a hungry wolf, the brain has no business dwelling on feeling good or daydreaming about chocolate. Your entire being needs to focus on survival.
That is why, when cortisol numbs us to pleasure amid our everyday stress, we need to eat more to feel the same amount of pleasure we would feel when relaxed. This means that if you fear pleasure, or are anxious about gaining weight, or frightened by the prospect of eating dessert, you will generate more cortisol. It will flood your bloodstream, desensitize you to pleasure and, ironically, bring about the very self-fulfilling prophecy you feared from the start: "If I eat something that gives me pleasure, I won't be able to stop eating it."
Do you see how our nutritional fears help create our metabolic reality?
Pleasure loves slowness. It thrives in a warm, intimate, welcoming space. It reveals its deepest secrets when we shed all pretense of speed and let timelessness and sensuality make us present in each moment. The promise of speed (fast food, fast cars, fast service, fast results) has blurred everything for us; in effect, we see nothing. So we compensate with "intensity" (intense work, intense play, intense death), which leaves us exhausted and rigid. We can end up with hardened arteries, hardened hearts, hardened joints, or bones that crumble under the weight of a high-impact life.
Pleasure is the essential antidote.
_Putting Pleasure in Perspective_
Epicurus is recognized as antiquity's authority on the pleasures of the palate. We honor this Greek patriarch every time we call a meal an "epicurean delight." Yet few realize that Epicurus was no pleasure-addicted glutton; he was in fact a simple, austere man who chose his pleasures with great care and wisdom and enjoyed them deeply. Perhaps his entire philosophy of pleasure is best summed up in his own words: "It is impossible to live pleasantly without living wisely, well, and justly, and it is impossible to live wisely, well, and justly without living pleasantly."
I find that many people either fear the pleasure of food and battle against it, or constantly and rather helplessly succumb to their desire to eat. Both extremes harm the body and the psyche. Epicurus points us to a middle way. Using pleasure wisely means welcoming it with delight. It means turning to "healthy" pleasures and moderating the "unhealthy" ones so that, at the very least, they do minimal harm and may even contribute to the goal of improving our metabolism. Unfortunately, many people remain stuck on the idea that, because many pleasurable foods "are bad for you," eating them under any circumstances is harmful. That view of nutrition is outdated.
Take chocolate. Some experts insist that chocolate is bad for your health because it contains sugar and fat. Others point out at length that chocolate contains magnesium and antioxidants, and is therefore good for you. Who is right?
Well, everyone is. The answer lies in how much chocolate you eat; in other words, the dose makes the poison. Many substances and foods can be beneficial in small amounts and toxic in large ones. It is also a question of quality. Was your chocolate produced with integrity, from good ingredients? Do you eat it in a relaxed, mindful, celebratory way? All these factors help determine chocolate's true nutritional value at any given moment. Yes, certain foods, such as fruit, are inherently healthy and can also give us pleasure. But many foods that might be considered "unhealthy" pleasures can be neutral for the body, and can even benefit the metabolism, when we consume them in moderate doses and with delight.
I will go into more detail about this shortly. First, consider this story.
Winnie, a busy 34-year-old mother of three, came to me with an eating problem that had intensified after she had children: she constantly craved chocolate. No matter how much chocolate she ate, she always wanted more and never felt satisfied. Winnie could describe in detail all her favorite varieties of chocolate and why they excited her so much. She wanted to know how serious her problem was, because she wanted to be rid of it, but without giving up chocolate, which she loved.
Winnie was slim and had never had a weight problem, but she still worried that if she gave in to her chocolate cravings she would gain weight. She also wondered whether, by controlling her cravings and cutting back on chocolate, she might actually lose weight, though she didn't really want to eat less chocolate or lose weight. She was asking only out of curiosity.
What surprised me about this wonderful woman was that, when she recounted the precise details of her daily chocolate ritual, I realized she didn't actually eat that much chocolate. Perhaps a Milky Way bar after lunch, sometimes a single chocolate chip cookie after dinner. Most of the time, though, her after-dinner treat was a fat-free, low-calorie chocolate pudding or ice cream. She never ate chocolate more than twice a day; usually she ate it only once, and some days she went without it entirely. Digging deeper, I learned that Winnie ate her chocolate quickly. She often felt stressed and, in general, never enjoyed her food because she was in a constant rush looking after her three school-age children. She also suffered from chronic constipation, which struck me as important, though she had learned to live with it.
I suggested to Winnie that perhaps she craved chocolate so intensely because she never really tasted it. Yes, she ate chocolate, but she never received the full pleasure she was seeking. She wasn't generating a chemical pleasure response in her body, and therefore wasn't satisfying her heart's desire or the demands of her cephalic-phase digestive response. I explained that the higher her cortisol level (anxiety and stress), the lower her capacity to mount a physiological pleasure response. Not only that, but most of the chocolate she ate was "impotent." Many of her chocolate treats were fat-free. Research tells us that a ratio of 50 percent fat to 50 percent sugar produces the greatest endorphin release in the body, a true "food orgasm." Winnie's measly fat-free chocolate pudding simply didn't deliver, and so it left her unsatisfied.
The remedy I proposed to Winnie was simple: eat more chocolate. Eat real chocolate. Do the unthinkable and walk into a good chocolate shop and buy whatever she fancied. Plan a small dessert after every dinner, or eat a little chocolate after every lunch. Eat it slowly, breathe, and enjoy it fully.
Winnie followed this plan. Within a month her chocolate "cravings" disappeared and, although she ate all the good chocolate she wanted, with pleasure, she never gained weight. Chocolate became a regular part of Winnie's diet; she liked it and wanted it, but no longer obsessed or worried about it. Her relationship with pleasure changed. She now accepted it as a birthright instead of resisting and restraining herself.
My favorite part of this story is that, over that same period, Winnie's constipation cleared up and she permanently gave up laxatives. Some experts might say the effect was due to the magnesium in the chocolate. Others would credit the extra fat. Skeptics would call it pure coincidence. But can't you at least imagine how the chemistry of pleasure relaxed her chronically tense elimination system? Don't doubt it. Opening up to more pleasure can stimulate the metabolism and return the body to its natural state of balance.
_Health Is Pleasurable_
I want to propose that health and, by extension, any action that promotes it, is inherently a deeply pleasurable experience. When you eat a food that is truly healthy for you, the body responds with a great biological "yes!": it activates a pleasure circuit that is different from, but no less potent than, the pleasure pathways fired when you eat a cheeseburger, french fries, or ice cream. Healthy foods are the ones the body recognizes as the right biological measure for promoting some aspect of its full metabolic potential. A healthy meal strikes a resonant chord deep within our cellular intelligence. The sound it produces is perfect and beneficial.
Health is pleasurable. Healthy foods are too. The same goes for foods in their natural or freshest state, high-quality foods, and creative dishes. And any food that retains its personality, charm, and life force is pleasurable as well.
Many of us lack experiences of this kind. That is because a hurried lifestyle of fast meals and fast workouts closes a door of perception and lowers our threshold for pleasure. We become acclimated to unhealthy, low-pleasure, mass-produced foods. Our vocabulary of pleasure shrinks, and we live, without realizing it, in a world where the happiness we experience never reaches its true potential.
You may tell yourself you love fat-free frozen yogurt, or some other diet food, but the truth is you don't love it. There is nothing lovable about it. You have settled for less; you have played a trick on your metabolism and deceived yourself into believing you are eating something substantial. Whenever someone tells me how much they love a low-calorie, fat-free, artificially sweetened chocolate treat, I tell them it is like sleeping with a man you don't really want to be with, but he is the only one available and the best you can get at the moment, so you settle. Eating foods that give false pleasure is much the same as sleeping with a momentary lover. Yes, diet foods and junk food can provide great pleasure. But real foods can give you far more than that.
_Built for Sweets, and for Fat_
Another piece of the puzzle linking pleasure to metabolic capacity is the perception of sweet tastes. I have met many people who believe they have a problem because they love sweets. Caught between this indestructible desire and its supposed result (weight gain), it is easy to feel unfairly cheated by fate. The good news, however, is that if you believe you have a sweet tooth and that it's a problem, it isn't. We are built for sweets.
As you may recall from your first biology lessons, human beings have four types of taste buds on the tongue, allowing us to detect sweet, salty, sour, and bitter flavors. If we counted the taste buds of each type, we would see that the overwhelming majority are of the variety that detects sweetness, and that these great numbers of "sweet buds" sit mainly on the front and center of the tongue, the areas that come into contact with most of our food.
And do you know what all those sweet taste buds on your tongue are doing? They're sitting there, waiting for something sweet.
That is their function. They lie in wait for the chance to receive a sweet molecule and send the brain an electrochemical signal whose sole purpose is to give you a thrill. In fact, your sweet taste buds are compelled to perform this function. Imagine being blindfolded for a day, deprived of your sense of sight. It might be a novel experience for the first few minutes, but before long it would most likely become unpleasant, perhaps even unbearable. The body's senses (sight, hearing, touch, taste, smell) must be satisfied. If we "blindfold" our sweet taste buds and deprive ourselves of the pleasure of sweets, or if we constantly use artificial sweeteners, the result is disharmony, and we crave sweets all the more.
Incidentally, the same concept applies to salt. Do you think God gave you highly sophisticated taste buds capable of detecting salt just to torture you and raise your blood pressure?
The point is not to consume toxic quantities of sugar and salt. Anything that is pleasurable in small amounts becomes painful in large doses. Even your favorite song, played over and over for hours, will eventually grate on you. Spending two straight weeks in the company of your best friend could ruin the friendship. We need to monitor our dose of pleasure just as we would any potent drug. But we also need to make sure we get enough.
So, from an evolutionary perspective, the sweet taste is a biological reward. It gives us a reason to keep living. Have you ever noticed that after a meal you want a taste of something sweet, and that even a small bite of someone else's dessert can satisfy you? That is the central nervous system asking for "vitamin P" (for "pleasure") through specialized nerve endings, the sweet taste buds, which often need only minimal stimulation to satisfy this key component of the cephalic-phase digestive response.
Another obstacle that keeps us from receiving the full metabolic power of pleasure is the way we view fats. Specifically, our misconceptions about the biology of fat lead many of us to fear the fat in food, to follow low-fat diets, and to suffer unimaginable health consequences. Deprive yourself of the pleasure of fat and you deprive yourself of the full power of your metabolism.
We have seen that healthy fats are essential to life. As is nature's habit, when something is necessary for our biological existence, it feels good. Drink fresh water when you are dehydrated and you will notice the reward. Take a deep breath after being underwater for a while and you will feel immediate delight. Eat a food containing fat and you will feel satisfied. That is because the body needs fat, and meeting that need produces pleasant sensations. Our genetic programming has us register pleasure on the tongue, in the digestive system, and in the brain. Fat, pleasure, and survival are thus inseparable; they form a trinity of the body. But if you are like most Americans, you probably separate the elements of this trinity in your daily life, and suffer the consequences.
Although your body needs fat to survive, if you believe fat is bad you will do everything possible to avoid it. Yet because fat is inherently pleasurable, it will tempt you again and again, like a distant voice in your nutritional desert, to break a rule you think is a cosmic law but that was mistakenly created by nutritionists, doctors, and experts who are perfectly capable of being wrong. If you manage to follow an extremely low-fat diet, sooner or later you will show signs of clinical or subclinical fat deficiency. The signs of this condition include weakness, irritability, fatigue, dry or oily skin, acne, damaged hair, dandruff, psoriasis, brittle nails, digestive upset, depression, red eyelids, susceptibility to colds, joint pain, constipation, and (surprisingly) weight gain or an inability to lose weight.
In a study at Bowman Gray University, scientists divided monkeys into two groups. The first group received a diet with normal fat content, while the second received a fat-free diet. After a certain period, the researchers observed that the monkeys eating normal amounts of fat behaved like normal monkeys: playful and active. The monkeys on the fat-free diet became nervous and violent, and some even tried to kill others.
If you know someone on a fat-free diet, I imagine this information will prove very useful, at least for your own protection. And incidentally, none of the fat-free monkeys lost any weight at all.
_Pleasure Heals_
Louise, a 51-year-old legal secretary, made an appointment to see me because she was bored with her diet and wanted some new ideas and menu suggestions. She described herself as the kind of person who eats the same thing all the time.
This was Louise's diet. Breakfast was coffee and half a bagel with margarine. Lunch was a salad with fat-free dressing and fat-free cottage cheese, with a diet soda. In the afternoon she had a fat-free frozen yogurt; that was her favorite moment of the day. Dinner was skinless chicken with vegetables and rice, or a frozen Lean Cuisine meal. Dessert was fat-free cookies. No wonder she was bored.
But boredom was the least of Louise's problems. She confessed that she had followed this eating plan for almost two years in order to lose weight, and had lost only a few pounds. Although she hadn't come to me for health advice, she revealed that since starting the diet her hair had become brittle and her skin extremely dry; she felt fatigued, caught frequent colds, and was always hungry. Louise's diet was virtually fat-free, and she was paying the price!
Although I explained in precise detail how all her symptoms pointed to a clinical fat deficiency, Louise was stunned by my recommendation that she spread fresh peanut butter on her bagel and olive oil on her salad, eat fresh salmon instead of frozen prepared meals, and choose real ice cream over fake fat-free ice cream. Louise insisted that she enjoyed fat-free foods and could not possibly eat anything "greasy," because she wouldn't be able to stop and would gain weight.
Louise's greatest fear wasn't fat. What she feared most was pleasure. Her relationship with food mirrored her relationship with life. She wasn't just bored with food; she was bored with life. Louise told me how she had fallen into a rut with her job, her marriage, and her social life. There was almost no enjoyment in her life and, just as she had convinced herself that her job wasn't bad enough to quit, she had convinced herself that fat-free foods tasted good. The less fat she ate, the less pleasure she felt, and the more symptoms of pain and dysfunction her body developed. So our greatest challenge was not getting Louise to eat foods with fat, hard enough in itself, but cultivating her trust in pleasure.
Louise and I got to work and drew up a plan to slowly reintroduce healthy fat into her diet. As she saw some of her symptoms fade, and realized she could eat a peanut or an olive without gaining weight, Louise grew more confident in the approach. Over the course of a year she became a happier, more positive, more vital woman. All her symptoms of fat deficiency disappeared. Her skin looked healthy, her hair turned glossy, and her energy returned. She admitted that, for the first time since childhood, she enjoyed eating.
Most intelligent doctors would say that the direct cause of the relief of Louise's symptoms was the addition of essential fatty acids. I agree 100 percent, but I would add this important detail: pleasure heals. It wasn't the mere chemistry of fat metabolism that improved Louise's appearance and spirits. Accepting and expressing pleasure also helped bring out her true radiance.
Do you see the fascinating connection between nutritional metabolism, pleasure, and beauty? Do you understand why it is a phenomenon of mind, body, and spirit? Is there some area of your own life where opening the door to pleasure could produce a similar breakthrough?
When pleasure is forbidden, we never truly receive it. The body longs for it, and we either battle it fiercely or offer it ineffective substitutes, for example, fat-free, flavorless, mass-produced foods that leave us unsatisfied. It is time for a new approach.
_Week 5: Your Primary Task_
This week is your opportunity to go deeper into the pleasures of food. It is a time to focus on the sensations of delight food produces and on the warm, pleasant effects that wash over the body after a health-giving meal. Going down to the level of pleasure means ceasing to view food intellectually and seeing it instead through the sensuality of every cell. This is your chance to explore and experiment with the wise use of happiness.
**Exercise: Healthy-Food Pleasure Inventory**
Begin your week with the most reliable pleasures, the healthy ones. In your journal, take an inventory of all the foods you have learned (or firmly believe or intuit) are healthy for you and that have the added benefit of giving you a pleasurable experience. The list might include fruit, fish, nuts, macrobiotic meals, a fresh juice, an omelet, a fruit smoothie, your favorite salad, chicken broth, a bowl of oatmeal, fresh coconut, a cup of tea, a glass of wine, garlic, whatever. Keep in mind that you are drawing on both your intellectual knowledge and your own bodily experience, so stop trying to know with absolute, universal certainty whether a food is or is not truly healthy.
Your task this week is simply to include at least three of these foods or ingredients in your meals each day.
Eat mindfully, paying attention to the pleasurable sensations on the tongue, in the stomach, and in any other part of the body that registers pleasure. Notice the particular ways you experience healthy pleasure. Does it make you feel lighter? More satisfied? Happy with yourself? Does it give you a sense of accomplishment? Can you intuit the long-term benefits it can bring to your health?
As you allow the pleasures of healthy, high-quality, carefully prepared foods to reveal themselves more fully, you will find your tolerance for low-quality foods decreasing. You will have cultivated a finer palate, one more attuned to your metabolic needs. The result is that you will have more pleasurable foods to choose from, because your repertoire of foods will have multiplied, and you will make better choices overall about the foods that nourish and delight you.
El siguiente ejercicio es para aprender a hacer que los placeres "no sanos" funcionen para usted. Es como un crédito extra, así que hágalo únicamente si está interesado en confiar y creer en usted mismo.
**Exercise: Forbidden-Food Pleasure Inventory**
Take a few moments to write down in your journal any foods that excite you, regardless of whether you or others consider them "forbidden" or unhealthy. Include specific foods that give you great pleasure, specific dishes, specific brands, and any other detail that matters in creating a full sense of pleasure.
When your list is complete, study it. Notice your reactions to what you have written. What does this list teach you about yourself? How often do you eat these foods? In whose company? Which of them arouse the strongest desire? Which cause the greatest guilt?
Your extra-credit assignment this week is to eat one or two of these forbidden foods or dishes. Rather than banishing these pleasures or placing them on a pedestal, honor them by bringing them down to earth and setting them on your table for a special occasion. Eat them slowly, take your time, and let go of all guilt. Celebrate!
After you have enjoyed your forbidden pleasure, notice how you feel. Does your body react to this food in any way? Does it raise or lower your energy? What about your mood? How do you feel the next morning? Consult your gut wisdom. Should this food be excluded from your diet, or is it something you actually need? Can you eat it occasionally and derive some benefit from it? The choice is yours.
If you know you are the kind of person who needs a forbidden pleasure every day, schedule a specific time each day for your treat, say, after lunch or dinner or later in the evening. Knowing your reward will arrive at the same time every day frees you from worrying about whether you will get what you want and gives you something to look forward to each day. If it is a low-quality treat, high in fat and sugar, that you cannot give up under any circumstances, eat it only once or twice a month. If that is not enough, have it a bit more often.
Whenever possible, replace your forbidden pleasure foods with higher-quality, organic versions. Choose a portion size that lets you feel you have gotten the pleasure you want while also feeling good about having respected your natural limits. Remember, there is no formula for determining the right amounts for each person. The point is to give yourself the power of choice in your relationship with food and pleasure. The result will be a stronger metabolism. If your nutritionist or health expert thinks this is bad advice, give that person a hug and send them chocolates.
_Prioritizing Pleasure_
Whatever you eat, your main goal in week 5 is to make 85 to 100 percent of your meals and snacks pleasurable. Everything that enters your mouth should be an opportunity for sensual delight. The strategy for getting there is to ask yourself a single question as you eat: "Is this food giving me pleasure?"
If the answer is yes, enjoy it. If the answer is no, take a moment to consider your options. You can change what you eat, or you can change yourself. Starting by changing yourself can help you draw more pleasure from your meals. That means eating with greater attention and in a relaxed way, momentarily setting aside all your worries so you can be present with your food. Savor everything you eat as deeply as possible so you can find the hidden healing pleasures in these foods. Choose to experience the joy of eating every time you eat.
If a food gives you very little pleasure even when you savor it fully, it may be that you have not chosen your food wisely or that its quality is very low. Paying attention to our food and sensing it with relaxed discernment often reveals that we do not really enjoy the foods we choose. Consult your gut wisdom, your enteric nervous system, for further insight into these foods and whether to eliminate them.
We know that certain foods that give no pleasure in the moment can provide health benefits later in the day (or later in life). For many people, and especially children, vegetables, salads, whole grains, homemade soups, sea vegetables, and medicinal herbs and teas fall into this category. Again, consult your enteric nervous system to determine the place of each particular food. Often, simply knowing that a food is good for your health is a pleasure in itself.
Likewise, many foods that give short-term pleasure can undermine our pleasure later in the day (or later in life). Overconsumption of sugar, coffee, and fried foods is a classic example. Eaten occasionally and in moderate amounts, however, these foods can be neutral or even beneficial. Once again, the wisdom of your enteric system has the final word when you face these choices.
The secret to activating pleasure's metabolic power in your body is trust. Just as you have learned over time to trust a friend or a business partner, you also need to trust pleasure. Let go of your suspicions and give body and soul what they need. Trust pleasure, trust your ability to experience it and to govern yourself, and trust that even if you overeat a pleasurable food and feel guilty or sick, you can still recover, regroup, and continually rediscover the possibilities of joy and harmony with food. Give pleasure the trust it deserves, and the rewards will follow on their own.
**Exercise: Personal Pleasure Inventory**
Next, make a list of everything that gives you pleasure in life: people, places, vacations, topics of conversation, a favorite chair, a perfect evening, a beauty product, a bath, a favorite magazine, anything legal or illegal, sensual delights, flowers, silly things, simple things. If you have never taken a full inventory of what delights you, be thorough and bold. Notice how readily you can acknowledge and admit some of the pleasures on your list, while others may feel taboo.
Once you have bared your soul and revealed all your earthly pleasures, read the list carefully, as if you were a social scientist studying yourself. Take a deeper interest in the subject of your relationship with pleasure. Does this list teach you anything about yourself you did not already know? Which pleasures do you allow yourself most consistently? Which pleasures seem most absent from your life? Which are the greatest pleasures? The simplest? The ones you long for most? Which come to you most naturally? Which are the most "problematic"?
Often, getting pleasure from food takes on outsized importance when we are depriving ourselves of pleasure in other areas of life. By receiving love in many different ways, we stop placing the entire burden of our satisfaction on our meals. This week, in addition to eating one or two "forbidden" foods, bring a non-food pleasure into your life at least twice a day. Allow yourself to feel enriched by the things you know delight you. Also choose one pleasure you can enjoy once this week whose effects linger for several days. It might be a massage, a visit with a special friend, a long-distance phone call, or an outing to an inspiring place.
Pleasure may be the ultimate reward. Biologically, it increases our chances of survival, improves our health, and revitalizes our metabolism. Psychologically, its abundance brings a sense of well-being, connection with others, and plain old fun. Spiritually, the reward of pleasure is the discovery of a sacred essence hidden within all earthly creation. No other nutrient can restore radiance to body, heart, and soul in quite the same way. It is time to welcome pleasure back to the table.
_Key Lessons_
• A pleasurable experience of a meal enhances nutrient absorption.
• An unpleasurable experience reduces it.
• Pleasure catalyzes the relaxation response, helping the parasympathetic system predominate and digestion operate at full power.
• Excess cortisol production from stress or anxiety numbs us to pleasure. This drives us to eat more during stressful times just to register the pleasurable effects of food.
• We are genetically programmed to crave and enjoy sweet tastes and fats. Eating quality sweets and fats supports a healthy metabolism.
• How we experience pleasure with food mirrors how we experience pleasure in life.
WEEK 6
The Metabolic Power of Thought
_Thought rules the world._
RALPH WALDO EMERSON
One of the cornerstones of nutritional metabolism is not a vitamin, a mineral, or a molecule. It is our relationship with food. It is the sum of our innermost thoughts and feelings about what we eat. Consider the word relationship. Each of us, whether we know it or not, is part of an intimate, permanent, committed union with eating. It is no accident that the same words that describe our relationships with people equally characterize our relationships with food: love, hate, pleasure, pain, expectations, disappointments, emotions, boredom, uncertainty, change. Our relationship with food is one of the deepest and most revealing we will ever have.
Rumi, the great Sufi poet, once said: "The satiated man and the hungry one do not see the same thing when they look upon a loaf of bread." And the notorious gangster Al Capone shrewdly observed: "When I sell liquor, they call it bootlegging; when my patrons serve it on silver trays on Lake Shore Drive, they call it hospitality." Indeed, the way each of us thinks about food is so deeply relative that if a group of people were looking at the same plate of food, none of them would see the same thing.
Say, for example, we are looking at a plate of pasta, chicken, and salad. A woman who wants to lose weight would see calories and fat. She would respond favorably to the salad or the chicken but regard the pasta with fear. An athlete trying to build muscle mass, looking at the same meal, would see protein. He would focus on the chicken and pay little attention to the other foods. A strict vegetarian would see the unwelcome presence of a dead animal and would touch nothing on the plate. A poultry farmer, on the other hand, would look with pride at a fine cut of bird. Someone trying to heal an illness through diet would see potential medicine or poison, depending on whether the meal is permitted by the chosen diet. A scientist studying the nutrient content of foods would see a collection of chemicals.
The remarkable thing is that each of these people will metabolize the same meal quite differently in response to his or her own thoughts. In other words, what you think and feel about a meal is as important a factor in determining its nutritional value, and its effect on body weight, as the nutrients themselves.
Sound unbelievable? Let's look at the science behind it.
_How the Brain Processes a Meal_
The information superhighway formed by the brain, spinal cord, and nerves is like a telephone system through which your mind communicates with your digestive organs. Say you are about to eat some ice cream. The concept and image of that ice cream register in the brain's higher center, the cerebral cortex. From there the information is relayed electrochemically to the limbic system, considered the "lower" part of the brain. The limbic system regulates emotions and core physiological functions such as hunger, thirst, temperature, sex drive, heart rate, and blood pressure. Within the limbic system sits a pea-sized cluster of tissue known as the hypothalamus, which integrates the activities of the mind with the biology of the body. In other words, it takes sensory, emotional, and intellectual input and processes it into physiological responses. This is nothing short of a miracle.
If the ice cream is your favorite flavor (say, chocolate) and you eat it with full delight, the hypothalamus will modulate this positive input by sending activation signals through the parasympathetic nerve fibers to the salivary glands, esophagus, stomach, intestines, pancreas, liver, and gallbladder. Digestion will be stimulated, and you will achieve a more complete metabolic breakdown of the ice cream and a more efficient burning of its calories.
If instead you feel guilt or judge yourself for eating the ice cream, the hypothalamus will take that negative input and send corresponding signals down the sympathetic fibers of the autonomic nervous system. This triggers inhibitory responses in the digestive organs, which means you will eat your ice cream but not fully metabolize it. It may linger in your digestive system longer than it should, which can harm your beneficial gut flora and increase the release of toxic by-products into the bloodstream. Inhibitory signals in the nervous system can also reduce your calorie-burning efficiency, so that more of your guilt-laden ice cream is stored as body fat. This is how the thoughts you have about the foods you eat become instantly real in your body via the central nervous system.
Our thoughts also directly affect hormone secretion, and hormones are among the most potent metabolic chemicals we know of. The information generated by eating ice cream travels from the cerebral cortex to the hypothalamus and acts on the pituitary, the master gland of the endocrine system, located at the base of the brain. The pituitary translates information from the realm of the mind into the language of hormones. It relays hormonal signals to the pancreas, the adrenal glands, the parathyroid gland, the kidneys, and the thyroid gland. Remember the cephalic-phase insulin response that can make you gain weight just by thinking about ice cream? That is an endocrine mechanism operating through the pancreas.
Or consider the importance of the thyroid gland. Many people already know that a properly functioning thyroid is a key requirement for a healthy metabolism. If you do not produce enough thyroid hormone, you will very likely feel tired, sluggish, or depressed. And you will probably feel that no matter how little you eat, you still cannot lose weight. Interestingly, a healthy attitude toward your ice cream promotes the release of thyroid hormone, which in turn increases your production of digestive hormones, boosts the motility of the digestive tract, and speeds up the metabolic rate of nearly every cell in the body. And to accomplish all this you do not need thyroid medication, only affection and respect for the ice cream you are eating!
On the other hand, anxious thoughts about the ice cream would have an inhibitory effect on thyroid hormone, translating into a slower metabolism and greater fat storage. They can also trigger the release of stress hormones which, as we have seen, contribute to inefficient digestion, wasted nutrients, calcium loss, and weight gain.
So not only does eating under stress lower metabolism; stressful thoughts do the same. The brain does not distinguish between a real stressor and an imagined one. If you are sitting in a room happy and content, with no one bothering you, and you begin to think about someone who wronged you five years ago, and the negative charge of that experience still affects you, your body will quickly shift into the physiology of stress: increased heart rate and blood pressure, decreased digestive function.
Any feelings of guilt about food, shame about your body, or harsh judgment about your health are, to the brain, stressors, and they are immediately converted into their electrochemical equivalents in the body. You could eat the healthiest meal on the planet, but if you are thinking toxic thoughts, the digestion of your food slows and your fat-storage metabolism rises. Likewise, you might be eating a nutritionally unremarkable meal, but if your heart and head are in the right place, the nutritive power of your food will be greater.
_Placebo Meals_
To fully appreciate the power of the mind over metabolism, let's take a fresh look at one of the most fascinating phenomena in science: the placebo effect. Here is my favorite example of this extraordinary force.
In 1983, medical researchers were testing a new chemotherapy treatment. One group of cancer patients received the actual drug being tested, while another group received a placebo (a fake, harmless, inert chemical). As you may know, the law requires drug companies to test every new medication against a placebo to determine whether the product is genuinely effective. In the course of this study, no one was surprised that 74 percent of the cancer patients receiving the real chemotherapy showed one of its most common side effects: hair loss. What was astonishing, though, was that 31 percent of the patients receiving the placebo chemotherapy (an inert saline injection) also lost their hair. Such is the power of expectation. The only reason the placebo patients lost their hair was that they believed they would. Like many people, they associated chemotherapy with baldness.
So if the mind is powerful enough to make our hair fall out when we receive a placebo, what do you suppose happens when we tell ourselves "This cake is fattening, I shouldn't eat it" or "I'll eat this fried chicken, but I know it's bad for me" or "I enjoy eating my salad because I know it's so good for my health"?
To be clear, I am not saying we can consume poison without harm as long as we think it will do us good. What I am suggesting is that our beliefs about any substance we consume can powerfully influence how that substance affects the body. Every day, millions of people eat and drink while filling their minds with firm, convincing thoughts about their food. Consider some of the weighty effects we assign to certain foods:
"Salt gives me high blood pressure."
"Fat makes me gain weight."
"Sugar rots my teeth."
"I can't get through the day without coffee."
"This meat will raise my cholesterol."
"Calcium is good for my bones."
In a sense, some of these statements may be valid. But could it be that we are encouraging those effects? And if they truly are inherent results of eating these foods, do you see how we can amplify them with the force of our expectations?
The placebo effect is not rare or exotic. It shows up quite commonly. Researchers have estimated that 35 to 45 percent of prescription drugs may owe their effectiveness to the power of the placebo, and that 67 percent of over-the-counter medications, such as headache and cough remedies and appetite suppressants, also rely on the placebo effect. In some studies the placebo response runs as high as 90 percent.
It amazes me that no one in the scientific community has acknowledged the obvious link between placebo power and food. In fact, the placebo effect is built into the nutritional process. It is fully present in every meal we eat. Simply put, placebo power is the mechanism by which metabolism responds to thoughts, feelings, and expectations. It is like filling a prescription at your own internal nutritional pharmacy. Everything we believe undergoes an alchemy that converts it into signals sent through the nervous system, the endocrine system, circulating neuropeptides, the immune network, and the digestive tract.
In one fascinating study, researchers found that subjects given a placebo they were told was vitamin C had significantly fewer colds than subjects who received real vitamin C but were told it was a placebo. In a Cornell University study of an appetite-suppressing drug, patients who received the drug without being told anything about its effects showed no change in calorie intake or body weight. Once they were told the drug would suppress their appetite, they began eating less and losing weight. Indeed, numerous studies have shown that placebos suppress appetite as effectively as any over-the-counter drug.
Recall what week 1 said about the French and the metabolic power of relaxation. Many people who have visited countries such as Portugal, Spain, Holland, France, Denmark, Sweden, and Brazil have noticed that most women there show little interest in eating fat-free foods, counting calories, or restricting sweets. Moreover, although they do not use treadmills or go jogging, and they eat more fat than American women, they are nonetheless happier, healthier, and slimmer. They believe the foods they eat will have a positive effect on their bodies. Compare that with the countless American women whose culture has conditioned them to fret over fat grams, distrust their food, and diet endlessly. Do you see how those thoughts become a self-fulfilling metabolic prophecy through the power of the placebo?
_Good Foods and Bad Foods_
There is one nutritional prejudice I would like to warn you about. It weighs on many minds, does the worst damage to metabolism, and is best eliminated from our mental diet. It is this outdated idea: some foods are good and others are bad.
Strange as it may seem, the concept of good and bad foods is largely without scientific basis. As we have seen, the metabolic value of any food is deeply influenced by factors that are not inherent in the food itself but depend on the person eating it: relaxation, quality, awareness, pleasure, and so on.
**In truth, there are no good foods or bad foods.**
Let me explain.
Yes, clearly some foods support your health and others undermine it. When I say there are no good or bad foods, I mean that no food is good or bad in a moral sense. In other words, no one could claim there is a wicked conspiracy between bacon and eggs to raise our cholesterol. Nor has anyone been able to assure us that our salad was sent by angels. Foods are morally neutral. The same is true of any other object in the universe. Is a baseball bat good or bad? It depends on how it is used. It can hit a home run and send thousands of fans into delirious joy, or it can become an instrument of destruction, smashing a car window and ruining its owner's day.
Is a given food good or bad? It depends on how you use it. This distinction is crucial if you want any chance at a happy relationship with food and with your body. Much of the unhappiness that pollutes our emotional atmosphere around food is the fallout of moralizing about it. Because if you decide to label a food "bad" and then you eat it, what does that say about you? That you are a bad person. And, as we all know, bad people must be punished severely so they never think of doing wrong again. When we moralize about food, we put ourselves in the strange position of being both defendant and judge. We may sentence ourselves to a dreary low-calorie diet, to extra doses of punishing exercise, or simply to the traditional feelings of guilt, shame, and self-abuse. All of this, of course, creates a state of physiological stress, and you know what that means for metabolism. The point I am driving at is that the various remedies we devise for our "crimes" actually produce a far worse outcome than the crime itself. (Politicians and lawmakers, take note.)
Something else happens when we label a food good or bad: we shut down the process of inquiry and discovery. We stop being curious. If a colleague tells us the new guy at the office is an idiot, we have already attached a label. We may never get to know him, and so we might lose the chance at a good friendship. The same goes for food. If we label sugar as bad, we stop investigating the nuances and complexities of this food. Are all kinds of sugar undesirable, or are some better than others? Does eating sugar in combination with other foods mitigate some of its negative effects? Does sugar produce a different reaction in children than in adults? Moralizing about anything, or anyone, severely limits our knowledge of the world and keeps us mired in fear, ignorance, and judgment.
Nowhere is this better illustrated than with alcohol. We Americans have a very curious moral relationship with this substance. We drink it, enjoy it, abuse it in staggering proportions, and our scientists cannot agree on whether it is medicine or poison. (Hint: it is both.)
So, is wine good or bad? It depends on how you use it. Only you can determine the right dose for your body. Some people do well with a couple of glasses every evening. Others report that although they once tolerated alcohol quite well, a small amount now leaves them tired. These are natural changes in the body that each person must determine for himself or herself. You are the sole expert on your own well-being. And as you allow yourself to develop this natural knowledge, along with your curiosity, you will become more adept at judging when to heed the advice of other experts.
When we talk about the power of the mind over food, we are entering new scientific territory. By and large, researchers have not weighed in on this subject, because there is little interest in it and because it is difficult terrain when it comes to designing a valid study. The proof I need, however, I find on the front lines. Working directly with people and watching their health change or their weight transform simply by shifting their negative beliefs is living proof.
Krista, a 37-year-old administrative assistant, had spent a lifetime yo-yo dieting, her weight fluctuating between 140 and 152 pounds. When dieting, Krista ate the foods she considered "good": yogurt for breakfast, a salad for lunch, a sliver of chicken for dinner, and no dessert or sweets of any kind. But if she dared stray from her diet and give in to the "bad" foods (bread, ice cream, pizza, and junk snacks), she lost control, punished herself, lived in a state of anxiety, and binged in secret. She gained weight and lost her dignity. In Krista's mind, she was being good or bad based entirely on what she ate. There was no middle ground. Krista desperately wanted to stop dieting and stay at her desired weight, but after nearly two decades without lasting results, she felt hopeless.
I suggested to Krista that the best thing she could do to get the results she wanted was to focus on what most needed changing: her way of thinking. Specifically, she needed to discard every thought that sorted foods into "good or bad." That was the root of her problem, and it locked her into a battle with her own biology, setting off a cascade of damaging behaviors whose result was not less body fat but more.
I asked Krista to assume that foods were neither morally good nor bad, but neutral. I asked her to stop seeing herself as a bad person whenever she ate a "bad" food; in other words, to stop punishing herself. I also asked her to take a fresh view of food and regard it as her friend. Discarding outworn thoughts and trying on new ones is like changing clothes. It is not that hard; you simply have to try. Krista agreed to do her best to welcome her meals and relate to them in a new way. In doing so, she also opened the door to the metabolic benefits of relaxation, awareness, pleasure, rhythm, and quality. She got good results because she let go of a way of thinking that had held her in deep physiological stress, the very kind of stress that raises cortisol and insulin and packs on weight.
Krista finally stabilized her weight at just over 140 pounds. Most important, she felt free and able to enjoy her meals, and she regained her self-respect. All of it began with changing a single thought that had been limiting her metabolism.
_Motivation, Exercise, and Metabolism_
I have another story to share with you about the metabolic power of thought. It concerns two clients who gave me one of the great learning opportunities of my professional career. In my early days as a nutritionist, a New York physician referred a 48-year-old woman named Toni to me. He warned me that she was a difficult patient who wanted to lose weight but couldn't. He had run numerous tests on Toni and found nothing wrong with her; he had suggested various diets, but she couldn't lose a single pound. What stood out most about this case was that Toni was a marathon runner. She ate barely 1,300 calories a day, ran eight to ten miles a day during the workweek and about 15 miles on Saturdays, which made her a legitimate candidate to lose 15 pounds.
When Toni walked into my office, I was surprised to see that she looked nothing like a marathon runner. She was short, plump, and very agitated. I had never seen anyone in such a panic about her weight. Toni had spent thousands of dollars on blood work and every kind of physical exam to find out what was wrong, but no health problem was ever detected. She was a very intelligent and successful woman, but she could not explain why, after a year of training, exercising so much and eating so little produced no results at all.
After asking Toni some questions, I quickly determined that, contrary to my suspicions, she was telling the truth. She really was running marathons and keeping to a brutal diet.
I was sure I could help her. Toni's diet was clearly deficient in protein, fat, and calories, which put her body into survival mode and slowed her metabolism. She ate quickly, took no pleasure in food, and rarely had a nourishing meal. There was plenty to work on. I told Toni it would take eight sessions over two months before she started losing weight. I explained that she needed to eat more, include more fat and protein in her diet, and learn to relax and enjoy the pleasure of food.
Toni looked at me as if I were crazy and insisted that if she ate even a little more than usual, she would definitely gain weight. She made it clear she didn't believe me, but admitted she was at her wits' end and willing to try anything. She also made me swear the new regimen would not make her gain a single pound. Without my asking, she handed me a check for the full price of the eight sessions and left my office more agitated than when she had arrived.
Two weeks later, Toni weighed six pounds more and threatened to sue me. Her worst nightmares had come true. I was devastated. Toni's lawyer began sending me intimidating letters. I quickly refunded her money, apologized profusely, and the whole matter was put behind us. But I never forgot her case, and it continued to puzzle me.
Seven years passed. Then a woman came to my office who could have been the sister of that marathoner I still remembered. Sheila was another very successful woman, a stockbroker in her forties, short, plump, and healthy; she was an accomplished marathon runner who could not lose a pound. Left to my own devices, I would have instantly referred her to another specialist, but several of her friends who had used my services had told her how wonderfully things had gone for them, so Sheila was eager to try her luck with me. I couldn't refuse to see her, yet I couldn't think of any strategy different from the one I had tried without success seven years earlier. You would have thought the universe was mocking me.
I gave Sheila the same advice I had given Toni: eat more, especially more fat and protein, and eat calmly. In two weeks, Sheila gained four pounds. I felt like a fraud and was ready to turn myself in to the authorities. But, surprisingly, Sheila was neither angry nor discouraged. She was so inspired, and had such a positive attitude about the benefits her friends had gained from my advice, that she was certain I could find a solution.
That is when I had the important realization I mentioned earlier. A sports physiologist friend explained to me that intense exercise can produce a reaction very similar to stress. Yes, aerobic exercise is excellent and has a long list of remarkable metabolic benefits. I know, because I value exercise highly myself. But in the wrong context, physical exertion can wear us down, raise cortisol and insulin levels, generate inflammatory chemicals, and keep us trapped in a survival metabolism in which we vigorously store fat and inhibit muscle growth. According to conventional wisdom, weight is determined by calories in versus calories out; in other words, the more you exercise, the more weight you should lose. But in reality, exercise is more nuanced than that. Kenneth Cooper, MD, the grandfather of the fitness movement in the United States and a former advocate of intense exercise, has done a 180-degree turn on vigorous aerobics. The results of his research at the Cooper Aerobics Center in Dallas, Texas, have been so surprising that I believe anyone who does high-intensity exercise should take note. In essence, Kenneth Cooper found that low- to moderate-intensity exercise for no more than 30 minutes, three or four times a week, is the best prescription for maintaining health, weight, and fitness.
On Sheila's next visit, I asked her why she ran marathons. She answered that she had to do something to stay fit and that she liked running. I asked whether she really liked running that much, or whether there were other forms of exercise she might enjoy more. She grew uncomfortable with my questions and took offense when I suggested that she secretly hated running. But our conversation finally reached an honest conclusion: Sheila ran to punish herself for the fact that her body gained fat easily. She wasn't exercising out of a love of movement; she ran because she hated the extra weight. In my view, the intensely fearful thoughts driving her were triggering a physiological stress response. Her fight-or-flight state was amplified exponentially by a form of exercise that wasn't right for her body and that, in fact, further fueled her stress chemistry. Running was not going to get her where she wanted to go, and her weight was the proof.
Sheila understood this and agreed to give up all her marathon training. I asked her, instead of running, to do something she enjoyed. She decided to take dance lessons three times a week and yoga classes another three times a week, and to go for an occasional walk.
Within three months, Sheila lost the weight she had gained in the first weeks of her new diet, plus eight of the ten pounds she had originally hoped to lose. She felt satisfied with her body, relieved not to have to run like a hamster on a wheel, and truly enjoyed her new physical activity.
The moral of this story is not that exercise is bad, but that we need to examine the forces that motivate us to exercise. Healthy habits motivated by fear are ultimately not so healthy. Deeply limiting thoughts can only suppress the metabolism, even if we do intense calorie-burning workouts.
Do you see any implications here for your own style of exercise?
_Week 6: Your Primary Task_
This week is your opportunity to transform thoughts and feelings that suppress metabolism and limit happiness. Your primary task is to identify the thoughts that drain your energy and replace them with thoughts that energize you. Think of week 6 as a fresh start in how you use your mind to support your highest intentions.
**Exercise: Think Nutritionally**
Take pencil and paper and make an inventory of the most common thoughts you repeat to yourself about food, nutrition, and your body. These thoughts are the slogans that together make up your relationship with food and that ultimately support or hinder your metabolism. Use the following questions to help with your inventory. Give specific, complete answers.
What effects do you expect food to have on you?
Which rules about nutrition carry the most weight for you?
What foods are on your list of "good" foods?
What foods are on your list of "bad" foods?
What are your rules about health, weight, and longevity?
What are your fears about health, weight, and longevity?
Do you see food as an enemy, an ally, or a combination of both?
Here are some examples of typical food "slogans":
"Food makes me fat."
"It's bad to feel hungry."
"I don't deserve to enjoy food."
"If I eat what I want, I won't be able to stop."
"Eating makes me happy and keeps me slim."
"These vitamins will do me good."
"Salt is bad for blood pressure."
"Salads are good for your health."
"Wine is good for you."
"Wine is bad for you."
"Any food that contains fat is bad."
And so on.
Next, go back over your list and place a check mark next to the thoughts that boost your metabolism and an X next to the thoughts that undermine it. A thought that boosts your metabolism encourages openness, possibility, and enjoyment of life. A thought that drains your energy feels heavy and limiting and pushes you toward self-judgment.
Then, revise the energy-draining thoughts into metabolically inspiring ones. For example, if your thought was "Eating frustrates me," your new thought might be "Eating sustains me." If your thought was "Food makes me fat," your next thought might be "I am letting go of my fears about excess weight." If your thought was "Ice cream is bad," your new thought might be "I can have ice cream or not. Either choice can benefit my metabolism if I choose wisely."
Other positive, energizing thoughts might be: "I trust the wisdom of my body"; "I celebrate my appetite"; "I will no longer punish myself for eating 'bad' foods"; and "I have decided to eat in a relaxed way." Your task is to fill yourself abundantly each day with new, inspiring thoughts. Make these affirmations while you eat. Repeat them to yourself before going to sleep. When a contrary thought crosses your mind, correct it with care and mercy. In general, let go this week of moralistic ideas about good and bad foods and allow the wisdom of your body to determine what is best for you.
Monitor your thoughts as carefully as you would monitor your food intake on a strict diet. Instead of letting your thoughts define you, reclaim the power to control what happens in your mind. To the best of your ability, let go of every negative thought about food, weight, and your body. Stop the flow of toxic chemicals created by your internal "pharmacy" in response to fearful thoughts. Freedom and vitality will be the inevitable results.
**Exercise: Change Your Core Beliefs**
Your next task is to identify and list the limiting core beliefs you hold about food, the body, health, and sexuality. This is a way of going even deeper into how we think. It is a matter of uncovering the negative mantras we silently and unknowingly repeat to ourselves. These hidden mantras are the computer programs that drive the brain and body to construct a metabolic world of misery and deprivation. Identifying and correcting them is a major step toward empowering the body's chemistry. Here are some examples of limiting core beliefs.
"Algo anda mal con mi metabolismo y no podré ser feliz mientras no lo arregle."
"Nunca nadie me podrá querer de veras si no tengo el peso perfecto."
"Nunca me basta con lo que este mundo me ofrece: nunca hay suficiente amor, ni satisfacción, ni dinero, ni comida."
"Yo no escogí este cuerpo, ni esta apariencia, ni las circunstancias de mi vida. La vida que tengo no es la que me corresponde."
"El tiempo no me alcanza para alimentarme. Las necesidades de mi organismo vienen de últimas."
"No puedo manifestar toda mi pasión y sexualidad. Sería peligroso."
"Estoy destinado a padecer la misma enfermedad que padecía mi madre [o padre]."
"Si pudiera encontrar la dieta perfecta, la forma perfecta de comer, entonces sería feliz."
"El pasado siempre se va a repetir. Es inevitable que me decepcione con los métodos para bajar de peso o con los intentos de mejorar mi salud."
"El mundo está en deuda conmigo. No he recibido la parte que me corresponde. La gente, la vida, el propio Dios, están en deuda conmigo."
"Soy una víctima. Los acontecimientos infelices en mi vida han sido injusticias que se me han infligido. He sido ultrajado. Nada de esto es culpa mía."
"Soy un impostor. No soy suficientemente bueno. Tengo que fingir que soy alguien que en realidad no soy. Si las personas pudieran conocer al verdadero yo, me abandonarían por completo."
How can you uncover your limiting core beliefs? It takes some introspection and a lot of honesty with yourself. Find a quiet moment to reflect on the question: "What are the deepest fears that govern my life?" Your limiting core beliefs will come to light as you answer. Sometimes a powerful question like this needs to steep for a few days. Pay attention to your dreams. Allow the answers to surface into consciousness. You may find that you can clearly identify one negative core belief, or even a handful. Once you have them down on paper, stop feeding them and restructure them into uplifting, inspiring beliefs. Next to each negative belief, write its healthy counterpart. If the old belief was "No one will ever truly love me or find me attractive unless I have the perfect weight," your new core belief might be "I am attractive and desirable just as I am." If the old belief is "I live in the wrong body," you might replace it with "My body is the perfect vehicle for me to learn the lessons of love and grow as a person."
Repeat these new affirmations to yourself every day, reflect on them at night, tape them to your refrigerator, or ask a loved one or a friend to repeat them to you as often as possible. Transformation begins when you truly absorb and test this new way of thinking and being. You will find success through measured, conscious, sustained effort throughout the week. If you find yourself falling back into the old beliefs, carefully redirect your thoughts. This is a profound way to change your inner world and, therefore, your body's chemistry.
**Exercise: Inspiration Inventory**
Take a moment to examine why you do what you do when your health is at stake. What motivates you to follow a healthy diet? Or to take vitamin supplements or medications? Why do you exercise? What forces operate in your inner world to drive you to action? Make a list of all the strategies you use throughout the year for the sake of your health. Then, next to each strategy, note whether its motivation is fear or love. Do you follow a healthy diet because you value health, or because you dread disease? Do you exercise because you enjoy movement and the feeling of being fit, or because you hate body fat?
Next, examine the distinction between motivation and inspiration. Motivation, though potentially a positive trait, is often used to push ourselves into acting in ways that are not truly aligned with our core values. People who describe themselves as "highly motivated" often suffer a great deal of stress and feel physically exhausted from chasing goals they never quite reach. Inspiration, on the other hand, is a faculty that comes through us but does not seem to originate in us. It is expansive, life-giving, and infinitely abundant, and it enriches us metabolically. How does inspiration register in your body? Does it change your metabolism?
During week 6, your task is to practice the eating principles you have learned out of inspiration. Eat quality food with relaxation, awareness, pleasure, and rhythm. Do it not out of fear of fat or disease, but out of love for living a healthy life. Perhaps the easiest way to feel inspiration is to invoke it. Ask it to enter through your heart. Recall a time when you felt inspired to nourish yourself with good food and to care for your body. What were the circumstances? Where did your inspiration come from? How did you sustain it? Visualize the person you were when you had that inspiration, feel in your body the sensation it produced, and invite that inspired self into the present moment. From there, make a list of anything you can do this week (especially the small things) that inspires health. Practice those things with gratitude and a smile.
**Exercise: A New Way to Move**
Many of us approach physical exercise not so much out of love of exercise but as a scolding for having gained body fat, or simply for having eaten. In that case, even if we gain some of the benefits of exercise, our hidden world of fear and self-judgment will keep our metabolism from reaching its potential.
Similarly, many of us who choose not to exercise also make that decision from a place of judgment and punishment. We abandon our bodies out of shame or grief, or out of the false belief that once we have lost control of our appetite and exercise, we can never recover. We secretly believe we deserve nothing better.
It is time to draw closer to our own hearts and souls, examine what most often stops us (our fears), and administer the right medicine: compassion.
It helps to distinguish between "movement" and "exercise." For many, the idea of exercise carries the connotation of an imposed, repetitive punishment: something we have to do but don't enjoy doing. Movement, by contrast, is the antidote to exercise. It is a celebration of the body. It is inspired and natural and springs from cellular joy. The very same exercise, say jogging or using the StairMaster, can be done out of love rather than punishment. It all depends on how we think.
In your journal, write thoughtful answers to the following questions:
Do I exercise because I love movement?
Might I be using exercise as punishment?
Have I abandoned my body when it comes to movement or exercise?
What specific judgments have I made about my body?
Do these judgments serve me in any way?
What keeps me from moving in an inspired way in my daily life?
What would my life be like if joyful movement were an everyday thing?
What kind of movement or exercise would inspire me?
During week 6, transform the way you exercise. Let celebration define your movement. Commit to finding happiness in your physical dimension. The procedure is simple: observe your thoughts while you exercise. When you notice the inner critic taking over, lovingly replace it with the forgiving, graceful dancer. In this way exercise, like eating, becomes a meditation on awareness. As you do with meals, breathe deeply and attentively to the rhythm of your movements. This brings us into the present and into an authentic relationship with the body. You don't need to change the type of exercise you do. What has to change is you, the one doing the exercising.
Monitor your own workouts to see whether you are receiving pleasure. Many people find that once they become aware of their negative mental biases against exercise and release them, they have more energy, more endurance, and a pleasant readiness of body and being. For extra credit this week, find a new way to move your body. Look for an exercise or body discipline different from your usual way of moving. There is no shortage of options today: Pilates, Gyrotonics, Feldenkrais, Nia. Don't think of them as substitutes for your daily routine, but as additions to it. If you are used to aerobics, add light weight training. If you favor competitive forms of exercise, choose a more artistic form of movement, such as dance or tai chi. If you tend toward intense exercise (weight lifting, high-intensity aerobics, and so on), try lower-intensity options such as yoga, stretching, and swimming. Trust your body this week. Let it move. Ask your gut intelligence what your body wants, and listen to it more deeply than ever before.
The key to accessing the metabolic power of thought is to become aware of your thoughts and then choose to change them. Watch your mind with persistence and patience. Affirm the thoughts that give you energy and gently release the ones that drain it. Let go of all nutritional concepts based on judgment or fear. Invoke inspiration when it comes to your diet. Practice self-acceptance. Above all, believe in the power of the mind to steer your metabolic destiny well in every moment.
_Key Lessons_
• What we think is translated electrochemically into physiological responses.
• The act of thinking is therefore one element of our nutrition.
• Negative thoughts about food directly inhibit digestion through nervous processes, hormones, neuropeptides, and other biological substances. Positive thoughts about food enhance digestion through those same mechanisms.
• The placebo effect is concrete proof that our thoughts, beliefs, and expectations can influence the metabolic effect of a food or supplement.
• The source of our motivation has a strong influence on metabolism. Healthy activities motivated by fear can yield poor results, while the same activity motivated by inspiration can yield far more positive ones.
WEEK 7
The Metabolic Power of Story
_The universe is made of stories, not of atoms._
MURIEL RUKEYSER, POET
Have you ever heard a story that inspired you or changed your life? Or that lifted your spirits or gave you hope? Stories that move us are like powerful medicines that quicken our metabolism. Within each of us is a hidden storyteller who gives its own interpretation to every aspect of our journey. And that interpretation, whether positive and life-affirming or negative and nihilistic, sets our metabolism in motion and triggers a biochemical process in the image and likeness of our inner world. As we become more skilled at noticing the secret stories we unknowingly tell, and more willing to "author" a generous, healing story, our metabolism rises to meet the new standard we have set.
Let us examine how we can harness the metabolic power of story.
_The Story of DNA_
If you consult a physician who has a genuine interest in healing and knows what he or she is doing, the most important and illuminating part of the visit will be the taking of your history, that is, your story. Who you are: what family you come from; what you eat, drink, and dream; where you live; how you work and play; what your relationships are like. Every detail about you is a window into your metabolism. Your complete history is your story, and your story is everything.
Perhaps the most important storybook in the human library is our DNA.
At the molecular level, our genetic material reveals enduring and engaging information. The story of our DNA consists of 23 chapters, also known as pairs of chromosomes. The roughly 30,000 genes contained in the 23 chromosome "chapters" make up the subplots, characters, twists, and turns of our human book of life. Fortunately, there are many different endings and possibilities in our genetic destiny, because we ourselves choose many of the variables that influence the expression of our genes: what we eat, how we exercise, where we live, how we live and love.
If you believe in the science of genetics, then you believe that the phenomenon of story is built into the body and is the essential reality of our being. If Shakespeare was right (and I imagine he was) when he said that "all the world's a stage," then the roles we play and the chemistry that defines us can only be one and the same. Like it or not, we are characters in a universal play of which we are coauthors. The plots we weave are the food that fuels the body and animates our experience. Our story takes its place in the "director's chair" found in every cell and organizes the molecular production crew to create the movie of our life. The effects of story are felt from the densest levels of biology to the most rarefied atmosphere of the soul.
What I am suggesting is this: DNA is nothing more than the biochemical equivalent of a story, and our personal story is the subtle equivalent of DNA. In other words, matter and energy once again playfully trade places. Fortunately, you don't have to depend on the map of the human genome to receive the benefits of genetic engineering. Changing your story is a far safer and wiser method of redirecting your DNA and, therefore, the course of your metabolism.
_Who Is Eating?_
If you want to see the metabolic power of story in action, you need only examine one of your most valuable possessions: your personality. Contrary to popular belief, neither you nor I can legitimately claim to be one person. Each of us is, rather, a multitude. Each person is a collection of personalities and archetypes: mother, daughter, sister, lover, witch, goddess, virgin, prostitute; father, son, brother, warrior, king, murderer, victim, clown. The list, of course, is endless. Each of these characters has its own story, and each plays a role in service of the overall story of our life. In fact, many psychologists now suggest that, in a sense, this multiplicity of personalities is the most accurate model of how we actually function. In other words, the person you call "I" is really a collection of different people, and who that "I" is depends on who is in charge at any given moment.
Surprisingly, researchers have found that in patients with multiple personality disorder, each personality has its own distinct physiology. Unique, measurable variations in heart rate, blood pressure, galvanic skin response, and hormone levels can be observed depending on which personality is dominant at any given moment. For example, one person had been clinically diagnosed as an insulin-dependent diabetic, but only in one specific personality. Another patient had a severe citrus allergy that caused hives all over her body, but this too occurred only in one personality. The researcher could watch the hives disappear when the patient switched to another personality.
If it seems far-fetched to say that each distinct personality inhabiting these people has a distinct metabolism, consider that science has already established that each mode of consciousness (waking, sleeping, dreaming, stress, relaxation, and so on) has its own chemistry. Because we are biochemical beings, every cognitive state has its biochemical equivalent.
What people with multiple personalities teach us is that the story we live and the metabolism we experience are part of the same tapestry. At any meal, or in any moment, one of the many characters who dwell deep within us is sitting at the head of the table. It has its own peculiar habits, its own unique needs, and its own particular nutritional metabolism.
Jeannette confesses that she loves sponge cake but avoids it entirely because its sugar gives her a hypoglycemic reaction. Yet when she visits her grandmother, who always serves her sponge cake, on those occasions it causes her no problem at all. During Jeannette's childhood, her grandma and sponge cake were one and the same, and the memories of those visits are very special to her. Could it be that her "granddaughter personality" is better at regulating blood sugar?
Sarah, a business consultant, says: "I have two stomachs, one kosher and one not. At home I strictly follow Jewish dietary law. If I eat non-kosher food in my apartment, I'm overcome with nausea and the urge to vomit. But at business lunches I can't always afford the luxury of keeping kosher; in those cases, something inside me takes over, and then I can digest any food without any problem."
The central question here is this:
**When you sit down to eat, who is eating?**
Jack, a 29-year-old engineer, complained of poor digestion, heartburn, and an inability to lose weight. With a family history of diabetes and cardiovascular disease, he considered it essential to lose fifteen pounds. The problem was that Jack had no willpower. He would eat well for several days, and his digestion would work fine. Then he would let himself slide back into a diet heavy in cream cheese and french fries and light on vegetables, which caused him intense gastric distress. Jack, with his methodical engineer's mind, could not understand why he ate against his own wishes.
I realized that part of Jack's personality was clearly getting in the way, and I suggested that for several weeks, before starting any meal or snack, he ask himself a simple question: "Who is eating?" I explained that our inner world may be inhabited by distinct archetypal characters, and that it might serve him to identify exactly who was at the table at any given moment. I asked him not to fight any of those voices, not to judge, dominate, or modify them in any way. He was only to observe and gather information.
Jack found this both amusing and intriguing. He took the advice seriously, and here is what he discovered: "When I checked who was eating, I saw that my rebel personality is always present when I'm breaking the rules, and that's the personality that suffers the heartburn. It takes over whenever someone tries to boss me around or impose rules on me. I always thought I had no willpower around food, but I do: it lives inside my rebel personality. I just have to find a way to make it work for me, not against me."
In very little time Jack learned to listen to his inner rebel, to talk with it, to understand and accept it, and to give it what it needed so that Jack could get what he needed, too. As long as Jack let the rebel break a rule once or twice a week, everyone was happy. He realized that it was actually the rebel who supplied his strength and his lively character. His digestive problems improved significantly within a few weeks, and he slowly lost the weight over a four-month period.
Think of some of the various personalities you have: the different faces you wear with friends or family, at work or on vacation, and the hidden sides that surface when the circumstances are just right. How do these personalities differ in their food preferences? Do noticeable changes occur in your body depending on which personality is in charge? Do you notice any change in your digestion?
Can you see how the phenomenon of multiple personality can influence your everyday metabolism?
_What is your story?_
Luccia, a 35-year-old mother of two, had difficulty losing weight. She also had chronic headaches and steadily worsening allergies. Like many educated, dedicated people I have met, Luccia had tried a number of conventional and holistic strategies without success. She was highly trained and an excellent special-education teacher, and at that point she was devoting all her energy to her family. It did not take me long to realize she was living a martyr's life. Her daily routine consisted of cooking, cleaning, driving the children everywhere, fixing their snacks, tending to her husband before and after work, and visiting her elderly mother. Luccia never sat down to eat, nor did she prepare her own meals. She ate her children's leftovers, and she ate arrhythmically, under stress, without pleasure or attention; in other words, in the typical American way.
As she told me more about her home life and her inner world, she revealed a hidden plot. Luccia lived in service to men and felt inferior to the men in her life. She had grown up in the north-central United States, but her husband came from another culture, one in which women were traditionally expected to do everything. She silently accepted her husband's retrograde demands with a smile, in a way that was at odds with her true feeling that the situation was unjust. Her fourteen-year-old son had taken on his father's habits and did nothing to help around the house. The son she loved was growing up to become the kind of man who made her feel insignificant and worthless.
For Luccia, diets and exercise did not work, and medications merely masked her symptoms, for one reason: her story was toxic. Her core beliefs were: "I don't matter. My needs come last. Men are superior to me. That is my lot in life, so I must bury all my feelings of pain and protest and take everything with a false smile." This story burdened her metabolism with a physiological stress response, which in turn affected her digestion and her calorie-burning capacity and contributed to her headaches.
When we talked, Luccia felt relieved, and tears came to her eyes at seeing that a man could recognize her story of wounded pride. Quickly and intuitively, she saw the connections between what was happening in her body and the way she was writing her life story. I suggested that the main change she needed to make was to begin believing that she did matter, that men were her equals, and to act on that basis. From this new starting point, we defined some simple steps she could take: require her son to cook, clean, and help care for his younger sister; have her husband fix his own breakfast and help prepare his lunch; and choose one night each week for a romantic dinner at a restaurant of her choice. This weekly dinner would be an opportunity for her husband to treat her like a queen.
These strategies might seem mundane, but they were in fact full of meaning. Luccia was helping rewrite not only her own story but that of her immediate family, and perhaps even those of the generations before her that had helped pass these stories down to their descendants. Within a few months, Luccia lost the pounds that had troubled her so, and her digestive problems and headaches were reduced to a minimum. She carried less weight on her body because her story had become lighter.
Can you see how, beneath every problem we may face with food, health, or weight, there is a story that shapes our metabolic reality?
Rewriting our stories is not only a radical act of self-respect but a powerful form of self-initiation. In high school and college, other people decide whether you meet the requirements to graduate. In the school of life, you have the final word. "Graduating" from life means you are the one who decides when it is time to elevate yourself. You choose to become the playwright of your own journey and to write a tale true to your heart.
Indeed, whatever spin we give our story (happy, hopeful, paranoid, positive) is precisely the way our molecules set themselves in motion and take on those qualities at the cellular level. It is no accident that particle physicists, when speaking of subatomic particles, talk about properties such as charm and strangeness. Give your story about your health, weight, food, exercise, or life a negative spin, and you will have created the precise conditions to deplete the body's resources and age it at high speed. That is because negativity creates a physiological stress response that contributes to oxygen depletion, free-radical formation, and the production of chemicals that are inflammatory, mutagenic, autoimmune-triggering, and cytotoxic. Give your personal fable a positive spin and you will create the chemistry of relaxation and pleasure, which catalyzes oxygenation, circulation, immunity, nutrient assimilation, calorie-burning capacity, and cellular regeneration. The science here is clear and simple.
It is fascinating how our culture elevates its stories to high and honored places (think of The Wizard of Oz, Gone with the Wind, The Lord of the Rings), yet when it comes to scientific research and discovery, story is pushed aside. What I mean is that the validity of a medical study rests on its ability to eliminate every intangible, invisible, enriching factor and flatten every variable so that we can test a drug or nutrient on a theoretically homogeneous population. We thereby eliminate the story. That is why the greatest insult you can hurl at a scientist is "Your evidence is anecdotal." In other words, "There is no real proof. All you have is a bunch of stories."
And yet, stories are all there is. Life is a story. And science is merely one of the stories about the world. When we finally set ourselves to determining how the body really works, the true value of story will be recognized: it is not the flimsiest evidence but the most valid. So deeply have we fallen into the mechanical story of things that the magic of the world has receded. If you do not live in an enchanted body, how intense do you think your metabolic fire will be? Indeed, our inner flame comes alive through the friction created as the soul's story makes its way through the pathways of the body: its nerves, vessels, tendons, and cells. Let the soul's rich imagination take root within you, and you will see your metabolism receive the most vital nutrients it will ever need.
_A nutritional fresh start_
Have you ever looked out of an airplane window and suddenly seen your life from a higher vantage point? The hectic world that surrounded you moments before now seems so small in the context of larger things. The sweeping view has let you step back a bit, softened some sharp edges, and allowed deeper insights to surface. Well, whatever story you are living with regard to food and health, I would like to suggest a way to use that experience to gain a new perspective, wipe the slate clean, and revitalize your metabolism. To do so, we need to look at the body and its nourishment from the highest perspective possible.
The trick I have discovered is to take all the known information about food and nutritional metabolism, all the planet's knowledge about diets, every detail you can think of on the subject, every book and study, and distill it all into a one-sentence story that says it all. Does that sound impossible?
Well, I believe I have managed to condense the master story of nutrition into a single phrase. See what you think:
**You are born, you eat, and you die.**
That is it. That is the essence, and the ultimate proof, of every approach to this subject known or yet to be known, whether it is the Atkins diet, the Zone diet, South Beach, raw foods, junk food, and so on. According to what the scientific community has discovered so far, and based on merely anecdotal evidence, we are all destined to end up the same way no matter how perfect our diet or our physique. I say all this not out of a morbid desire to ruin your day, but because this knowledge has the power to set us free. We secretly invest much of our energy in strategies meant to cheat death and avoid eventual physical demise. But if we can begin right now to accept our final destiny, we can travel our path with more joy and delight and create fertile conditions for the kind of metabolism we wish to express during our stay on Earth.
Keeping in mind the final destination of our nutritional journey ("die"), and knowing that the beginning cannot be changed ("be born"), we are left with an infinity of options in the remaining part of life's program ("eat"). It is a blank slate. An empty page. It is up to you to invent the story, to choose the plot. Each of us enjoys nutritional free will. In the "eating" phase of your existence, there are no rules other than the ones you and I invent. Is it not wonderful that we have so much room to create? That is why we would do ourselves a great favor by asking: "What purpose am I pursuing by eating the way I eat?"
In fact, many people will answer: "Hey, I'm going to die anyway, so I might as well eat whatever I want." Frankly, this option is perfectly valid as long as it has been chosen with full awareness, responsibility, and power of decision. Some people who say this truly feel that way and are happy; others, however, would like to take better care of themselves but lack the means to free themselves from self-punishment and abuse.
At the other end of the spectrum, we have the power to choose a style of eating imbued with deeper meaning. Certainly one of the most thoughtful ways we can evolve our relationship with food, and therefore our metabolism, is to align the way we eat with our larger purpose in life. That means getting in touch with the story of our existence and our reason for being on Earth, and eating in a way that supports that greater mission.
Mission statements are quite popular these days. Nearly every company has invested time, energy, and money in articulating its overall purpose so that its executives and employees know exactly who they are and can operate efficiently and court success. My point is that if companies like Burger King and Jack-in-the-Box can have a mission statement, so can you and I.
Try writing, in three sentences or fewer, the essence of your mission on planet Earth. You may find it helpful to begin by expressing your purpose in the broadest terms, with statements such as: "I am here to raise my children and care for my loved ones"; "I am here to share my love with the world"; "I am here to contribute my talents to making a better world." You can also express it in specific terms, such as: "I am here to help heal others"; "I am here to help people invest their money, create wealth, and support good causes."
Whatever mission statement you choose for now, I suggest you go on to view your diet, your exercise habits, and the care of your body in the context of that mission. In other words, what form and style of eating would serve you best if your mission were to make an important contribution to your family and loved ones? At the very least, it would have to be a nutritional approach that keeps you happy, healthy, and well nourished so you can keep giving your best. If your life's mission centers on your work and career, you will probably want a relationship with food flexible enough to fit your work style while keeping your mind clear and your body light and energized. If your mission includes making the world a better place, then you should choose a way of eating that is kind to the planet, its soil, and all the creatures living on it.
Look back at your mission statement and list in your journal all the specific details and general attributes of how your daily meals would unfold if they were aligned with your higher purpose. Once you have this written down, you will have a guiding star for your metabolism. This will make it easier to work with yourself rather than against yourself. And yes, there is room for your more personal and selfish desires, too: a slimmer body, a sexier physique. But now you can shift to a perspective closer to the soul, because you will put first what needs to come first. In doing so, you may find that some of the burden you have carried in pursuit of metabolic perfection melts away. Paradoxically, by releasing that fear-based approach, you free your metabolism, which naturally allows it to reach a higher level. After all, how are you going to lose weight if you cannot lighten your load? And how are you going to stoke your inner metabolic fire if you are constantly dampening it with your fears?
_A happy dietary ending_
Another important way to maximize the metabolic power of story is to invent a happy ending for your eating. Better still, I want to show you a method that can practically guarantee this result. Here is the procedure.
First, make a list of all the benefits you expect to receive once you have reached your goals for food, weight, energy, fitness, and health. In other words, why do you want to eat better? Why do you want to lose weight? Why do you want to get in better shape? To have more energy? More health? Take a moment to write down all the benefits you expect to receive when you have arrived at your desired destination. Some typical benefits people hope to gain from a healthier metabolism are "having more energy"; "feeling better about myself"; "feeling lighter"; "wearing the same clothing size as before"; "being more attractive and desirable"; "having more self-confidence"; "finally being the person I really am"; "accomplishing more"; "having a better life experience."
Here is the trick that will guarantee your happy ending:
**Whatever benefits you expect to obtain at the end of your dieting efforts, simply "receive" them at the beginning.**
If you think you will be happy once you lose ten pounds, be happy now. If you imagine you will have more energy when you finally eat properly, have more energy now. If you believe you will be more confident and that others will find you more attractive once you have the body you want, feel that way now. Whatever benefits you expect at the end, produce them at the beginning. Take on that personality, that role, that story. Act as if you were already the person you wish to be. We revere Hollywood stars for their ability to make us believe in the stories they tell and the characters they play. It is a fabulous talent that, believe it or not, you and I possess as well. Act as if you were the person you want to be, and you will not only convince the rest of us: you will prove it to yourself. You will literally generate the physiology of the character you are playing, for the metabolic power of story is that potent and that real.
If you go back over the list of benefits you expect to receive, you will notice that nearly all of them are a choice you can make this very instant. Indeed, perhaps the greatest benefit of any diet or exercise program, the benefit above all others, is that we will be happier. Why wait? Choose to be happier now, to have more energy and lightness now, to be more sensual now, to act in healthier ways now, and you will find you have already reached your final destination. Then, magically, by having generated the end results at the beginning, you will have created the precise metabolic environment for those benefits to actually materialize and take even deeper hold.
You will have cultivated the chemistry of relaxation, pleasure, deep oxygenation, awareness, independent thinking, and harmonized rhythm, all of which fan the flames of our metabolism. The chemistry we create along the way influences the chemical conclusion we reach at the end. Do you truly believe it will be easy to lose weight if the story you are living produces the physiology of self-judgment and negativity? Do you honestly imagine that the right nutritional approach will give you more energy while the story you keep living drains it? Even in the worst case, if you choose to be happy from the start and then fail to lose the weight you wanted, at least you will have been happy.
It takes only an instant for the metabolism to reorganize itself in response to our story. Recall a moment when you were feeling low on energy or metabolism and a phone call or an unexpected visitor instantly lifted your spirits. That person or message held some meaning for you; it let you interpret your story of the moment in a positive, inspiring way, which in turn set your subatomic particles spinning in just the right way to activate your internal feel-good chemistry. We can summon this same metabolic magic by rewriting our stories at any given moment and making the happy ending we have always hoped for happen in the present.
_Week 7: Your primary task_
This week is your opportunity to reflect on how you author your relationship with food and, therefore, how you narrate your body's chemistry. This is your chance to surgically remove the plots that do not serve you and replace them with a healthier, more vital story. Your primary task is to identify your nutritional story, recast it in a higher light, and experience the results.
**Exercise: Your metabolic history**
We begin week 7 with the most important journal entries of the slow-down program. Your task is to describe the entire history of your eating, your diet, and the care of your body. Think of it as a complete biography, from childhood to the present, of your personal metabolic journey. Do this exercise when you have at least one hour free of interruptions. Be kind to yourself as you describe pleasant or difficult memories and tell a rigorous, complete story. Write down everything that crosses your mind without editing it and without trying to "get it right." The following guidelines will help you through the process.
• Describe the various diets, nutritional systems, and approaches to food you have followed from childhood and throughout your life.
• List the beliefs about nutrition and diet you have held most firmly, and the changes those beliefs have undergone.
• Describe your important life experiences with health, energy levels, and illness. What have been your greatest obstacles in these categories? Your most satisfying outcomes? What effect have illnesses, accidents, or the time spent recovering from them had on your personal world and your inner world?
• Describe your relationship with your body (and your image of it) from childhood to the present.
• Describe your experiences of sexuality and sensuality from childhood to the present. What were your main difficulties in this area? Your main achievements? Note any connections between these factors and your relationship with food and body image.
• How have your parents, friends, family members, and partners influenced your experience of food, health, and body image over the years?
• Have you ever felt betrayed with regard to your health or your body? What influence does this have on you today?
• What secrets do you keep most hidden with regard to food, health, and the body? What are your greatest fears? How do these secrets or fears affect you?
• What are your most positive and inspiring memories of food, health, sexuality, and your body? When were you (or when will you be) in the "prime of your life" in these categories? When have you felt the most vitality? What were your most important accomplishments with food?
When you have finished, spend some time rereading and absorbing what you have written. It is a powerful document. Stay present in mind and feeling with whatever you may have discovered. Note any new insights that may have arisen during this process or any connections that may have formed. Does a common theme stand out? Can you more clearly identify the core beliefs that shaped your metabolic past? Have you found any trait of your character you would like to change?
Often what keeps us from shedding unwanted habits in the present is the way we interpret our story of the past. Flying beneath the radar of our awareness, we drag along stories from an earlier time that weigh us down. And the harder we try to flee our unredeemed past, the more it immobilizes us without our noticing.
Once you have reviewed your metabolic history (that is, your old nutritional story) and absorbed its meaning in your life, it is time to look at the past with new eyes.
**Exercise: Rewrite your nutritional past**
Now go back over the recurring themes and plots you uncovered while describing your metabolic history, and edit them with boldness and creativity. What spin can you give your story to make it entirely positive? How can you reinterpret your story so that everything that looked like failures, abandonments, hardships, illnesses, weight problems, diet struggles, and uncertainties can be seen as the perfect path you needed to follow to learn the lessons most important for your soul's growth? Can you breathe love into your story? Can you take up a belief in the goodness of human beings, yourself included? Can you find forgiveness? Or serenely accept what cannot be otherwise? Think of this exercise as an opportunity to make peace with an important part of yourself.
Rewriting your nutritional history in the wise and merciful light of the soul is an unassailable way to free up energy and heal the most persistent ailments and wounds. Once you have created your new, entirely positive story about your nutritional past, spend some time rereading and absorbing it. Return to it during the week to remind yourself of the true story of your body's life, a story written with the wisdom of love.
Have you ever known someone who experienced a dramatic change in health or weight? Someone who was "born again" in their body and is now more vibrant, alive, radiant, happy, and inspired? Do you fully believe that such a transformation can be the mere result of eating fewer calories, exercising more, or taking a feel-good supplement?
I am willing to bet a great deal of money that, regardless of whatever nutrition program, system, or exercise machine that person used, what really fueled the fire of their metabolic rebirth was a new story. Even if you follow the best diet in the galaxy, if you lead a life of fear and are driven by a pessimistic plot, the benefits of all your good efforts will never last long.
**Exercise: Your new nutritional story**
Your next task is to create an entirely new nutritional story to carry you from this moment into the future. This is your opportunity to restart your relationship with food. It is your chance to chart the course of your dietary life from the deepest level and, in doing so, influence the quality of your body's biochemistry for many years to come.
In your journal, write the screenplay of the movie about your food, body, health, and sexuality in which you will play the leading role for the rest of your life. Create an inspiring story you will love living with. What will you eat? How will you nourish yourself? Where will you eat? With whom? How will you feel? What sensations will your body have? How will you nourish others? What will your experience of pleasure be? How will you delight yourself? How will you take responsibility for the Earth? For the plants? For the animals? For people who go hungry? What will your core philosophy of food and health be? What specific principles will you choose to believe in and be guided by? How will your mornings be different? Your conversations? How will you exercise and move? Will you look at yourself in the mirror with new eyes? Will you honor yourself? What will the last chapter of your nutritional story say? Return to the mission statement on pages 175–76. See how it fits into your new story. Also consult the section titled "A happy dietary ending" on pages 177–79. Once again, see how you can integrate that approach, keeping the end goal of all your dietary efforts in mind from the beginning. Describe in writing, in as much detail as possible, the new themes of your life. Then, for the rest of week 7 and beyond, put this new story into practice.
It may help to list the practical ways you can begin to make your new story real. Each day this week, go back over what you have written to remind yourself of your new story. Enjoy this fresh start. Believe in it. Believe in yourself. Ask friends and loved ones to support you in your new role. If you are not sure where to begin, start with whatever inspires you most. And notice in your body and your being the results of your new nutritional story.
**Exercise: Who is eating?**
Here is one final, fun exercise for week 7. Every time you sit down for a meal or a snack, ask yourself: "Who is eating?" Have fun identifying the particular subpersonality currently in command and getting ready to eat. Some of the common characters who will come to the table are the rebel, the child, the victim, the judge, the wolf, the saboteur, the perfectionist, the hedonist, and the teenager. There are certainly many more. With just a little honesty, you will find it very easy to identify the inner character who speaks the loudest and demands satisfaction.
Once you identify a subpersonality, strike up a dialogue and a friendship with it. Ask it what it wants. Why is this character present? Perhaps because you have not given it the attention it deserves? Does it have a message for you? What deal could you strike that would satisfy some of its needs while still tending to the larger requirements of health and happiness for the whole cast? Do your best to understand this voice kindly, learn the lessons it teaches, and acknowledge what it can contribute. If you are the kind of person who sometimes has trouble controlling overeating or a junk-food habit, try this strategy: invite your inner adult to the table more often. Many people believe they need to get in touch with their inner child, but I have noticed that when it comes to food, the inner child spends more than its share of time in the spotlight. Find ways for your inner adult to take part in the nourishment process. Notice how easily this adult character can help you make intelligent food choices, and how gladly it comes to your aid.
Simply asking "Who is eating?" and checking who is actually present and running the metabolism of each meal allows us to better nourish the parts of our personality we most want to feed. It is a rich and rewarding dialogue with the invisible characters who truly inhabit our private universe. Ultimately, no one comes to eat unless you invite them. Are you ready to take full control of your guest list?
_Key Lessons_
• Our inner story holds important keys to unlocking our metabolic power.
• By rewriting our story we can literally transform our health, improve digestion, and boost our calorie-burning capacity.
• Every time we eat, a specific personality or archetypal character sits at the head of our inner table. That character's identity will largely determine our eating experience and how we metabolize our meals. We have the power to choose "who eats."
• Recognize your most important mission in life and let your relationship with food serve that mission.
• Whatever benefits you hope to receive at the end of a diet, you must create and experience those benefits from the very beginning.
WEEK 8
The Metabolic Power of the Sacred
_We are lived by powers we pretend to understand_.
W. H. AUDEN
Have you ever had a religious, divine, or extraordinary experience that affected you deeply? An experience that left you feeling renewed, reborn, transformed in body or spirit? An experience you cannot explain, but that you know happened? If so, you have surely experienced the metabolic power of the sacred.
Because each of us is a radiant soul moving through life inside a biological space suit, every experience of the soul registers within us as a metabolic event. We experience the world because chemistry allows us to. Our feelings of love, for example, owe their existence to a specific chemistry generated in the body that is unique and concretely linked to love. The same is true of feelings of hope, loyalty, banality, cynicism, and every imaginable state of personality. Who we are and what we feel from moment to moment has a precise biochemical equivalent.
Sacred metabolism is the chemistry set in motion in the body when we are infused with the divine. Because the Divine is the source of power underlying all powers, the chemistry created when we experience divinity rises above all the known laws of the organism. Sacred chemistry is a metachemistry. Its effects may include or incorporate known psychophysiological states, such as the relaxation response, the synchronization of the brain's hemispheres, pleasure chemistry, mobilization of the immune system, and others. But its reach definitely extends far beyond what science can explain. When we enter the realm of sacred metabolism we are stepping onto new scientific ground. The most reliable instruments we have are observation, experience, and the light of truth.
Some of the ways sacred metabolism can reveal itself in the body include prayer, fasting, meditation, experiences in nature, sports, yoga, music, dance, a sweat lodge, artistic pursuits, sleeplessness, illness, recovery, near-death experiences, potent drugs, sexual intimacy, stressful events, war, injury, hunting, grief, falling in love, and various kinds of religious ritual.
When the metabolic power of the sacred is activated in the body, a portal opens onto a fantastic array of means of biological empowerment that would otherwise have no way in. History is filled with examples of saints, yogis, shamans, messiahs, and ordinary people with fantastic, legendary metabolic powers. There are plenty of well-documented cases highlighting abilities such as clairvoyance, telekinesis, spontaneous healing, extraordinary strength, and the astonishing intellectual gifts of autistic savants, to name just a few. Yet often what we call anomalous or miraculous is simply a matter of latent biological traits that are activated when we are touched by the hand of the Divine.
This, of course, only leaves us at the threshold. Most of what we know about the capacities of the human form is but a tiny fraction of what is possible. Could it be that the breakthroughs in well-being that medical science has promised for decades, yet still fails to deliver, will come not from anything outside us (experts and technology) but through a coevolutionary relationship with the divine? Is it possible that the fulfillment of your metabolic destiny lies intelligently seeded within you, waiting for you to discover it?
_The Eight Sacred Metabolizers_
What is definitely worth asking is this: how can we harness the metabolic power of the sacred? How can we usefully court its powers? Many believe the answer lies in religious austerity or in intense hours of yoga or meditation. But I would suggest that the sacred operates on its own terms, terms within everyone's reach here and now, and those terms are: love, truth, courage, commitment, compassion, forgiveness, faith, and surrender.
These eight sacred metabolizers (there are certainly more) are sacred because they are qualities of the soul that bring us closer to the essence of divinity, to the intelligence that created us. By embodying them, we become more like the source we came from, and closer to the person we are meant to be and, deep down, know we want to be. I believe that when these eight sacred metabolizers are activated in our system, they can produce profound healing capacities, positive metabolic shifts, and rejuvenating effects on body and spirit.
In essence, these eight metabolizers have classically been seen as qualities or traits, not as material quantities in their own right. Yet I would say that each sacred metabolizer is both force and substance. E = mc². Somewhere in the body, molecules of love are produced when feelings of love are activated. Perhaps they are a type of chemical substance, or perhaps there is a central love molecule around which other molecules cluster and revolve. Similarly, when we feel courage the body creates the chemical equivalent of that trait so that we can experience the feeling. That is the lived reality of the body's biology. Every feeling has its molecular correlate. These substances arise in response to the soul's invocation of those qualities. First comes the thought or feeling, and then the molecule.
Right now, think of a person in your life who drives you up the wall or causes you stress. If you think hard enough about that person's faults, it will be as though you had gone to your inner pharmacist, who promptly supplied stress chemicals to be distributed throughout your body. We create our chemistry instantly, at the speed of light or faster still. Just as the biblical God proclaimed "Let there be light" and there was, we too create ourselves moment by moment. When you say "Let there be anger," the body instantly builds a universe of anger within. When we say "Let there be kindness," the chemistry of kindness is likewise produced.
Such is our power.
Look at your own life and you will probably notice that the higher dimension within which our lives unfold has its own admirable way of calling forth the eight sacred metabolizers: love, truth, courage, compassion, commitment, forgiveness, faith, surrender. These tend to play a central role in the most important lessons and turning points of our lives. The more our soul longs for these qualities, and the more we evoke and create them through our personal efforts, the more these molecules literally accumulate in our system and work their metabolic magic. If this sounds far-fetched, consider that the concept is no different from medications like Prozac. You take a pile of pills produced in an external chemical factory (rather than the body's internal one) and must wait for a critical mass of molecules to build up in your system over weeks before they can lift your mood.
In the same way, the more faith you have, or the more you practice faith, the more faith molecules accumulate in your bloodstream. The metabolic substance of faith activates essential organ systems such as the heart and brain and exerts its effects throughout the body. These may be simple invigorating and healing effects, or profound ones like those that made Mother Teresa or Martin Luther King who they were.
Because the eight sacred metabolizers are experiences, they are "felt" within the body. For the purposes of our conversation we can therefore call them "feelings." And, as with any other feeling, we can only experience them if we feel them. Strange as it may seem, many of us experience these feelings, but only partially. We have faith... sometimes, perhaps. We have love... but only up to a point. We feel compassion... but only toward a few. And we can summon courage... but not when facing our greatest fears. Every time we experience these feelings only partially, we fail to give the soul enough sustenance and literally strip the body of nutrients. We restrict the circulation of the cosmic life force and suppress metabolism. Conversely, the deeper our feelings, the more we empower our metabolism and the closer we come to the Divine.
Just as physical exercise compels the body to build more muscle tissue, use oxygen more efficiently, and increase our breathing capacity, simply being living souls on planet Earth "demands" that we act with more faith and greater commitment, and live closer to the truth. Life itself is the most fitting exercise regimen. The eight sacred metabolizers are as essential to the organism as food and water, and they are literally required in chemical form. If the soul craves love, so does the body. If the soul craves lessons in forgiveness, then our cells hunger for those molecules. If life calls us to compassion, then that nutrient is needed for the body's growth and repair. If you live and breathe, the realms of the Divine induce you to produce the chemistry that will empower your metabolism to the fullest, through the soul lessons that will forge your greatest spiritual strength.
So if you think your nutritional needs can be adequately met by food alone, think again. When the real-life requirements imposed by the eight sacred metabolizers go unmet, the body withers, weakens, loses integrity, attracts disease, and produces whatever symptoms are needed to alert us, with the lesson that the soul hungers for sustenance and attention. We cannot keep turning exclusively to the biological realm to solve health problems that are merely the earthly effects of the soul's affairs and fluctuations.
This is not an antiscientific viewpoint. On the contrary, it is an eminently pro-science call to restore the language of the soul and the sacred to the halls of medicine from which it has been banished. It is time we acknowledged the reality of the Divine, whatever our religious beliefs, and invited the sacred to inform our practices of healing, eating, loving, and our other earthly labors.
Phil, a 52-year-old executive at a computer company, came to me because he wanted to lose weight. He was a six-foot-three giant with a sweet expression and good manners who had gained nearly 40 pounds in a year and could not shed them. Phil was a very warm person who prided himself on cultivating solid relationships with the people who reported to him. His company was going through a reorganization with numerous layoffs, and he was deeply worried about his employees. Phil admitted he had little motivation to lose weight, even though his doctors had already given him plenty of warning signs. He had tried several diets and had even consulted a dietitian, but could not stick to any program. He was puzzled, because he considered himself a highly motivated man, yet in this particular area he lacked willpower. Phil knew he was running into some obstacle but could not identify it, and so he came to me in search of "motivation."
I could see clearly that Phil was troubled by something deeper, and when I asked about the circumstances in which he began gaining weight, he admitted it had coincided with some personal problems at home that he found hard to talk about. His son had been sentenced to five years in prison without parole for selling marijuana. This had been a terrible shock to the whole family, including Phil, whose world fell apart. His son was finishing his bachelor's degree when he was arrested; he was an excellent student, responsible and popular. He had a wonderful life ahead of him. He had never imagined he would be caught selling marijuana. Phil was ashamed: he felt he had failed as a father, powerless to change the situation, and fearful for the well-being of his son, who was clearly not the kind of person who could do well in prison (as if such a person existed).
Phil was facing the greatest existential challenge of his life. His son's imprisonment made no sense to him. It set him searching for meaning in what had happened and wishing for greater understanding. Although Phil did not care for religion, he found himself asking questions and looking for hope in places he had previously had little interest in visiting. Given this circumstance, I suggested that he would find his motivation to lose weight once he found some kind of faith. His metabolic needs would be processed when he finally realized that there is a higher wisdom in life and a larger plan behind what had happened to his son, and that somehow this would be the right medicine for the soul of everyone affected. Once he believed in something beyond, and stayed connected to that belief, he could trust that his son would have guidance and protection. In this way, he could forgive the person he judged most to blame, namely himself.
Phil's excess weight was in fact a hidden cure. It was the remedy for his self-punishment, his feelings of guilt and helplessness and, ultimately, his disconnection from the Divine. His extra 40 pounds represented an abundant reserve of stored energy. According to conventional science, that is what excess body fat is. And so it is: fat is stored energy. But in Phil's case this energy reserve was more than a pile of calories to fuel his workouts. It would be fuel for a new life, a new relationship with the Divine, and a new path in his relationship with his son.
True to his own character, Phil regained his motivation and made the transition to a new story that connected him to the sacred. Interestingly, Phil decided he did not want any dietary advice from me for losing weight. He had regained confidence in his own outlook and wanted no interference from anyone else's methods. He eventually lost 30 of the 40 pounds he had gained. After reaching that point, he decided to make no further effort and to gladly carry the remaining weight until his son was released. That was what felt right to him, that was how he had framed it, and it gave him the inspiration he needed. By tapping into the sacred metabolic faculties of trust, faith, and forgiveness, Phil allowed himself to lose weight by forging a personal connection with the cosmos.
Think for a moment about the times in your life when you have had to show more love, more compassion, greater courage or trust, a firmer commitment, a deeper capacity for forgiveness, and a more encompassing faith, or the times when you had to speak a greater truth. How did those experiences change your life? Did they affect your body? Your health? Your energy level? Did they leave you a permanent, discernible metabolic gift? How can you invoke these qualities so that their healing powers are more consistently available in your life?
_Nutrition Lessons for the Soul_
By now I hope you are beginning to realize that every problem or challenge we face around food and nutrition has a profound spiritual component. Of course, it is useful and necessary to attend to our metabolic needs with the tools of medicine and science. But to truly transform and heal the ills of the body, we must understand the soul's perspective. For this reason, I would like to suggest a way of viewing metabolism from the standpoint of the sacred. It is a radically new version of how we see the physical body. Our prevailing view of health holds that diseases are bad: they are the enemy, the problem, the poison. Any unwanted symptom (excess weight, low energy, digestive pain, and so on) is invariably seen as our adversary.
Allow me to postulate the exact opposite:
**What you believe to be disease is actually the cure. Whatever problem you think you have in your body is, in the reality of the soul, the solution.**
I know this may sound a little strange to some, so let me go into more detail. The ancient Greeks regarded every symptom as a visitation from the gods. Any affliction of the body was divine, a celestial messenger, a secret whispered by guardian spirits to alert us that the soul needed a course correction. The ills of the body were in fact cures for the soul. And whatever healed the soul was a fundamental, necessary medicine for the body. By attending to symptoms, listening to them, honoring them, living with them, accepting their divine character, the soul would find its way through the mists, and the storm clouds raining poison on the body would lift.
Take the example of low energy. What could that symptom be curing? What messages might lethargy and fatigue carry, and in what way could they be a blessing from the gods that heals an ailment of the soul, completing the sacred circle and restoring the physical body? Well, it is interesting to recognize that any disorder that saps our energy is often the only way to make us slow down. For the disease of speed, the cure is low energy. It is a remedy for the times when we pay too little attention to our deepest needs, when we are lost in the daily bustle and forget the simple ways of being and feeling.
Like it or not, low energy brings us into step with the soul's unhurried cadence. It forces us to reflect and take a rest. It prompts us to discover where our energy leaks are and which direction our life truly wants to go. Discover the messages the gods send us through this ailment and you will have found a cure for your life and for your body, the body that had the courtesy to make you slow down and return to your natural state. Do you feel low on energy simply because something is wrong with your body, or do you need to acknowledge the reality that you work too much and must rest? Even God rested after six days of creative work. Do you really think you have a better system?
Whatever the cause, what we consider disease is actually the cure. Even if the catalyst of your fatigue was a food allergy, a poor diet, a parasite, or lack of sleep, you will only find the remedy that restores your health once you slow down, pay attention, care for yourself, seek help, and explore. The soul is happy to use any mechanism whatsoever to alert us to its needs.
And what do you imagine excess weight might be curing? For many people, it is an alarm signaling when life has lost its balance. It asks us to examine our relationship with the earth, with our fellow human beings, and with ourselves. Obesity is not, as they say, a predominantly personal problem. It is personal, yes, but this divine symptom opens onto a more important level of understanding. Excess weight is a companion of industrialized societies and of people in the developing world who consume mass-produced foods. It is the sign of a collective experiment gone wrong. It is the cure for an ignorance that has led us to believe we can advance as a society at a blinding pace, at a speed that keeps us from looking closely at the results of our actions.
We create mountains of waste, produce denatured foods, pollute our waters and our atmosphere, drop bombs on our neighbors, and behave as though the wounds our souls have suffered can be treated by pretending they do not exist. Excess weight asks us to examine how, within the borders of the land of plenty, there can be children going hungry. It implores us to look more deeply at the paradox of being overfed yet malnourished. It asks us to see the hidden heavy burden dragging us down.
Yes, excess weight can be the result of eating too much and exercising too little. But these seemingly simple causes and their easy solutions are nothing but phantoms. To begin with, they offer no explanation of why our habits are out of control. Their solutions pay no attention to the ills of the soul and are therefore ineffective remedies for our deepest concerns.
Americans do not fail to exercise because they are lazy. On the contrary. They overwork. Some people have no time to exercise; others are exhausted. Why move the body if not to celebrate it? Who really wants to exercise in order to punish themselves and inflict pain? Do our health clubs inspire us, with their decor, their music, and their machines? Are we giving ourselves any good reason to run and play sports?
Obesity and excess weight are largely relative. A great many people carry what would be considered excess body fat yet are happy and content with themselves. They have a favorable body image, an active sex life, and good health. In recent years there has been a great debate in the scientific community that will probably continue for some time. Scientists cannot quite decide whether excess weight is a disease, a symptom, a risk factor for other diseases, an unimportant matter with no necessary adverse health effects, or perhaps even a mildly positive indicator of longevity. The answer, of course, is that excess weight is all of these things. The results of numerous studies continue to differ widely because the possibilities are truly unlimited. One may carry abundant body fat for very positive reasons, for very negative ones, or for a combination of both.
Depending on the study you consult, 96 to 99 percent of people who lose weight on a slimming diet regain it within one to two years. Yet few researchers have paid attention to the small percentage of people who stay slim. Remarkably, most of them report that their lives underwent a significant change: a career change, a long-desired divorce, a new love, a spiritual experience, an unprecedented sexual relationship, and so on. In other words, their stories changed, their burdens lightened, and their metabolisms were transformed by the chemistry of the soul.
_Sacred Nutrition_
We can definitely invoke the power of the sacred in practical ways that let us influence the metabolism of every meal. A good starting point is to observe how we use our spiritual power to bless or to curse.
As various authors and experts have shown, prayer, belief, and love can bring about changes, locally or at a distance, in people, plants, food, water, and assorted living organisms. We do not know exactly _how_ this works, but we do know _that_ it works. Many of us can feel when others have "cursed" us with their judgments, slander, and gossip. We can likewise feel in our bodies the palpable, uplifting sensations produced when some distant person thinks or speaks of us favorably. These are real psychophysiological impressions, picked up through the heart's capacity to transmit and receive and converted into the body's chemical substances.
A negative judgment about oneself, such as "I'm fat" or "I'm not beautiful" or "I'm not good enough, strong enough, smart enough," is a curse. When the spell is repeated silently and continuously, it takes hold of us over time. It is literally a neurochemical instruction sent to the brain, modulated by the hypothalamus and the endocrine, immune, and neuropeptide networks, and converted into physical reality in the body. It is a sequence of metabolic commands that can be directed at oneself or at others.
When we bless, we give the Divine a chance to pour through our heart and pump the chemicals of affirmation and life through our vasculature. We send invisible forces into the electromagnetic field around us and beyond, forces that transcend the limits of time and space. At the very least, blessing creates a physiological relaxation response, with all the metabolic advantages that state offers, from increased digestive power to greater efficiency in burning calories.
For all these reasons, praying before meals makes sense both nutritionally and spiritually. Consider speaking a few special words, aloud or silently, of acknowledgment to the creatures and plants that give themselves to you as an offering. Give thanks for receiving sustenance and abundance. If offering a prayer of gratitude for your food feels awkward, simply add a touch of humility. You will find yourself more open and more considerate toward Creation. You will see how, in blessing your food, you receive a blessing in return.
Another thoughtful way to incorporate the metabolic power of the sacred is through ritual. Most of us have daily rituals we perform with little awareness or intention: waking up, blowing our nose, using the bathroom, showering, dressing, drinking coffee, going to work, running errands. Yet when we invoke the Divine, the most mundane acts are elevated. We kindle a gentle metabolic flame that breaks through stagnation and circulates all kinds of energy within us. In this way, ritual is the intentional invocation of the beyond, of the unseen, the ancestors, the spirits. It confers power. It opens a circuit that links us to the sacred and allows the ordinary to be infused with grace.
Among the elements that bring the power of ritual to life are thoughtfulness, reverence, unhurriedness, sincerity, receptivity, gratitude, and humility. Ritual is about offering. We offer our actions to the creative force of life. And we offer our whole being fully because, at some undefined moment, our whole being was offered to us. For many people, rituals carry a certain intellectual risk. That is because we have been taught to dismiss and ridicule them, or have had bad experiences with empty social and religious rituals that made us distrust their value. But that is in the past.
Perhaps the most meaningful and potent rituals are the ones we create for ourselves, in our own way and through our own intimate connection with the cosmos. What are the most meaningful rituals in your life? How do these rituals make you feel? How do they affect your body? Your energy? Your metabolism? Can you think of any activity in your life that deserves more of your attention from a ritual standpoint? Consider introducing new rituals into your daily routine, if only to experiment and notice their effects. Here are a few suggestions.
**Cooking rituals:** Prepare your food with gratitude. Be aware of each act: cleaning, chopping, discarding, serving, arranging. Let your reverence and love for the food become part of the meal and extend to everyone at your table. Slow down. Notice the beauty and timelessness of the experience of cooking.
**Tea or coffee rituals:** As you prepare your brew, be thoughtful, grateful, meditative, joyful (in short, whatever quality takes you deeper and further). Use beautiful dishes and utensils. Pause. Hold your cup with reverent intention. Let your beverages be infused with divinity. Ask that the special gifts of the divine flow through your whole body and your day. Sip your brew with gratitude and presence.
**Offering rituals:** Upon waking, begin your day by offering yourself to the day, to the will of the creative intelligence, to the invisible power of compassion. Offer your body and your actions. Let go of any idea you are clinging to. Ask for a fresh start and be willing to "become" that fresh start. Surrender fully to the unknown with trust and faith.
**Medicine rituals:** Do not just swallow your vitamins, pills, or painkillers: receive them. Prepare to take in the gifts they bring. Pause. Notice them. Before taking your pills, place them in a small, attractive bowl. Consciously acknowledge your reasons for taking these medicines. Give thanks. Infuse them with the power of your belief and ask that the healing force of the unseen work through them with purity and without harmful effects.
**Beauty and hygiene rituals:** Let reverence, gratitude, and a sense of beauty be present as you brush your teeth, shower or bathe, comb your hair, shave, apply makeup, polish your nails, or adorn yourself. Slow down. There is no rush. If there is, give yourself more time. Be aware of who you are and of what you have been given. Stop judging the way the Divine Artist made you. Take the opportunity to show gratitude for your body and its true beauty. Let your acts of self-adornment be an offering of thanks for a gift that one day will be no more.
Alimentos sagrados
¿Ha tenido alguna vez la experiencia de sentirse repentinamente atraído a cierto alimento o bebida, consumirlo a menudo durante días o semanas, y estar convencido de que, a pesar de la extrañeza de su deseo, este alimento era de algún modo una medicina necesaria para usted? Durante la semana 8, compruebe si hay algún alimento que pida su atención de esta manera y, si lo hubiera, eleve su consumo al nivel de ritual sagrado. Reconozca la profunda sabiduría de su cuerpo, confíe en que la orientación divina puede intervenir a través de su sistema nervioso entérico, e infunda este alimento especial de su gratitud y atención. Puede tratarse de toronjas rosadas, o té de menta fresca, arándanos maduros, maíz congelado, jengibre encurtido, chocolate importado o un buen tequila. Sea lo que sea, invoque a lo sagrado para que lo oriente en su uso y compruebe si puede percatarse de las formas sutiles en que esta medicina sana y alimenta su cuerpo.
_Sacred Substances_
When I was doing my graduate studies at Sonoma State University in California, a group gathered spontaneously one evening outside a campus building. A Native American elder was visiting, and his friendly, open, wise manner drew a small crowd. At one point he spoke of the sacredness of everything in creation. He cited the example of tobacco, which he considered a supremely sacred plant and, therefore, one of the most powerful. Just then a skeptical passerby overheard this part of the conversation and snapped: "I don't believe any of that talk about the 'sacred power' of tobacco." The elder politely turned to him with a smile and asked, slowly and clearly: "Then why do you think so many of your people are so addicted to tobacco and let themselves be destroyed by it?"
Everything has power, and that power is relative. Some companies are more powerful than others, and so are some songs, sweeteners, computers, credit cards, lasers, and lawn mowers. The same concept applies to foods, drugs, plants, and medicines, or to any substance that can be ingested. Pharmaceutical companies and drug addicts are always looking for new and more potent substances. The same is true of those who make or consume vitamins and herbs. When we recognize that the power of any substance originates in the sacred, we put ourselves in a position to receive that substance's best qualities. The hand of the unseen guides us in its use. We create the timeless, reverent space in which the greater intelligence can speak to us. Conversely, when we fail to recognize the presence of the sacred, we proceed without awareness, and all kinds of trouble is unleashed.
Take any of the difficulties society faces with the misuse of powerful entities (drugs, alcohol, tobacco, firearms). What you will see is the absence of the sacred. Without intentionally inviting the Divine into the manufacture, cultivation, handling, preparation, and use of these substances, their power is transferred to the realm of the profane, which produces a poisonous chemistry that sends us down a path of destruction and despair.
Tobacco is a sacred plant to many indigenous peoples. It is used in ceremonies to honor the ancestors, to lift prayers to the sky in its smoke, and to acknowledge the interconnection of all people and all things. Every aspect of its cultivation, preparation, and use was made sacred and special. Tobacco addiction was once an unknown concept among indigenous tribes. Yet the "white" tribe took that same plant, engineered it to contain more addictive elements, added hundreds of chemical additives, and then denied for decades its potential to cause disease. The result has been illness, addiction, and death. A sacred substance has been profaned by our actions and intentions.
Consider the example of sugar. If we do not recognize its sacredness (and our culture certainly does not), we are likely to suffer as the dark side of its power unfolds: tooth decay, obesity, insulin resistance, diabetes, brain and heart problems. Lacking an awareness of the sacred, we strip sugar of many of its elements and produce a highly denatured version. We overuse it. Manufacturers over-add it to their products to boost sales. As a result, we become chemically addicted to it and blind to its effects. Yet if you chewed fresh sugarcane stalk every day for years, it might never give you a cavity.
Is sugar bad? Of course not. We simply need to respect it, to step back. Let sugar itself teach us how much to consume and where the limit lies.
We have already mentioned what happens when we consume low-quality meats. In truth, the fundamental factor determining the nutritional quality of any such product is the sacred. That is because animals are sacred, and the flesh of animals is sacred. When we slaughter an animal, one of the most powerful ritual sacrifices, we are invoking the forces of the sacred, or those of the profane. If we leave divinity out of the equation, that is, if we fail to show respect, reverence, and gratitude to the creature, we will have problems. And those kinds of problems will never be solved by more inspections from the food authorities. Whoever came up with the idea of "sacred cows" knew what they were talking about.
Which substances do you demonize? Which substances do you make sacred? Consider that every food, drug, plant, or brew is an expression of divinity. Do you see how the sacred can offer its powers through any existing substance? Are you willing to recognize the sacred in the things you demonize? In the people you place in that category? In the qualities of your own that you demonize? And do you realize that all the powerful foods and medicines in our world have one clear and simple function: to serve as mirrors for all humanity?
Any substance can be made sacred. Even if we decide to eat something clearly devoid of sacred ties in its cultivation and production, we can still offer it our blessings and prayers and ask that its poisons be transmuted. No object or person is so insignificant that it cannot be elevated by our humanity and by the divinity flowing through us.
Of course, we will have little interest in recognizing the divine character of foods, medicines, or plants if we cannot allow ourselves to recognize the sacredness of our own bodies. Our scientific and educational systems have taught us to see the body as a collection of chemicals, untouched by the hand of the Creator. We believe that somehow, somewhere, a very long time ago, a set of lifeless molecules began, at random, to collide and form compounds. These meaningless collisions supposedly continued until, after billions of years, human beings emerged.
If you have bought into this entertaining piece of science fiction, then you will probably treat your body as a biological machine. You feed it, exercise it, walk it, run it, send it in for checkups, tune-ups, and replacement of worn parts. That can work, but only up to a point. Because the human organism is far more than a biomechanical device, we pay a price for banishing cosmic intelligence. Indeed, we end up behaving in ways that are nothing like a machine. Depression, irritability, inner fatigue, unexplained symptoms, and uncontrollable behavior are all signals of the soul asking to be let in.
So, instead of searching for the next miracle in diets, foods, or supplements, why not go straight to the source of the miraculous? Imagine seeing your body through the eyes of the Creator. How would a benevolent Creator see you? How would you see yourself if you knew your body was a sacred vessel? How would your metabolism express itself if you saw yourself in that higher light?
_Week 8: Your Primary Task_
This week is your opportunity to experience the metabolic power of the sacred. Your primary task is to invoke the presence of the Divine in your meals and in your relationship with your body. Commit to creating the space needed for the light of the soul to surface during week 8, and to discovering the many connections between your nutritional life and your spiritual world.
**Exercise: The Prayer Diet**
With every meal, snack, or drink you take this week, without exception, offer a prayer of thanks. Whatever the circumstance or place you find yourself in, do not miss a single serving of the prayer diet. Gently set aside any real or imagined obstacles that would stand in the way of a moment of prayer. Close your eyes, release your thoughts, make contact with your heart, and connect with the Divine. Give thanks for the food and for anything else in your life worth giving thanks for. Invite the others at the table to join in. If you are eating with people who would be uncomfortable with this, simply let them know you will be observing a moment of silence before the meal. If you have children, ask them to say out loud what they are thankful for. Notice how this changes your experience of the meal, and observe its effects on your metabolism.
**Exercise: Choose Your Ritual**
Return to the section of this chapter on sacred nourishment. Choose one ritual to focus on during the week; it may be a ritual of cooking, tea/coffee, offering, medicine, or beauty and hygiene. Make your chosen ritual special. As you practice it, offer your actions and thoughts to the Divine. Invoke the presence of the unseen and of the guardian beings who inhabit your spiritual world. Allow your usual mundane focus to rise to a higher plane. Notice how it feels to invoke the energies of the sacred in your cooking, to call healing power into your tea, to bless your medicines and pills, or to feel your inner divinity as you beautify the sacred vessel you have been given. Seek out the quiet, peaceful, magical place within where your connection to the beyond comes alive. Notice any changes in your body, your energy level, and the quality of your life force.
_Forgiveness: The Most Powerful Sacred Metabolizer_
In my 25 years as a nutritionist, the most powerful and effective strategy I have found for freeing up energy, banishing unwanted health habits, and rejuvenating the body is simply this: forgiveness. I never cease to be amazed at how people who have long suffered from eating disorders, chronic fatigue, digestive problems, and a host of debilitating symptoms find miraculous relief when they forgive people from their past and present. If you have ever been betrayed, abused, or hurt in any way, the feelings of anger, blame, or judgment you hold on to are toxic. Indeed, it does not matter how right you are or how guilty the perpetrator is. The most poisonous chemicals on the planet are the ones we produce inside our own beings. Even though our poison is aimed at someone else, it still lives inside us and corrodes the body with its acidity. Forgiveness has an unequaled healing power. Our cleverest strategies in diet, exercise, medicine, and healing are ultimately ineffective against the murky chemical reality of the one who does not forgive.
**Exercise: Forgive and Heal**
Take an inventory of the people in your life whom you hold responsible for some injustice. Write down on paper every character you are still blaming, judging, and holding hostage in your psychic prison. Search your past and present thoroughly to locate all these offenders. Be sure to include family members, friends, lovers, presidents, and even groups of people ("men," "women," "blacks," "whites," etc.). Once your list is complete, go through it again to make sure it includes the three people you probably need to forgive most: your parents and yourself.
You may already have guessed that your next task is to forgive them all. It is a difficult task. It takes courage, and it is the most rewarding dietary strategy imaginable. That is because forgiveness is a sacred metabolic act. It releases the death grip you have had on your own cells and sets off a chain of chemical reactions that opens the body to receive nourishment in an entirely new way. It really does. There is no secret or special trick to accomplishing the heroic feat of forgiveness. Just take a deep breath and do it. Dive in deep and find the soul lesson that others, with their wounds and betrayals, were helping you learn. Thank them for helping you reach your spiritual limits and for letting you recognize that you truly are a better, more mature person because of their actions.
For extra credit, make a complete list of every detail you have not yet forgiven about your parents and about yourself. Notice any resistance you feel toward this exercise. That is a secret indicator that the exercise is good medicine. It consists in identifying the details or personality traits of your parents and of your own that you still cannot accept, that you still judge, wish to change, or try to sweep under the rug. Once you feel the inventory is complete, compassionately love and forgive everyone. Acknowledge to yourself that this may be a long-term practice, perhaps a lifelong one. As you practice it, notice the changes that occur in your metabolism.
**Exercise: Nutritional Lessons of the Soul**
This is your final exercise for tapping into the metabolic power of the sacred. Focus your attention on any food- or body-related difficulty you would like to change. It may be an illness, a symptom, a concern about your physical image, or a weight issue. Return to the "Nutritional Lessons for the Soul" section, see yourself from the perspective of the Divine, and create a new, positive story about why this issue is your "cure" rather than a "disease." In what way is this problem a gift of the gods? What soul lessons does it have to teach you? Expand your heart and mind as generously as you can so that you can see your life from a compassionate cosmic perspective. Allow the soul's maturity to rise to the surface so that you can see yourself in a higher light. To help you get started, I list below some spiritual reflections on the common concerns of excess weight, depression, fatigue, and digestive health.
_**Excess Weight**_
This is your sacred opportunity to express love for your body, to find unconditional compassion. Befriend your body. Recognize the divinity within it. Allow it to be infused with a higher power. Stop feeling alone in your struggle and let grace in. Trust your path. Forgive yourself. Realize that your weight issues are a blessing, a path you have chosen in order to learn some important lessons about life, for example, "What matters is not how you look, but how you love." Surrender to your body as it is and love yourself exactly where you are. Perhaps you have not managed to lose weight because you have not yet found and accepted the message of love you are meant to receive. See your weight as a cure for superficial thinking, a cure for shame, for a life lived out of harmony. Allow yourself to forget entirely the need to lose weight, the temptation to restrict or punish yourself. Take a vacation from your inner taskmaster. This is not about food or weight. It is about your being and the healing of your soul. That healing happens when you gently and sweetly decide to inhabit your body in a new way. Imagine you are an angel who has just settled into your physical form, with the mission of loving and sustaining it. Celebrate meals, accept their pleasure, relax, take time to eat, and be aware of what you are doing. Begin a new and intimate relationship with food. See it as your vital connection to the pleasure and sustenance of earthly existence.
Transcend vanity and tend to the soul. Slow down. Feel. Dream. Listen. Appreciate the darkness. Hear the voices of those who truly suffer. Tell the truth. See the hidden connections among us all. Ask yourself honestly what would need to be released from your being to truly lighten it. Would it be an old anger, resentments, guilt, judgments, unresolved issues, unspoken words? What do you need to let go of? Whom do you need to remove from your life? How do you need to nourish yourself so that your body feels beautiful and light rather than dense and deprived? Once you begin to feel lighter, you will train your metabolism to be lighter.
Honestly examine how long you have been at war with your body. Call for an unconditional cease-fire. Weight-loss strategies driven by fear are doomed to fail. Even if you managed to lose weight through fear and self-judgment, you would go on living with fear and self-judgment. Before you can lose something, you first have to find it. It has to be in your possession. If you want to lose weight, you have to acknowledge that it is yours. Accept it. Accept yourself. Find your center, your core, and your dignity in the midst of it all. Stop being a slave to weight-loss methods. It is time to act like a king or a queen.
Set a meal schedule at regular intervals. It is fine to eat three meals a day, or five to six small meals, or even two meals plus the occasional snack... it does not matter how you do it. Find your own rhythm. Just make sure you do not starve yourself through the first half of the day and then gorge at night. Choose a pattern and stick to it. Eat lunch. Give up low-quality "carbohydrate-only meals," especially at breakfast and lunch. Choose quality foods. Tune in to your gut wisdom. Notice which foods draw you that can give you a healthy metabolism and a nourishing experience. Move and exercise with joy. Eliminate any part of your exercise routine that may be a covert method of punishment. Forget any concept based on numbers, calories, points, portions, or grams. Be natural. Find your inner wisdom. Trust yourself. Be brave as you walk this new and uncertain path. Have faith that you will recover from any ill-advised decision you make. Surrender to the beauty of the experience of being nourished. Let your metabolism be infused with your inner radiance. Be happy now. Say goodbye to the false belief that losing weight guarantees happiness. Follow all of the above and your success is guaranteed.
_**Depression**_
Depression has been greatly undervalued. We do not truly honor or value it. We do not like depression, so we try to banish it. But we are not supposed to like depression. It is present for a reason that has nothing to do with its popularity. Depression is a visit from the Divine. It carries at least one message, perhaps many. Depression is a season of the soul. Is it really possible to eliminate a cold winter?
Before treating your depression with medication, pay attention to it. If it has taken hold of you, accept it. If you are thrashing about in its waters, dive into them. Call on the Divine. Pray. Rest. Feel. We have many reasons to be depressed. The state of the world and the lives we lead can rightly and justly feel depressing. Let us acknowledge that reality.
It takes courage to enter the heart's pain. It takes deep trust and faith to allow depression to unfold. It is not a demon out to invade you. It is an aspect of our existence, a part of our soul's life that needs a voice. It has come to rescue us. Are we able to listen? Can we be compassionate enough to experience the whole of our existence? Depression wants to help us recover the lost fragments of our soul. Can you enter a dark cave to search for a beloved child? Can you remember your lost dreams? Can you find your innocence? Can you get in touch with your rage? Can you rekindle your own light?
The veils of depression lift naturally once we have received its offerings. If you are truly ready to send this visitor on its way, the key is oxygen. Of all the research on this subject, the most effective strategy for relieving depression is based not on Prozac or any other medication but on vigorous exercise. Breathe, run, cycle, pant. Practice breathing with your meals. Nourish yourself with quality foods. Rediscover the pleasure of eating. Invite the sacred into your life. Bless your depression, and bless its end.
_**Fatigue**_
I have never met a low-energy person who was not in urgent need of this offering. Fatigue helps us slow down, feel, look inward, listen deeply, and make the course corrections we would not have made had this cure not paid us a visit. So, before banishing your feelings of low energy, receive them as guests of honor. Find their message. Observe how tiredness feels. Rest. Do not say you have no time to slow down and recharge. This is your life. Be honest, be brave, and make the time. Whatever the metabolic reason for your low energy, and there can be many, your fatigue is the soul's way of slowing you down on your journey.
What is the real work that needs to be done? Why have you been avoiding it? Who are the loved ones who need your attention? What are you valuing that, in truth, is not so valuable after all? Where does your skill at deceiving yourself shine brightest? Where do you try too hard? If you are feeling low on energy, you very likely have energy leaks. Where are those leaks? In what ways do you disempower yourself? Which parts of your personality are most full of fear?
Energy problems have to do with our life force and how we use it. They are tied to the way we express or suppress our soul's purpose. If you are doing what you want to do and what you came here to do, you will have energy to spare for it. If you are going against the tides of the soul, against your core values, you will secretly resist. Your body will work against itself. You will feel tired. And you will find yourself drawn to all the wrong foods.
It is fine to experiment with diets and supplements when trying to boost your energy. Just keep in mind that finding your deepest inspirations is what gives any food or substance the strength to fuel your inner fire. Living the truth of our existence is the key to relieving fatigue.
**_Digestive Health_**
If you suffer from heartburn, indigestion, or fatigue after meals, and if you are taking medication to relieve your symptoms, it is time to set yourself free. This divine symptom asks you to look closely at the way you digest and assimilate life. It alerts you that something is wrong with how you metabolize the world. Digestion is a beautiful metaphor for how we consume and process our personal affairs.
Are you moving too fast? Are you stuffing your body with life experiences so quickly that it has no time to discern the proper way to break down what it takes in? Are you feeding yourself the same old life strategies, the same patterns and habits, over and over, and still wondering why you feel so bad? Which beliefs do you insist on regurgitating that constantly make your gut rebel? In what ways do you remain firmly unconscious, refusing to take responsibility for the turn your life is taking?
Something is not working in the way you live in the world, and it is not a problem with food or with your digestion. Your soul is asking you to process your experiences more fully and deeply, and to see how that conscious digestion of life can transform you. Be real and be present with yourself. Trust that your soul can process the events of your life that you believe to be indigestible.
Roughly 80 million Americans report constant gastrointestinal problems. That is neither normal nor natural. And it is definitely not caused by a deficiency of digestive drugs.
Here is what the pharmaceutical companies do not want you to know.
Most of our digestive ills can be cured or significantly relieved if we eat in the optimal state for digestion, namely relaxation. This is an open secret, but very few pay attention to it. If you have ever gone to a doctor or a nutritionist with digestive problems and that expert did not ask whether you eat quickly, moderately, or slowly, did not inquire into the emotional and physiological state in which you eat (that is, whether your meals are occasions of anxiety or of relaxation), then that expert unknowingly overlooked the single most important determinant of digestive well-being known to science. There are 80 million people with chronic gastrointestinal complaints because there are nearly 80 million people moving at high speed. They pay no attention to what they eat or to how they eat it. They have left the soul far behind, lost in the mist.
Even if you have a true invader of the digestive system, such as parasites, candida, bacterial overgrowth, or an imbalance in stomach acidity, you will never achieve full relief or digestive healing, no matter how many pills or medications you are prescribed, until you have completed your metabolic picture by creating a healing digestive environment. That means letting the parasympathetic system predominate through relaxation at meals. It means heeding the soul lessons your digestion is teaching you: practicing the principles of breathing, relaxation, awareness, and pleasure. Slow down, even with a hectic schedule, and come back into your body. Nourish yourself.
Many people with chronic gastrointestinal complaints have a chronically activated sympathetic system. A few days may be enough to switch off this mechanism; however, it may also take several months of intensive practice to keep this metabolic bad habit from affecting you. Even if you must live forever with a sensitive digestive system, consider it your friend, a faithful barometer that will always let you know when you have overdone it or are not heeding your inner wisdom.
Pay special attention to tuning in to the intelligence of your enteric nervous system, your gut wisdom, and ask it for advice about what to eat. Often, gastrointestinal complaints are a direct result of food combinations incompatible with our physiology. Access this information by slowing down, tuning in to yourself, and asking: "What foods would give me the nourishment I need?" Then act on the answer you receive. Thank your digestive system for being a wonderful alert mechanism that sounds the alarm whenever your style of eating and living is out of sync with the rhythms of the heart and soul.
_Key Lessons_
• Our connection to the sacred can give us access to metabolic changes that are little understood but powerful.
• Love, truth, courage, compassion, forgiveness, faith, surrender, and other sacred qualities are great metabolic enhancers.
• When life asks us to put them into practice, these qualities act as healing catalysts in the body.
• We have the power to "bless" or to "curse"; that is, we can influence, even at a distance, the energy of our food, our own bodies, and other human beings.
• Ritual gives us access to the metachemistry of the sacred.
• When we treat a food, a drug, or the body as "profane," the chemistry of pain and confusion is set in motion. When we treat them as sacred objects, we allow a healing, transformative chemistry to reveal itself.
• The cure for our metabolic ills is often found at the very core of those same difficulties. Indeed, the disease is the cure.
POSTSCRIPT
Your Metabolic Journey
Metabolism is not something you can intervene in directly. Some of its aspects can be measured, but metabolism itself is immeasurable. Some of its parts can be modified, but the whole always remains unchanged. You can push it to do what you wish, but ultimately it answers to a higher source.
**Metabolism is not an object. It is a path.**
It is an ocean of chemical reactions in constant motion, its depths unfathomable and its ways inevitably unpredictable. Metabolism fluctuates with the rhythms of the world and moves to the music of the spheres. It is an epic poem and an endless symphony. It is at once current and currency. It is where the waters of divinity flow out onto the earthly plane. Metabolism can set events in motion, but it is not cause; it is effect. It is the effect of our life, our existence, our soul. It is the exact reflection of our sacred form, the material portal of cosmic forces, a momentary dwelling for our eternal spirit.
The way we navigate our metabolic journey is the way we navigate our path through life. In other words, the way you treat your body (the way you sustain it, feed it, exalt it, or dethrone it) is the way you treat yourself. If you consider your body special, you will consider your life special. If your metabolism is a chaotic, terrifying void, life will seem that way too. If you allow toxicity into your body, you are also allowing it into your personal world. If you choose not to acknowledge how your lifestyle shapes your health, then you do not acknowledge how your choices shape your reality. If you let the sacred into your personal world, you will find it inhabiting your metabolic world.
All of this is good news, for you can change your way of living at any moment and, in doing so, transform your biology. As with all the journeys described by the great poets, playwrights, and storytellers through the ages, your metabolic journey takes you through magical lands, strange terrain, dangerous places, dark forests, sacred peaks, marketplaces, carnivals, circus tents, healers' sanctuaries, gardens of delight, abysses. You will meet charlatans dressed as experts, experts disguised as clowns, shamans dressed as businessmen, buddhas in bikinis, and allies laughing at you in the most unthinkable disguises, hiding everywhere.
**Begin your metabolic journey now.**
Allow your body and your outlook to be renewed. Let the journey be what it is, because it will be anyway. When uncertainty reigns, let it be your guide. When your inner knowing surfaces, follow it with trust and respect. When your metabolism is wounded, let it cry out. Before testing your body's chemistry, taste your tears. Before taking a medication, meditate, reflect, and pray. Before restricting yourself with a diet, expand yourself with love. Before losing a pound, thank it for what it has taught you. Before exercising, be still. Before trying to rid yourself of a bad habit, thank it for its lessons. Before harming yourself in thought, word, or deed, pause. Before letting anyone have control over your body, wake up. Before seeking advice, remember your own wisdom. Before speaking, make sure the worth of your words justifies breaking the silence. Before becoming intimate with another person, touch the sacred. Before falling ill, hold back. Before surrendering to fear, seek the light. Before believing the world has no Creator, give birth. Before remembering your divine purpose, celebrate its imminent arrival. Before eating, give thanks. Before sitting for long hours, dance. Before getting up, bless everything. Before sleeping, bless everything too. Before living one more day, agree to exist fully. And before taking another breath, choose eternity, love, the now.
| Si desea más información sobre los trabajos, actividades de enseñanza y otras ofertas de Marc David, no deje de visitarlo en su sitio web www.marcdavid.com.
---|---
Notes
Week 1: The Metabolic Power of Relaxation
1. The following sources were consulted for the chart on the effects of the stress response:
A. Hanck, "Stress and Vitamin Deficiency," _International Journal for Vitamin and Nutrition Research_ 26 (1984).
S. Porta, "Interactions Between Magnesium and Stress Hormones in Stress," _Mengen und Spurenelemente_ (December 1991).
R. A. Anderson, "Stress Effects on Chromium Nutrition," Proceedings of Alltech's Tenth Annual Symposium (Nottingham University Press, 1994).
A. Singh, "Biochemical Indices of Selected Trace Minerals: Effect of Stress," _American Journal of Clinical Nutrition_ 67, no. 1 (1991). This study documents the decline in zinc, iron, and selenium levels in men under stress.
N. Mei, "Role of the Autonomic Nervous System in the Regulation of Transit, Absorption and Storage of Nutrients," _Reproduction, Nutrition, Development_ 26, no. 5B (1986) (France).
G. A. Bray, "The Nutrient Balance Hypothesis: Peptides, Sympathetic Activity, and Food Intake," _Annals of The New York Academy of Sciences_ 676 (March 15, 1993).
W. J. Kort, "The Effect of Chronic Stress on the Immune Response," _Advances in Neuroimmunology_ 4, no. 1 (1994).
J. D. Soderholm, "Stress and the Gastrointestinal Tract," _American Journal of Physiology_ 280, no. 1 (January 2001).
G. Aguilera, "The Renin Angiotensin System and the Stress Response," _Annals of The New York Academy of Sciences_ 771 (December 29, 1995).
D. Pignatelli, "Direct Effect of Stress on Adrenocortical Function," _Hormone Metabolism Research_ 30, no. 6/7 (June/July 1998).
J. L. Cuevas, "Spontaneous Swallowing Rate and Emotional State," _Digestive Diseases and Sciences_ 40, no. 2 (February 1995).
J. E. Dimsdale, "Variability of Plasma Lipids in Response to Emotional Arousal," _Psychosomatic Medicine_ 44, no. 5 (1982).
S. Kaplan, "Effects of Cortisol on Amino Acid in Skeletal Muscle and Plasma," _Endocrinology_ 72 (February 1963).
P. Havel, "The Contribution of the Autonomic Nervous System to Changes of Glucagons and Insulin Secretion During Hypoglycemic Stress," _Endocrine Reviews_ 10, no. 3 (August 1989).
Hans Selye, _The Stress of Life_ (New York: Van Nostrand, 1984). This is the classic reference work on the stress response.
2. P. Bjorntorp, "Psychosocial Factors and Fat Distribution," _Obesity in Europe '91_ (Proceedings of the 3rd European Congress on Obesity, 1992). An excellent research paper on obesity and stress.
E. T. Poehlman, "Sympathetic Nervous System Activity, Body Fatness, and Body Fat Distribution in Younger and Older Males," _Journal of Applied Physiology_ 78, no. 3 (March 1995).
S. Knox, "Biobehavioral Mechanisms in Lipid Metabolism and Atherosclerosis: An Overview," _Metabolism: Clinical and Experimental_ 42, no. 9 (suppl. 1) (September 1993).
B. G. Lipinski, "Life Change Events as Correlates of Weight Gain," _Recent Advances in Obesity Research_ (Proceedings of the First International Congress on Obesity, London, 1975).
J. Istvan, "Body Weight and Psychological Distress in NHANES I," _International Journal of Obesity_ 21, no. 5 (October 1992).
3. T. E. Burkovskaya, "Kinetics of Elemental Content Changes of Bone Tissue of Mice During Evolution of Hypokinetic Stress," _Biological Trace Element Research_ 43–45 (Fall 1994) (Moscow).
4. D. Michelson, "Bone Mineral Density in Women with Depression," _The New England Journal of Medicine_ 335, no. 16 (October 17, 1996).
5. Melvyn Werbach, _Nutritional Influences on Illness_ (Tarzana, Calif.: Third Line Press, 1993). An exhaustive list of materials on this topic can be found in the section on osteoporosis.
**Other materials consulted for this chapter include:**
R. Forster and R. Estabrook, "Is Oxygen an Essential Nutrient?" _Annual Review of Nutrition_ 13 (1993).
C. R. Honig, "Oxygen Transport and Its Interaction with Metabolism: A Systems View of Aerobic Capacity," _Medical Science and Sports Exercise_ 24, no. 1 (January 1992).
D. L. Gilbert, _Oxygen and Living Processes: An Interdisciplinary Approach_ (New York: Springer-Verlag, 1981).
H. Weiner, _Perturbing the Organism: The Biology of Stressful Experience_ (University of Chicago Press, 1992).
Robert Sapolsky, _Why Zebras Don't Get Ulcers_ (New York: W. H. Freeman, 1994).
Week 2: The Metabolic Power of Quality
1. Weston A. Price, _Nutrition and Physical Degeneration_ (New Canaan, Conn.: Keats Publishing, 2003). This is the classic work on the great differences in health between people who eat traditional diets and those who live on commercial or industrial foods.
2. Jeff Bland gave one of the most illuminating presentations on the energetic nature of health, the body, and medicine at the 7th International Symposium on Functional Medicine: Metabolic Energy, Messenger Molecules, and Chronic Illness (May 2000). The talk is titled "Disorders of Cellular Energy Metabolism." An audio tape can be obtained by calling the Institute for Functional Medicine: (800) 228-0622.
3. For compelling evidence of the health risks posed by the complex proteins in meat, see _The Food Revolution_, by John Robbins (Boston: Conari Press, 2000). For compelling evidence of the opposing view, see H. Spencer, _American Journal of Clinical Nutrition_ 37, no. 6 (June 1983), and S. Fallon, _Price-Pottenger Nutrition Foundation Health Journal_, 1996.
4. A. Lopez, "Some Interesting Relationships Between Dietary Carbohydrates and Serum Cholesterol," _American Journal of Clinical Nutrition_ 18, no. 2 (February 1966).
**Other materials consulted for this chapter include:**
A. K. Kant, "Consumption of Energy-Dense, Nutrient-Poor Foods by the U.S. Population: Effect on Nutrient Profiles," _Journal of the American College of Nutrition_ 72, no. 4 (October 2000).
E. Gunderson, "FDA Total Diet Study, July 1986–April 1991: Dietary Intake of Pesticides, Selected Elements, and Other Chemicals," _Journal of AOAL International_ 78, no. 6 (November–December 1995).
"Food Safety and Quality as Affected by Organic Farming," from the _UN Report on Food and Agriculture_, July 2000.
"Organic Foods vs. Supermarket Foods: Elemental Levels," _Journal of Applied Nutrition_ 45 (1993).
"Exposure to Pesticides Lowered When Young Children Go Organic, Researchers Determine," _New York Times_, March 25, 2003.
Paula Baillie-Hamilton, _The Body Restoration Plan: Eliminate Chemical Calories and Repair Your Body's Natural Slimming System_ (New York: Avery Publishing, 2003).
M. Alice Ottoboni, _The Dose Makes the Poison_ (New York: Van Nostrand, 1991).
Russell Blaylock, _Excitotoxins: The Taste that Kills_ (New Mexico: Health Press, 1996).
Sandra Steingraber, _Living Downstream_ (New York: Vintage Books, 1998).
Richard Gerber, _Vibrational Medicine_ (Rochester, Vt.: Bear & Company, 1988).
Week 3: The Metabolic Power of Awareness
1. S. A. Giduck, "Cephalic Reflexes: Their Role in Digestion and Possible Roles in Absorption and Metabolism," _Journal of Nutrition_ 117, no. 7 (July 1987).
2. G. R. Barclay, "Effect of Psychosocial Stress on Salt and Water Transport in the Human Jejunum," _Gastroenterology_ 93, no. 1 (July 1987).
3. B. Baldaro, "Effects of an Emotional Negative Stimulus on Cardiac, Electrogastrographic, and Respiratory Responses," _Perceptual and Motor Skills_ 71, no. 2 (October 1990).
4. T. L. Powley, "Diet and Cephalic Phase Insulin Responses," _American Journal of Clinical Nutrition_ 14, no. 4 (September 1985).
5. J. Furness and J. Bornstein, "The Enteric Nervous System and Its Extrinsic Connections," in _Textbook of Gastroenterology_ (Philadelphia: Lippincott, 1995).
6. Michael Gershon, _The Second Brain_ (New York: Perennial, 1999).
7. T. E. Adrian and S. R. Bloom, "The Effect of Food on Gut Hormones," _Advances in Food and Nutrition Research_ 37 (1993).
8. Sandra Blakeslee, "Complex and Hidden Brain in the Gut Makes Cramps, Butterflies, and Valium," _New York Times_, January 23, 1996.
**Other materials consulted for this chapter include:**
S. McCrae, "Changes in pattern of fasting jejunal motor activity during mental stress," _Journal of Physiology_ 308 (1980).
M. Costa, "The Enteric Nervous System," _The American Journal of Gastroenterology_ 89, no. 8 (1994).
S. Wolf, "The Stomach's Link to the Brain," _Federation Proceedings_ 44, no. 14 (1985).
R. K. Goyal, "The Enteric Nervous System," _The New England Journal of Medicine_ 334, no. 17 (April 25, 1996).
L. Johnson, _Gastrointestinal Physiology_ (Philadelphia: Mosby, 1991).
Raphael Kellman, _Gut Reactions_ (New York: Broadway Books, 2002).
Week 4: The Metabolic Power of Rhythm
1. C. A. Czeisler, "Stability, Precision, and Near-24-Hour Period of the Human Pacemaker," _Science_ 284 (June 25, 1999).
2. David Lloyd, _Ultradian Rhythms in Life Processes_ (New York: Springer-Verlag, 1992).
_Providers Manual—Clinical Training in Mind/Body Medicine_, Harvard Mind/Body Medical Institute, 1995.
3. T. W. Uhde, "Caffeine: Relationship to Human Anxiety, Plasma MHPG, and Cortisol," _Psychopharmacology Bulletin_ 20, no. 3 (1984).
4. T. S. Wiley, _Lights Out_ (New York: Pocket Books, 2000).
**Other materials consulted for this chapter include:**
E. M. Berry, "Foods and Their Effects on Sleep Patterns," _International Clinical Nutrition Review_ 7, no. 2 (1987).
A. Concu, "Indirect Evidence in Humans of Nervous Parasympathetic Predominance in Integrated Responses to a Balanced Meal," _Medical Science Research_ 20, no. 19 (1992) (Italy).
E. L. Gibson, "Increased Salivary Cortisol Reliably Induced by a Protein-Rich Midday Meal," _Psychosomatic Medicine_ 61, no. 2 (1999).
F. Brouns, "Is the Gut an Athletic Organ? Digestion, Absorption and Exercise," _Sports Medicine_ 15, no. 4 (1993).
H. M. Lloyd, "Mood and Cognitive Performance Effect of Isocaloric Lunches Differing in Fat and Carbohydrate Content," _Physiology and Behavior_ 56, no. 1 (July 1994).
B. C. Johnson, "Nutrient Intake as a Time Signal for Circadian Rhythm," _American Institute of Nutrition_ 122, no. 9 (April 28, 1992).
P. J. Rogers, "Nutrition and Mental Performance," _Proceedings of the Nutrition Society_ 53, no. 2 (1994).
Kenneth Rose, _The Body in Time_ (New York: Wiley and Sons, 1988).
A. Reinberg, _Introduction to Chronobiology_ (New York: Springer-Verlag, 1983).
A. T. Winfree, _The Timing of Biological Clocks_ (New York: Scientific American Books, 1987).
LifeWaves International: www.lifewaves.com. The intriguing work of Dr. Irv Dardik can be found on this website.
Week 5: The Metabolic Power of Pleasure
1. This report was presented by Margo Denke, of the Center for Human Nutrition, University of Texas Health Science Center, at the annual meeting of the American Heart Association, 1987.
2. "Food that Tastes Good Is More Nutritious," published in _Tufts University Health and Nutrition Letter_, October 2000.
3. Guy Murchie, _The Seven Mysteries of Life_ (Boston: Houghton Mifflin, 1978).
4. T. D. Geracioti, "Meal-Related Cholecystokinin Secretion in Eating and Affective Disorders," _Pharmacology Bulletin_ 25, no. 3 (1989), and J. Hirsch, "A Clinical Perspective on Peptides and Food Intake," _American Journal of Clinical Nutrition_ 55, no. 1 (1992).
5. M. M. Hetherton, "Pleasure and Excess: Liking For and Over-consumption of Chocolate," _Physiology and Behavior_ 57, no. 1 (1995).
**Other materials consulted for this chapter include:**
A. Levine, "Opioids—Are They Regulators of Feeding?" _Annals of the New York Academy of Sciences_ 575 (1989).
J. C. Melchior, "Palatability of a Meal Influences Release of Beta-Endorphin and of Potential Regulators of Food Intake in Healthy Human Subjects," _Appetite_ 22, no. 3 (June 1994).
G. A. Bray, "Peptides Affect the Intake of Specific Nutrients and the Sympathetic Nervous System," _American Journal of Clinical Nutrition_ 55, no. 1 (January 1992).
J. E. Blundell, "Regulation of Nutrient Supply: the Brain and Appetite Control," _Proceedings of the Nutrition Society_ 53, no. 2 (July 1994).
J. E. Blundell, "Serotonin and the Biology of Feeding," _American Journal of Clinical Nutrition_ 55, no. 1 (January 1992).
G. P. Smith, "The Satiety Effect of Cholecystokinin: Recent Program and Current Problems," _Annals of the New York Academy of Sciences_ 448 (1985).
"Discovering Something New in Food: Pleasure," _New York Times_, December 30, 1992.
G. J. Dockray, _Gut Peptides: Biochemistry and Physiology_ (Edinburgh: Churchill Livingstone, 1994).
R. Ornstein and D. Sobel, _Healthy Pleasures_ (New York: Da Capo Press/Perseus Publishing, 1990).
Week 6: The Metabolic Power of Thought
1. Ernest Rossi, _The Psychobiology of Mind-Body Healing_ (New York: Norton, 1986). This book offers excellent scientific perspectives and illuminating diagrams on the mind-body connection.
2. J. W. Fielding, "Adjunct Chemotherapy in Operable Gastric Cancer," _World Journal of Surgery_ 7, no. 3 (1983).
3. "Placebo—The Hidden Asset in Healing," in _Investigations_, Institute of Noetic Sciences Research Bulletin 2, no. 14 (1985).
4. D. S. Moore, _Statistics: Concepts and Controversies_ (New York: Freeman, 1995).
5. S. B. Penick, "The effect of expectation on response to phenmetrazine," _Psychosomatic Medicine_ 26, no. 4 (1964).
6. Kenneth Cooper, _The Antioxidant Revolution_ (Nashville: Thomas Nelson, 1994).
**Other materials consulted for this chapter include:**
R. L. Shames, "Nutritional Management of Stress-Induced Dysfunction," _Applied Nutritional Science Reports 2002_, Advanced Nutrition Publications Inc., available through the Institute for Functional Medicine.
R. Ornstein and D. Sobel, _The Healing Brain_ (New York: Simon and Schuster, 1988).
Norman Cousins, _The Healing Heart_ (New York: Norton, 1983).
Henry Dreher, _The Immune Power Personality_ (New York: Penguin, 1996).
Howard Brody, _The Placebo Response_ (New York: Harper Collins, 2000).
Larry Dossey, _Recovering the Soul_ (New York: Bantam, 1989).
Blair Justice, _Who Gets Sick_ (Los Angeles: Tarcher, 1987).
Week 7: The Metabolic Power of Story
1. "Multiple Personality—Mirrors of a New Model of Mind?" _Investigations_, Institute of Noetic Sciences Research Bulletin 1, no. 3/4.
2. B. G. Braun, "Psychophysiologic Phenomena in Multiple Personality," _American Journal of Clinical Hypnosis_ 26, no. 2 (1983). This report also lists other examples of patients with multiple personality disorder who exhibit allergies in some personalities but not in others.
**Other materials consulted for this chapter include:**
Brendan O'Regan and Rick Carlson, "Defining Health: The State of the Art," _Holistic Health Review_ 3, no. 2 (1979).
A. Ziegler, _Archetypal Medicine_ (New York: Continuum International Publishing, 2000).
James Hillman, _Healing Fiction_ (Barrytown, New York: Station Hill Press, 1983).
Larry Dossey, _Meaning and Medicine_ (New York: Bantam, 1991).
Lynn Payer, _Medicine and Culture_ (New York: Henry Holt, 1988).
Week 8: The Metabolic Power of the Sacred
**Materials consulted for this chapter include:**
"Asking If Obesity Is a Disease or Just a Symptom," _New York Times_, April 16, 2002.
"God and the Brain: How We're Wired for Spirituality," _Newsweek_, May 7, 2001.
Joseph Chilton Pearce, _The Biology of Transcendence_ (Rochester, Vt.: Park Street Press, 2002).
Michael Murphy, _The Future of the Body_ (Los Angeles: Tarcher, 1992).
Larry Dossey, _Healing Beyond the Body_ (Boston: Shambhala, 2001).
Larry Dossey, _Healing Words_ (New York: HarperCollins, 1993).
Dean Ornish, _Love and Survival_ (New York: HarperCollins, 1997).
Sandra Ingerman, _Medicine for the Earth_ (New York: Three Rivers Press, 2000).
Eugene d'Aquili, _The Mystical Mind: Probing the Biology of Religious Experience_ (Minneapolis: Fortress Press, 1999).
Robin Robertson, _The Sacred Kitchen_ (Novato, Calif.: New World Library, 1999).
James Hillman, _The Soul's Code_ (New York: Random House, 1996).
Bibliography
Barks, Coleman. _The Illuminated Rumi_. New York: Broadway Books, 1997.
Buck, William. _Mahabharata_. Berkeley: University of California Press, 1973.
Brody, Howard. _The Placebo Response_. New York: HarperCollins, 2000.
Calasso, Roberto. _Ka: Stories of the Mind and Gods of India_. New York: Vintage, 1998.
Dossey, Larry. _Healing Beyond the Body_. Boston: Shambhala Publications, 2001.
———. _Meaning and Medicine_. New York: Bantam Books, 1991.
———. _Recovering the Soul_. New York: Bantam Books, 1989.
———. _Reinventing Medicine_. San Francisco: HarperCollins, 1999.
Fallon, Sally. _Nourishing Traditions_. Washington, D.C.: New Trends Publishing, 1999.
Hillman, James. _Healing Fiction_. Barrytown, N.Y.: Station Hill Press, 1983.
———. _Re-Visioning Psychology_. New York: Harper & Row, 1975.
———. _The Soul's Code_. New York: Random House, 1996.
Holmes, Ernest. _The Science of Mind_. New York: Tarcher, 1998.
Hyman, Mark and Liponis, Mark. _Ultra-Prevention_. New York: Scribner, 2003.
Lao Tzu. _Tao Te Ching_. New York: Concord Grove Press, 1983.
Levine, Peter. _Waking the Tiger_. Berkeley, Calif.: North Atlantic Books, 1997.
Murchie, Guy. _The Seven Mysteries of Life_. Boston: Houghton Mifflin, 1978.
Murphy, Michael. _The Future of the Body_. Los Angeles: Tarcher, 1992.
Ottoboni, Alice. _The Dose Makes the Poison_. New York: Van Nostrand Reinhold, 1991.
Payer, Lynn. _Medicine and Culture_. New York: Henry Holt, 1988.
Pearsall, Paul. _The Heart's Code_. New York: Broadway Books, 1998.
Pearce, Joseph Chilton. _The Biology of Transcendence_. Rochester, Vt.: Park Street Press, 2002.
Pollan, Michael. _The Botany of Desire_. New York: Random House, 2001.
Ravnskov, Uffe. _The Cholesterol Myths_. Washington, D.C.: New Trends Publishing, 2000.
Robbins, John. _The Food Revolution_. Boston: Conari Press, 2001.
Rosenthal, Joshua. _The Energy Balance Diet_. Indianapolis: Alpha Books, 2003.
Rossi, Ernest. _The Psychobiology of Mind-Body Healing_. New York: Norton, 1986.
Sapolsky, Robert. _Why Zebras Don't Get Ulcers_. New York: W. H. Freeman, 1994.
Tolle, Eckhart. _The Power of Now_. Novato, Calif.: New World Library, 1999.
Schmidt, Gerhard. _The Dynamics of Nutrition_. Rhode Island: Bio-Dynamic Literature, 1987.
———. _The Essentials of Nutrition_. Rhode Island: Bio-Dynamic Literature, 1987.
Shealy, Norman and Myss, Caroline. _The Creation of Health_. Walpole, N.H.: Stillpoint Publications, 1993.
Werbach, Melvyn. _Nutritional Influences on Illness_. Tarzana, Calif.: Third Line Press, 1993.
Wiley, T. S. _Lights Out_. New York: Pocket Books, 2000.
Inner Traditions en Español
One Park Street
Rochester, Vermont 05767 USA
www.InnerTraditions.com
Inner Traditions en Español is a division of Inner Traditions International
Copyright © 2005 by Marc David
Translation © 2008 by Inner Traditions International
Originally published in English under the title _The Slow Down Diet: Eating for Pleasure, Energy, and Weight Loss_ by Healing Arts Press, a division of Inner Traditions International
All rights reserved. No part of this book may be reproduced or used in any manner or by any means, electronic or mechanical, including photocopying and recording, or by any information storage and retrieval system, without written permission from the publisher.
_**Note to the reader:** This book is intended as an informational guide. The remedies, approaches, and techniques described herein are meant to supplement, not replace, professional medical care or treatment. They should not be used to treat a serious ailment without prior consultation with a qualified health care professional_.
ISBN 978-1-59477-829-2
Q: Scrapy DeltaFetch incremental crawling

I am working with Scrapy to scrape a website, and I want to extract only those items that were not scraped in the previous run.
I am trying it on the "https://www.ndtv.com/top-stories" website, to extract only the first headline if it has been updated.
Below is my code:
import scrapy
from selenium import webdriver
from w3lib.url import url_query_parameter

class QuotesSpider(scrapy.Spider):
    name = "test"
    start_urls = [
        'https://www.ndtv.com/top-stories',
    ]

    def parse(self, response):
        print('testing')
        print(response.url)
        yield {
            'heading': response.css('div.nstory_header a::text').extract_first(),
        }
DOWNLOADER_MIDDLEWARES = {
    'scrapy_crawl_once.CrawlOnceMiddleware': 100,
}

SPIDER_MIDDLEWARES = {
    #'inc_crawling.middlewares.IncCrawlingSpiderMiddleware': 543,
    'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': True,
    'scrapy_deltafetch.DeltaFetch': 100,
    'scrapy_crawl_once.CrawlOnceMiddleware': 100,
    'scrapylib.deltafetch.DeltaFetch': 100,
    'inc_crawling.middlewares.deltafetch.DeltaFetch': 100,
}

COOKIES_ENABLED = True
COOKIES_DEBUG = True
DELTAFETCH_ENABLED = True
DELTAFETCH_DIR = '/home/administrator/apps/inc_crawling'
DOTSCRAPY_ENABLED = True
I have added the settings above to my settings.py file.
I am running the spider with the "scrapy crawl test -o test.json" command, and after each run both the .db file and test.json are updated.
My expectation is that the .db file should be updated only when the first headline changes.
Please suggest a better approach, if there is one, for extracting only an updated headline.
A: A good way to implement this would be to override the DUPEFILTER_CLASS setting with a dupefilter that checks your database before making the actual request.
Scrapy uses a dupefilter class to avoid fetching the same request twice, but by default it only deduplicates within a single run of a spider; the set of seen requests is not persisted between runs unless you arrange for it.
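A minimal sketch of the persistence layer such a dupefilter needs (the table name and URL-only fingerprint are illustrative, not DeltaFetch's actual on-disk format): a SQLite store of request fingerprints that survives between runs. In a real project you would wrap this in a subclass of scrapy.dupefilters.RFPDupeFilter and point DUPEFILTER_CLASS at it, calling request_seen() from the filter's own request_seen method.

```python
import hashlib
import sqlite3


class SeenRequestStore:
    """Persist request fingerprints across crawler runs in SQLite."""

    def __init__(self, path=":memory:"):
        # Use a file path (e.g. "seen.db") so the store outlives the process.
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS seen (fingerprint TEXT PRIMARY KEY)"
        )

    def fingerprint(self, url):
        # Scrapy's real fingerprint also hashes method and body; the URL
        # alone is enough for this sketch.
        return hashlib.sha1(url.encode("utf-8")).hexdigest()

    def request_seen(self, url):
        """Return True if the URL was seen before; otherwise record it and return False."""
        fp = self.fingerprint(url)
        cur = self.conn.execute("SELECT 1 FROM seen WHERE fingerprint = ?", (fp,))
        if cur.fetchone() is not None:
            return True
        self.conn.execute("INSERT INTO seen (fingerprint) VALUES (?)", (fp,))
        self.conn.commit()
        return False


store = SeenRequestStore()
print(store.request_seen("https://www.ndtv.com/top-stories"))  # False: first visit
print(store.request_seen("https://www.ndtv.com/top-stories"))  # True: would be skipped
```

With a file-backed path instead of ":memory:", the second spider run sees the fingerprints recorded by the first and can skip unchanged requests.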
Hölder estimates on domains of complex dimension two and on three dimensional CR manifolds
Charles L. Fefferman, Joseph J. Kohn
Advances in Mathematics
Fefferman, C. L., & Kohn, J. J. (1988). Hölder estimates on domains of complex dimension two and on three dimensional CR manifolds. Advances in Mathematics, 69(2), 223-303. https://doi.org/10.1016/0001-8708(88)90002-3
// libcurl write callback: append the received chunk to the std::string passed via userp.
static size_t WriteCallback(void *contents, size_t size, size_t nmemb, void *userp)
{
    ((std::string*)userp)->append((char*)contents, size * nmemb);
    return size * nmemb;
}
std::string ReplaceAll(std::string str, const std::string& from, const std::string& to)
{
size_t start_pos = 0;
while((start_pos = str.find(from, start_pos)) != std::string::npos)
{
str.replace(start_pos, from.length(), to);
start_pos += to.length();
}
return str;
}
// Issue an HTTP request against the Spotify Web API and parse the JSON response.
nlohmann::json SpotifyCurlInternal(std::string request, std::string endpoint, std::map<std::string, std::string> options, std::string authToken, std::string body)
{
    CURL *curl = curl_easy_init();
    if(!curl)
    {
        std::cerr << "Could not initiate cURL" << std::endl;
        return nlohmann::json();
    }

    // Build the query string from the options map.
    std::string url = "https://api.spotify.com" + endpoint;
    if(!options.empty())
    {
        url += "?";
        for(auto option : options)
        {
            url += option.first + "=" + option.second + '&';
        }
        url.pop_back(); // drop the trailing '&'
    }
    url = ReplaceAll(url, " ", "%20");

    std::string readBuffer;
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, false); // Can't authenticate the certificate, so disable verification.
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, request.c_str());

    struct curl_slist *headers = NULL;
    if(!authToken.empty())
    {
        std::string header = "Authorization: Bearer " + authToken;
        headers = curl_slist_append(headers, header.c_str());
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    }
    if(!body.empty())
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());

    CURLcode rc = curl_easy_perform(curl);
    long statusCode = 0;
    if (rc == CURLE_OK)
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &statusCode);

    // Release libcurl resources before any exception can be thrown.
    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);

    if (rc != CURLE_OK)
        throw CurlException(rc);
    if(statusCode < 200 || statusCode > 204)
        throw SpotifyException(Error(nlohmann::json::parse(readBuffer)["error"]));

    if(readBuffer.empty())
        return nlohmann::json();
    return nlohmann::json::parse(readBuffer);
}
nlohmann::json SpotifyGET(std::string endpoint, std::map<std::string, std::string> options, std::string authToken, std::string body = "")
{
return SpotifyCurlInternal("GET", endpoint, options, authToken, body);
}
nlohmann::json SpotifyPUT(std::string endpoint, std::map<std::string, std::string> options, std::string authToken, std::string body = "")
{
return SpotifyCurlInternal("PUT", endpoint, options, authToken, body);
}
nlohmann::json SpotifyDELETE(std::string endpoint, std::map<std::string, std::string> options, std::string authToken, std::string body = "")
{
return SpotifyCurlInternal("DELETE", endpoint, options, authToken, body);
}
nlohmann::json SpotifyPOST(std::string endpoint, std::map<std::string, std::string> options, std::string authToken, std::string body = "")
{
return SpotifyCurlInternal("POST", endpoint, options, authToken, body);
}
std::string VectorJoin(std::vector<std::string> vector)
{
std::stringstream ss;
for(size_t i = 0; i < vector.size(); ++i)
{
if(i != 0)
ss << ",";
ss << vector[i];
}
return ss.str();
}
#endif
\section{Evolution equations in de~Sitter/FLRW spacetime}
For the proposed FLRW spacetime the relevant equations can be obtained in the same way as for the LTB part
by setting $\partial_r a=0$ and $E=0$.
\begin{equation}
ar = R, \quad {\dot t}^2 = 1 + a^2{\dot r}^2~.
\end{equation}
The potential $V$ reduces to
\begin{equation}
2V = -\left[\frac{\Lambda_-}{3}
+\left(\frac{\rho}{3\sigma} +\frac{\Lambda_+ -\Lambda_-}{24\pi\sigma} +2\pi\sigma \right)^2\right]R^2~,
\end{equation}
and the equations of motion are given by
\begin{eqnarray}
\partial_t \bar r =
\frac{\bar r\partial_t a -\sqrt{\left(1+2V\right)\left(\left(\bar r\partial_t a\right)^2+2V\right)}}{2aV}~, \\
\partial_t \sigma =
\left(\rho+p\right)\frac{a\partial_t \bar r}{\sqrt{1-\left(a\partial_t \bar r\right)^2}}~.
\end{eqnarray}
These equations together with the background dynamics given by (\ref{eqmoFLRW}) and (\ref{conFLRW}) determine the bubble motion in
the de~Sitter/FLRW background.
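As a point of orientation (this is a direct identity following from $R = a\bar r$, not an additional assumption of the text), the growth of the physical shell radius in cosmic time splits into a Hubble-flow term and a peculiar-motion term:
\begin{equation}
\partial_t R = \bar r\,\partial_t a + a\,\partial_t \bar r = H R + a\,\partial_t \bar r~, \qquad H \equiv \frac{\partial_t a}{a}~,
\end{equation}
so the evolution equations above determine the peculiar part $a\,\partial_t \bar r$ of the shell motion on top of the background expansion.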
\section{Bubble nucleation in time-dependent settings}
Quantum nucleation of bubbles is a very intricate problem, especially when
effects of gravity have to be taken into account. Much of the literature
on this topic focusses on the case of transitions between two vacua having
different values of the cosmological constant. In this special case, a
semiclassical calculation of the nucleation rate based on instanton
methods has been presented by Coleman and De Luccia~\cite{CDL80}. However,
this calculation, as well as many alternative approaches developed by others (cf.~e.g.~\cite{FMP90}),
heavily relies on a high degree of symmetry of spacetime, which is
initially assumed to be in a pure vacuum state with the geometry of
de Sitter spacetime. We therefore feel that the applicability of these results
to cases where spacetime is not (or only approximately) in a vacuum state
is in need of some clarification (see also the discussion in \cite{Widrow91}).
A calculation of nucleation rates in arbitrary non-vacuum states, including
all possible effects, is clearly beyond our capabilities. We will instead
only take a small step away from the assumption of de Sitter symmetry and
consider Friedmann universes in general, of which de Sitter spacetime is only a
special case. We therefore retain many useful simplifications, in particular
the assumption of homogeneity of space (but not of spacetime!), and the
so-called \lq thin-wall approximation\rq. The cosmological expansion
of the universe, however, follows a non-trivial dynamical law (the Friedmann
equation), and we are interested in the effect of the \textit{time-dependent}
Hubble rate on the nucleation rate of bubbles, which will in turn itself become
\textit{time-dependent}.
To keep matters as simple as possible, we will consider the nucleation of a
spherical bubble of new phase, and we will assume that its shell -- the layer
which separates the new phase from the old -- is of negligible thickness
compared to the size of the bubble. This amounts to the
\lq thin-wall approximation\rq. The energy budget of the bubble consists of
latent heat (the difference between the energy densities of the two phases)
and surface tension. Throughout this section we will neglect gravitational backreaction of the bubble
onto the spacetime geometry since we are primarily interested in the effects
of cosmological expansion of the background, which is taken into account, and
the treatment of the full gravitational problem would introduce too many
additional complications.
\subsection{Lagrangean formulation}
Associated to the background spacetime is the
Friedmann-Lema\^itre-Robertson-Walker (FLRW) line element
\begin{equation}
\rmd s^2 = a^2\!\left(\eta\right) \left[-\rmd \eta^2 + \rmd r^2 + r^2 \rmd \Omega^2\right]~,
\end{equation}
where we have assumed flat spatial sections. The conformal
time $\eta$ is related to cosmological (proper) time $t$ by $\rmd t = a \rmd \eta$.
Spherical symmetry reduces the bubble dynamics to a $1+1$ dimensional problem.
Denoting the coordinate radius of the shell as $\bar{r}$, the shell trajectory
$\bar{r}\left(\eta\right)$ follows from the action
\begin{equation}
\label{FRWaction}
\mathcal{S} = \int\!\rmd\eta \left[\frac{4 \pi}{3} \epsilon~a^4\!\left(\eta\right) \bar{r}^3\!\left(\eta\right) - 4 \pi \sigma~a^3\!\left(\eta\right) \bar{r}^2\!\left(\eta\right) \sqrt{1 - \left(\partial_\eta \bar{r}\left(\eta\right)\right)^2}\right]~,
\end{equation}
where $\epsilon$ and $\sigma$ denote, respectively, the difference between the
energy densities of the two phases (latent heat) and the surface energy density
(surface tension) of the shell. As we have indicated above, this effective action
does not take into account gravitational self-interaction of the bubble. In
order to include some of these effects, one could add surface-surface,
volume-volume, as well as surface-volume terms for gravitational energy.
The evolution of the scale factor $a$ introduces an explicit time-dependence,
giving rise to nucleation rates which will in general be time-dependent
as well. A formalism for calculating semiclassical tunneling rates in
time-dependent settings has been presented by Keski-Vakkuri and Kraus \cite{KVK96}.
Its application to the present scenario will be worked out in detail in the
following section.
\subsection{The complex time path formalism}
Our starting point is the classical equation of motion of the bubble, which
can be found from eq.~(\ref{FRWaction}) as
\begin{equation}
\label{r-eom}
4 \pi \epsilon a^4 \bar{r}^2 - 8 \pi \sigma a^3 \bar{r} \sqrt{1 - \left(\partial_\eta \bar{r}\right)^2} = \frac{\rmd}{\rmd \eta} \left[4 \pi \sigma \frac{a^3 \bar{r}^2 \partial_\eta \bar{r}}{\sqrt{1 - \left(\partial_\eta \bar{r}\right)^2}}\right]~,
\end{equation}
where we have dropped the explicit arguments in favor of a simplified notation.
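As a cross-check, the Euler--Lagrange equation following from the action~(\ref{FRWaction}) can be compared symbolically against eq.~(\ref{r-eom}). The following Python/SymPy snippet is a minimal sketch of such a check; the symbol names are our own choices:

```python
import sympy as sp

eta = sp.Symbol('eta')
eps, sig = sp.symbols('epsilon sigma', positive=True)
a, r = sp.Function('a'), sp.Function('rbar')
rp = sp.diff(r(eta), eta)

# the Lagrangean of the action (FRWaction)
L = (sp.Rational(4, 3)*sp.pi*eps*a(eta)**4*r(eta)**3
     - 4*sp.pi*sig*a(eta)**3*r(eta)**2*sp.sqrt(1 - rp**2))

# Euler-Lagrange expression dL/dr - d/deta (dL/dr')
el = sp.diff(L, r(eta)) - sp.diff(sp.diff(L, rp), eta)

# eq. (r-eom), brought to the form lhs - rhs = 0
paper = (4*sp.pi*eps*a(eta)**4*r(eta)**2
         - 8*sp.pi*sig*a(eta)**3*r(eta)*sp.sqrt(1 - rp**2)
         - sp.diff(4*sp.pi*sig*a(eta)**3*r(eta)**2*rp/sp.sqrt(1 - rp**2), eta))

assert sp.simplify(el - paper) == 0
```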
The classical trajectory after tunneling
emanates from a classical turning point, where the canonical momentum
\begin{equation}
\bar{p}~\equiv~\frac{\partial \mathcal{L}}{\partial \partial_\eta \bar{r}} = 4 \pi \sigma \frac{a^3 \bar{r}^2 \partial_\eta \bar{r}}{\sqrt{1 - \left(\partial_\eta \bar{r}\right)^2}}
\end{equation}
vanishes. It has been pointed out in \cite{KVK96} that by analytic
continuation to complex $\eta$ one can find a classical trajectory\footnote{It
turns out to be a special feature of time-dependent settings that this
trajectory is not along a purely imaginary direction as would be the
case in static settings, where, as a consequence, Euclideanization of
time is a valid prescription.} (in the complex $\eta$ plane) that smoothly
shrinks the bubble to zero size. To this end, it is useful to rewrite
eq.~(\ref{r-eom}) as an equation for $\eta\left(\bar{r}\right)$:
\begin{equation}
4 \pi \epsilon a^4 \bar{r}^2 \partial_{\bar{r}} \eta - 8 \pi \sigma a^3 \bar{r} \sqrt{\left(\partial_{\bar{r}} \eta\right)^2 - 1} = \frac{\rmd}{\rmd \bar{r}} \left[4 \pi \sigma \frac{a^3 \bar{r}^2}{\sqrt{\left(\partial_{\bar{r}} \eta\right)^2 - 1}}\right]~,
\end{equation}
which, after some simplification, becomes
\begin{equation}
\label{eta-ode}
\frac{\epsilon}{\sigma} a \sqrt{\left(\partial_{\bar{r}} \eta\right)^2 - 1} = 2 \frac{\partial_{\bar{r}} \eta}{\bar{r}} + 3 \frac{\partial_\eta a}{a} - \frac{\partial^2_{\bar{r}} \eta}{\left(\partial_{\bar{r}} \eta\right)^2 - 1}~.
\end{equation}
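The algebra leading to eq.~(\ref{eta-ode}) can be spot-checked with SymPy by inserting arbitrary trial profiles for $\eta(\bar{r})$ and $a$; the profiles and sample values below are our own arbitrary choices, not taken from the text:

```python
import sympy as sp

rb = sp.Symbol('rbar', positive=True)
eps, sig = sp.Rational(17, 10), sp.Rational(3, 5)   # arbitrary sample values
eta = rb**2 + 2*rb        # arbitrary trial profile with deta/drb > 1
a = eta**3                # arbitrary trial profile for a(eta(rbar))
de = sp.diff(eta, rb)
root = sp.sqrt(de**2 - 1)

# the equation of motion in its eta(rbar) form, before simplification
lhs = 4*sp.pi*eps*a**4*rb**2*de - 8*sp.pi*sig*a**3*rb*root
rhs = sp.diff(4*sp.pi*sig*a**3*rb**2/root, rb)

# eq. (eta-ode); for a trial profile, da/deta = (da/drb)/(deta/drb)
a_eta = sp.diff(a, rb)/de
ode = (eps/sig)*a*root - 2*de/rb - 3*a_eta/a + sp.diff(eta, rb, 2)/(de**2 - 1)

# (lhs - rhs) must equal the common prefactor times the simplified equation
factor = 4*sp.pi*sig*a**3*rb**2*de/root
residual = (lhs - rhs - factor*ode).subs(rb, sp.Rational(13, 10))
assert abs(float(residual.evalf(30))) < 1e-10
```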
We are looking for the solution with the boundary conditions
\begin{equation}
\label{bc}
\bar{p}\left(\eta_0\right) = \left. 4 \pi \sigma \frac{a^3 \bar{r}^2}{\sqrt{\left(\partial_{\bar{r}} \eta\right)^2 - 1}}\right|_{\eta = \eta_0} = 0~,\qquad\qquad\partial_{\bar{r}}\eta\left(0\right) = 0~.
\end{equation}
The first condition matches the solution to the turning point, from which point on it
remains on the real $\eta$ axis for increasing $\bar{r}$. The second condition
guarantees that the full spherically symmetric solution is regular at the origin
$\bar{r} = 0$. Since $\eta$ will depart from the real axis for radii
smaller than the nucleation radius,
it is clear that one also has to analytically continue the
time-dependent scale factor $a$ to the complex plane. These boundary conditions do
not determine the solution entirely: we are still free to choose the \lq nucleation
time\rq~$\eta_0$. The \lq nucleation radius\rq, i.e.~the coordinate radius
of the bubble at the classical turning point, will accordingly be denoted
as $\bar{r}_0$, and fulfills $\eta\left(\bar{r}_0\right) = \eta_0$. This means that
we have a one-parameter family of solutions labeled by their individual nucleation
times. For each solution, the semiclassical tunneling rate is determined by
the imaginary part of its action:
\begin{equation}
\Gamma\left(\eta_0\right) \sim \exp\left[-2 \mathrm{Im} \mathcal{S}\left(\eta_0\right)\right]~,
\end{equation}
where it is again useful to write $\mathrm{Im} \mathcal{S}\left(\eta_0\right)$
in terms of $\eta\left(\eta_0; \bar{r}\right)$:
\begin{eqnarray}
\label{ImS}
\mathrm{Im} \mathcal{S}\left(\eta_0\right)&=&\mathrm{Im} \int_0^{\bar{r}_0}\!\!\rmd \bar{r} \left[\frac{4 \pi}{3} \epsilon a^4\!\left(\eta\left(\eta_0; \bar{r}\right)\right) \bar{r}^3 \partial_{\bar{r}} \eta\left(\eta_0; \bar{r}\right)\right.\nonumber\\
&&\hspace{75pt}\left.- 4 \pi \sigma a^3\!\left(\eta\left(\eta_0; \bar{r}\right)\right) \bar{r}^2 \sqrt{\left(\partial_{\bar{r}} \eta\left(\eta_0; \bar{r}\right)\right)^2 - 1} \right]~.
\end{eqnarray}
We now have all the tools to compute time-dependent tunneling rates.
Let us turn to some explicit examples.
\subsection{Minkowski spacetime}
Before dealing with time-dependent backgrounds, let us briefly review the
situation in Minkowski spacetime. We can simply set $a = 1$ and $\eta = t$.
Since we know that the solution has to be invariant under boosts, a
natural ansatz for the trajectory is a hyperbola
\begin{equation}
\bar{r}^2 - \left(\eta - \eta_0\right)^2 = \bar{r}_0^2~,
\end{equation}
which yields
\begin{equation}
\label{Mink-sol}
\eta\left(\eta_0; \bar{r}\right) = \eta_0 + \sqrt{\bar{r}^2 - \bar{r}_0^2}~,
\end{equation}
where the positive sign is chosen for the square root, corresponding
to an expanding bubble for $\bar{r} > \bar{r}_0$. Inserting into eq.~(\ref{eta-ode})
one infers
\begin{equation}
\label{Mink-nucr}
\bar{r}_0 = \frac{3 \sigma}{\epsilon}~,
\end{equation}
and one can check that this solution fulfills the boundary conditions
(\ref{bc}). The imaginary part of its action can be found readily
from eq.~(\ref{ImS}),
\begin{equation}
\label{ImS-Mink}
\mathrm{Im}\mathcal{S}_{\mathrm{Mink}} = \frac{\pi^2}{12} \epsilon \bar{r}_0^4 = \frac{27 \pi^2 \sigma^4}{4 \epsilon^3}~,
\end{equation}
which is exactly Coleman's result \cite{Coleman77}.
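This value can be reproduced by evaluating the integral~(\ref{ImS}) numerically along the complex trajectory~(\ref{Mink-sol}). In the sketch below the parameter values are arbitrary, and the branch of the square roots is fixed by hand so that $\mathrm{Im}\,\mathcal{S} > 0$:

```python
import numpy as np
from scipy.integrate import quad

eps, sig = 1.0, 0.5              # arbitrary sample values
r0 = 3.0*sig/eps                 # nucleation radius, eq. (Mink-nucr)

def im_integrand(u):
    # parameterize r = r0*sin(u); under the barrier eta is complex,
    # eta = eta0 - i*sqrt(r0^2 - r^2), on the branch giving Im S > 0
    r = r0*np.sin(u)
    s = np.sqrt(r0**2 - r**2)
    deta = -1j*r/s               # d(eta)/dr on that branch
    root = -1j*r0/s              # sqrt(deta^2 - 1) on the same branch
    term = (4.0*np.pi/3.0)*eps*r**3*deta - 4.0*np.pi*sig*r**2*root
    return (term*r0*np.cos(u)).imag   # Jacobian dr = r0*cos(u)*du

imS, _ = quad(im_integrand, 0.0, np.pi/2)
coleman = 27.0*np.pi**2*sig**4/(4.0*eps**3)   # eq. (ImS-Mink)
assert abs(imS - coleman) < 1e-6*coleman
```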
\subsection{de Sitter spacetime}
Our first example of an expanding universe will be de Sitter spacetime.
Although it can actually be written in static coordinates, the FLRW
metric with \textit{flat} spatial sections has a scale factor which
grows exponentially with time, $a = \exp\left(H t\right)$. Written
in conformal time this becomes $a = -1 / H \eta$, where $\eta$ runs
from $-\infty$ to $0$ as $t$ runs from $-\infty$ to $+\infty$. In
this case, eq.~(\ref{eta-ode}) reads
\begin{equation}
\label{eta-ode-dS}
-\frac{\epsilon}{\sigma H \eta} \sqrt{\left(\partial_{\bar{r}} \eta\right)^2 - 1} = 2 \frac{\partial_{\bar{r}} \eta}{\bar{r}} - \frac{3}{\eta} - \frac{\partial_{\bar{r}}^2 \eta}{\left(\partial_{\bar{r}} \eta\right)^2 - 1}~.
\end{equation}
While finding the complete solution to this problem seems out of reach,
it turns out that we can guess the relevant solution by
sensibly generalizing eq.~(\ref{Mink-sol}). Since de Sitter spacetime
(in flat coordinates) has a constant expansion rate $H$, we expect
that the \textit{proper} nucleation radius should be independent of
time. Now, if $\bar{r}_0$ is to be a \textit{comoving} radius, it
has to be divided by the scale factor, i.e.~instead of
eq.~(\ref{Mink-nucr}) we expect
\begin{equation} \label{dS_exactsolution1}
\bar{r}_0 = a^{-1}\!\left(\eta_0\right) \frac{3 \sigma}{\epsilon} = -H \eta_0 \frac{3 \sigma}{\epsilon}~.
\end{equation}
Remarkably, choosing this nucleation radius is enough to have eq.~(\ref{Mink-sol})
solve eq.~(\ref{eta-ode-dS}) with the boundary conditions (\ref{bc}).
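That the trajectory~(\ref{Mink-sol}) with the comoving nucleation radius~(\ref{dS_exactsolution1}) indeed solves eq.~(\ref{eta-ode-dS}) is easy to confirm numerically at sample points on the post-nucleation branch; the parameter values below are arbitrary choices of ours:

```python
import numpy as np

# arbitrary sample parameters; eta0 < 0 since conformal time runs from -inf to 0
H, eps, sig = 0.7, 1.3, 0.4
eta0 = -1.1
r0 = -3.0*sig*H*eta0/eps            # comoving nucleation radius, eq. (dS_exactsolution1)

resids = []
for r in (1.2*r0, 1.5*r0, 1.8*r0):  # post-nucleation branch, before eta reaches 0
    s = np.sqrt(r**2 - r0**2)
    e = eta0 + s                    # the trajectory, eq. (Mink-sol)
    de, d2e = r/s, -r0**2/s**3
    lhs = -(eps/(sig*H*e))*np.sqrt(de**2 - 1.0)
    rhs = 2.0*de/r - 3.0/e - d2e/(de**2 - 1.0)
    resids.append(abs(lhs - rhs))

assert max(resids) < 1e-9           # the ODE is satisfied at every sample point
```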
The integral of eq.~(\ref{ImS}) is still solvable, and its imaginary
part is found to be
\begin{eqnarray}
\label{ImS-dS}
\mathrm{Im} \mathcal{S}_{\mathrm{dS}}&=&\frac{\pi^2 \epsilon}{3 H^4} \frac{\left(1 - \sqrt{1 + \left(3 H \sigma / \epsilon\right)^2}\right)^2}{\sqrt{1 + \left(3 H \sigma / \epsilon\right)^2}}\nonumber\\
&=&\frac{4 \pi^2 \epsilon}{3 H^4} \sinh^2\!\frac{1}{4} \ln\left(1 + \left(3 H \sigma / \epsilon\right)^2\right)~,
\end{eqnarray}
which is independent of the choice of nucleation time $\eta_0$.
This means that the nucleation rate in de Sitter spacetime is time independent,
which is a manifestation of the fact that de Sitter spacetime has no true
dynamics. In the limit $H \rightarrow 0$, this expression reduces to the Minkowski
result, eq.~(\ref{ImS-Mink}). The first correction
is of order $H^2$ and complies with the expansion obtained by Abbott, Harari
and Park~\cite{AHP87}. We can also take the limit $\epsilon \rightarrow 0$,
which corresponds to the nucleation of a domain wall separating two degenerate
vacua. One finds
\begin{equation}
\lim_{\epsilon \rightarrow 0} \mathrm{Im} \mathcal{S}_{\mathrm{dS}} = \frac{\pi^2 \sigma}{H^3}~,
\end{equation}
in complete agreement with a result obtained by Basu, Guth and Vilenkin~\cite{BGV91}.
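Both forms of eq.~(\ref{ImS-dS}) and the two limits just quoted can be verified numerically; the sample values and tolerances below are our own choices:

```python
import numpy as np

def imS_dS(H, eps, sig):
    # eq. (ImS-dS), second (sinh) form
    x2 = (3.0*H*sig/eps)**2
    return (4.0*np.pi**2*eps)/(3.0*H**4)*np.sinh(0.25*np.log1p(x2))**2

eps, sig = 1.0, 1.0

# the two forms of eq. (ImS-dS) agree
H = 0.3
y = np.sqrt(1.0 + (3.0*H*sig/eps)**2)
first_form = (np.pi**2*eps)/(3.0*H**4)*(1.0 - y)**2/y
assert abs(first_form - imS_dS(H, eps, sig)) < 1e-10*first_form

# H -> 0 recovers the Minkowski action, eq. (ImS-Mink)
mink = 27.0*np.pi**2*sig**4/(4.0*eps**3)
assert abs(imS_dS(1e-3, eps, sig)/mink - 1.0) < 1e-4

# eps -> 0 (degenerate vacua) gives the domain-wall result pi^2*sigma/H^3
assert abs(imS_dS(1.0, 1e-8, sig)/(np.pi**2*sig) - 1.0) < 1e-4
```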
\subsection{More general FLRW spacetimes}
Thus far we have only considered static spacetimes in order to get some
experience using the new tools. Let us finally turn to more general
spacetimes, where the expansion rate is not assumed to be constant. A
simple deformation of de Sitter expansion is given by power law inflation,
where the scale factor grows as $a = \left(\eta_1 / \eta\right)^{1 + \alpha}$.
For small deformation parameters $\alpha$ this is an exact slow-roll
solution of inflation with a constant slow-roll parameter $-\partial_t H / H^2 \approx \alpha$.
$\eta_1$ denotes an (arbitrary) point in time where the scale factor is
normalized to unity. This spacetime is obviously not static, and we shall
use this simple example to study effects of time-dependent cosmological
expansion on tunneling rates. For power law inflation, eq.~(\ref{eta-ode})
reads
\begin{equation}
\frac{\epsilon}{\sigma} \left(\frac{\eta_1}{\eta}\right)^{1 + \alpha} \sqrt{\left(\partial_{\bar{r}} \eta\right)^2 - 1} = 2 \frac{\partial_{\bar{r}} \eta}{\bar{r}} - 3 \frac{1 + \alpha}{\eta} - \frac{\partial_{\bar{r}}^2 \eta}{\left(\partial_{\bar{r}} \eta\right)^2 - 1}~.
\end{equation}
We have been unable to find an analytic solution to this equation and
therefore decided to treat it numerically. Using the boundary conditions
(\ref{bc}) one can find numerical solutions for any choices of $\alpha$,
$\eta_0 / \eta_1$ and $\epsilon / \sigma$. A parameter study of the tunneling rates
reveals the following picture. There are now three different time scales
in the problem. The inverse expansion rate $H^{-1}\left(\eta\right)$ gives the time scale
on which the background (scale factor) changes significantly. But there is
now another time scale related to the change of the expansion rate itself
(higher order derivatives of the expansion rate are zero in power
law inflation). These two time scales have to be compared to the
\lq bubble crossing time\rq , which we define by the nucleation radius
divided by the speed of light, or roughly $3 \sigma / \epsilon$ in our units.
If the bubble crossing time is the smallest time scale of the problem,
the tunneling rate is well approximated by the result of Minkowski spacetime,
eq.~(\ref{ImS-Mink}). However, if the bubble crossing time is not much smaller
than the Hubble time at nucleation, $H^{-1}\!\left(\eta_0\right)$, there are
two different possibilities. Either the characteristic time scale
for the \textit{change} of the expansion rate, $\left|\partial_t H / H\right|^{-1}$, is still much
larger than the bubble crossing time -- then we are in a regime where a quasistatic
approximation is valid such that a good estimate for the tunneling rate can be
obtained from eq.~(\ref{ImS-dS}) by setting $H = H\left(\eta_0\right)$. Or the
bubble crossing time cannot be regarded as small with respect to any other time scale. In
this case, the tunneling process \lq feels\rq~the changing expansion rate, and the
decay rate is modified significantly. Our numerical study clearly indicates that
the tunneling rate is enhanced compared to the quasistatic estimate. We believe
that this is related to the fact that the expansion rate decreases with time.
Instead of setting $H = H\left(\eta_0\right)$ in eq.~(\ref{ImS-dS}), one should
\textit{average} the expansion rate over an interval of one bubble crossing time
prior to nucleation. Using this averaged expansion rate in eq.~(\ref{ImS-dS})
turns out to give a much better estimate for the actual tunneling rate, cf.~Fig.~\ref{figpl}.
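As an illustration of this averaging prescription (not of the full tunneling computation), one can compare the quasistatic estimate with the proper-time-averaged one for power-law inflation, where $H(t) = 1/(\alpha t)$ in proper time. All parameter values below are arbitrary choices of ours, with the bubble crossing time set to two Hubble times as in the parameter study of Fig.~\ref{figpl}:

```python
import numpy as np
from scipy.integrate import quad

def imS_dS(H, eps, sig):
    # quasistatic tunneling action, eq. (ImS-dS), evaluated for a fixed H
    x2 = (3.0*H*sig/eps)**2
    return (4.0*np.pi**2*eps)/(3.0*H**4)*np.sinh(0.25*np.log1p(x2))**2

# power-law inflation in proper time: H(t) = 1/(alpha*t), so -dH/dt/H^2 = alpha
alpha, t0 = 0.2, 1.0
H0 = 1.0/(alpha*t0)

# bubble crossing time fixed to two Hubble times at nucleation
eps = 1.0
sig = 2.0/(3.0*H0)       # i.e. 3*sigma/eps = 2/H(t0)
dt = 3.0*sig/eps

# proper-time average of H over one bubble crossing time before nucleation
H_avg = quad(lambda t: 1.0/(alpha*t), t0 - dt, t0)[0]/dt

quasistatic = imS_dS(H0, eps, sig)
averaged = imS_dS(H_avg, eps, sig)

# H decreases with time, so the average exceeds H(t0); the averaged estimate
# gives a smaller action, i.e. an enhanced tunneling rate
assert H_avg > H0 and averaged < quasistatic
```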
\begin{SCfigure}[5][tp]
\psfrag{ImS}[b][b]{\small $\mathrm{Im}\mathcal{S}~/~\mathrm{Im}\mathcal{S}_\mathrm{dS}$}
\psfrag{epsilon}[t][t]{\small $\left| \partial_t H / H^2 \right|$}
\psfrag{proper}[l][l]{\small proper time average}
\psfrag{conformal}[l][l]{\small conformal time average}
\centering
\includegraphics[width=0.55\textwidth]{pltunneling-ImS2.eps}
\caption{\label{figpl} Numerical values for the imaginary part of the action $\mathcal{S}$ for
tunneling solutions with proper nucleation radius $3 \sigma / \epsilon = 2 H^{-1}\!\left(\eta_0\right)$,
as a function of the slow roll parameter $\partial_t H / H^2$. The values are normalized to the \lq quasistatic
approximation\rq, which is obtained from eq.~(\ref{ImS-dS}) by setting $H = H\left(\eta_0\right)$.
Thus, in this approximation only instantaneous dynamical parameters are taken into account. The tunneling
process, however, has a characteristic time scale given by the light-travel time across the bubble.
Hence it is more appropriate to take into account some \textit{average} dynamics of the background.
In a crude way, this can be accomplished by using an averaged expansion rate to evaluate the tunneling
probability in the quasistatic approximation. The
dashed red line is obtained from a \textit{proper time average} of $H$ taken over an interval
$\Delta t = 3 \sigma / \epsilon$ before nucleation, whereas the dotted green line is obtained from a
\textit{conformal time average} taken over an interval $\Delta \eta = \Delta t / a\left(\eta_0\right)$.}
\end{SCfigure}
\subsection{Spacetimes with Big Bang singularity}
An interesting case which is also relevant for the chain inflation scenario is
a universe filled with radiation and some vacuum energy, such that radiation
dominates its early evolution, while it approaches a vacuum de Sitter phase at
later times. If we neglect the brief era of matter domination, this is also
a good model for our present universe (after reheating has taken place).
During the radiation era, the Hubble rate drops rapidly, and
assuming the vacuum is metastable, one might wonder if this affects tunneling
rates as described above. We can answer this question by comparing the relevant
time scales. If the universe is sufficiently flat, $a \propto t^{1/2}$
as long as radiation dominates over vacuum energy. Note that the universe has a Big
Bang singularity at $t = 0$ as long as reheating and any kind of cosmology before
the radiation era is not taken into account.
Hence, there exists a particle horizon of size $\sim t$.
Also both the Hubble time and the characteristic time scale for the change of the Hubble
rate are of order $\sim t$. Since even tunneling cannot violate causality, bubbles
larger than the particle horizon cannot be produced, at least not in the semiclassical
picture we are using here. This means that the bubble crossing time can never
be much larger than the other relevant time scales, and thus we expect
corrections to the tunneling rates to be small in general.
Moreover, a numerical study indicates that the tunneling rates for bubbles whose
nucleation radius is comparable to the size of the particle horizon become sensitive
to the dynamics of the scale factor up to the vicinity of the Big Bang, such that
details of reheating and even earlier cosmology become relevant.
As soon as vacuum energy begins to dominate, the time scale on which the expansion
rate changes tends to infinity quite rapidly as the Hubble rate approaches a constant.
Hence, for the late part of the evolution, we expect the de Sitter approximation
to be good. Note, however, that a particle horizon still exists and that bubbles should
not exceed the horizon size at any time as long as causal physics is at play. To
eliminate this \lq horizon problem\rq\footnote{Actually, this is nothing but the good
old horizon problem of standard Big Bang cosmology in a new guise, and therefore it can
be solved in the same spirit.}, one has to change the details of the model
near the Big Bang singularity to include any cosmology preceding the era of radiation
domination.
\section{Bubble propagation on dynamical backgrounds}
In this section we will explore the effect of background irregularities on the evolution of the bubble and whether these effects
can potentially be seen by an observer inside the bubble.
To this end we will follow a somewhat different route than in the preceding section,
where we ignored gravitational backreaction of the bubble.
In order to include this backreaction, we will follow the approach used in \cite{Fischler07} and assume that the bubble wall separates spacetime into two parts,
described by different metrics and containing different matter.
These two parts, for convenience called interior and exterior part,
are approximated by the manifolds $\mathcal{M}_-$ and $\mathcal{M}_+$ and are joined along a common timelike,
spherically symmetric hypersurface $\Sigma$ which represents the bubble wall.
Since the work of Israel \cite{Israel} there exists a well established formalism for joining manifolds along a common boundary,
known as the Israel junction conditions.
Once $\mathcal{M}_-$ and $\mathcal{M}_+$ are given, the evolution of the bubble wall uniquely follows from these conditions. Moreover, by this construction, the resulting spacetime is a solution to Einstein's field equations.
For the interior part we use de~Sitter spacetime (which can be thought of as an approximation to the inflationary phase of our observable patch of the universe) and for the exterior part two different cases will be considered.
We will study the evolution of vacuum bubbles in an inhomogeneous background for which the exterior part is approximated by the spherically symmetric Lema\^itre-Tolman-Bondi (LTB) spacetime \cite{Lemaitre,Tolman,Bondi}.
To maintain spherical symmetry it is assumed that the bubble nucleates in the center of the LTB model.
In a similar vein, we will also explore the evolution of vacuum bubbles when the exterior part is given by a FLRW universe filled with a fluid which at some time undergoes a smooth (second-order) phase transition, e.g.~from $w=-1$ to $w=1/3$, or vice versa.
In the first subsection we will introduce the description of the bubble wall and the interior and exterior parts of spacetime.
Thereafter we will give a concise guide on how to calculate the junction equations and write them down explicitly for the cases of our interest. Finally we will solve these equations numerically and discuss the results.
\subsection{Bubble wall and background spacetime}
\subsubsection{Interior: de~Sitter}
The interior of the bubble is assumed to be in a de~Sitter phase with vacuum energy density given by $\Lambda/(8\pi)$.
We employ the flat slicing in which the metric is given by
\begin{equation}
{\rmd s}^2 = -{\rmd t}^2 +\exp\left( 2\sqrt{\Lambda/3}~t\right)\left({\rmd r}^2 +r^2{\rmd\Omega}^2\right)~,
\end{equation}
and stress-energy is given by $T_{\mu\nu} = -\left(\Lambda/8\pi\right) g_{\mu\nu}$.
All quantities should carry the index $(-)$ which indicates that they belong to $\mathcal{M}_-$.
For convenience we write these indices only when necessary,
hoping that it will be clear from the context which quantities are meant.
\subsubsection{Bubble wall}
The timelike, spherically symmetric hypersurface separating the interior and exterior parts of spacetime, the bubble wall,
is described by the metric
\begin{equation}\label{metric_bubble}
{\rmd s}^2 = -{\rmd\tau}^2 + R^2{\rmd\Omega}^2~.
\end{equation}
Though it has been shown~\cite{BGG} that the stress-energy of a wall which separates regions of different vacua will solely be given by the surface tension, it is not clear whether this conclusion is valid when matter is present.
However, in lack of a field theoretic description, we assume that stress-energy is given by
\begin{equation}\label{seTshell}
S_{ij} = -\sigma h_{ij}~,
\end{equation}
where $h_{ij}$ is the metric tensor of~(\ref{metric_bubble}).
The equations of motion of the proper radius $R$ and surface tension $\sigma$ of the bubble will be given by the junction conditions.
\subsubsection{Exterior: LTB and FLRW}
\paragraph{LTB spacetime}
In order to explore the evolution of a bubble in an inhomogeneous background we use an LTB ansatz.
In comoving coordinates a suitable metric \cite{PK} can be given in the form
\begin{equation} \label{metricLTB}
{\rmd s}^2 = -{\rmd t}^2 + \frac{\left(r\partial_r a(t,r) + a(t,r)\right)^2}{1+2E(r)}{\rmd r}^2 + a^2(t,r)r^2{\rmd\Omega}^2~,
\end{equation}
with $2E(r)>-1$ but otherwise arbitrary.
From Einstein's equations with a dust source
$T_{\mu\nu} = \rho\delta_\mu^t\delta_\nu^t -\left(\Lambda/8\pi\right) g_{\mu\nu}\,$, we obtain the equation of motion of the scale factor
\begin{equation} \label{eqmoLTB}
\left(\frac{\partial_t a}{a}\right)^2 - \frac{2E}{a^2r^2} = \frac{2M}{a^3r^3} +\frac{\Lambda}{3}~.
\end{equation}
Here, $M(r)$ is the first integral of motion which corresponds to the active gravitational mass within a sphere of coordinate radius $r$.
Once the scale factor is known, the dust density $\rho$ is determined by
\begin{equation} \label{eqmodust}
8\pi\rho = \frac{2\partial_r M}{a^2r^2\left(r\partial_ra+a\right)}~.
\end{equation}
\paragraph{FLRW spacetime}
In addition, we want to study the motion of bubbles in a background which undergoes a phase transition.
Therefore a flat FLRW spacetime is employed, a comoving coordinate system of which is
\begin{equation}
{\rmd s}^2 = -{\rmd t}^2 +a^2(t)\left({\rmd r}^2 +r^2{\rmd\Omega}^2\right)~,
\end{equation}
with stress-energy given by
\begin{equation}
T_{\mu\nu} = \left(\rho+p\right)\delta_\mu^t\delta_\nu^t + p g_{\mu\nu} -\left(\Lambda/8\pi\right) g_{\mu\nu}~.
\end{equation}
The evolution of this background follows from the Friedmann equation
\begin{equation} \label{eqmoFLRW}
\left(\frac{\partial_t a}{a}\right)^2 = \frac{8\pi}{3}\rho + \frac{\Lambda}{3}~,
\end{equation}
and continuity equation
\begin{equation} \label{conFLRW}
\partial_t \rho +3 \frac{\partial_t a}{a}\left(\rho+p\right) = 0~.
\end{equation}
To look at the influence of a phase transition in the background on the evolution of the bubble,
we artificially\footnote{A universe with two or more components with different equations of state,
like e.g.~radiation and cosmological constant, actually has one or several intrinsic phase transitions.
However, these transitions are very gentle and would probably only produce a minuscule effect.}
introduce an abrupt change in the equation of state $p=w\rho$ via
\begin{equation}
w(t) = -\frac{1}{3}\left(1 \pm 2\tanh(\gamma_\mathrm{pt} (t-t_\mathrm{pt})) \right)~,
\end{equation}
to model a nearly instantaneous (on time scale $\gamma_\mathrm{pt}^{-1} \ll H^{-1}$) phase transition at $t=t_\mathrm{pt}$ from $w=-1$ to $w=1/3$ (\lq reheating\rq)
and vice versa. Solutions to equations~(\ref{eqmoFLRW}) and~(\ref{conFLRW}) cannot be given in closed form
and will be obtained numerically.
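A minimal numerical sketch of this background evolution, for the reheating branch of $w(t)$ with $\Lambda$ set to zero and otherwise arbitrary sample parameters, could look as follows:

```python
import numpy as np
from scipy.integrate import solve_ivp

Lam = 0.0                      # Lambda (sample value)
gamma_pt, t_pt = 50.0, 1.0     # fast transition: gamma_pt^-1 << H^-1

def w(t):
    # the '-' branch of the w(t) ansatz: reheating from w = -1 to w = +1/3
    return -(1.0 - 2.0*np.tanh(gamma_pt*(t - t_pt)))/3.0

def rhs(t, y):
    a, rho = y
    H = np.sqrt(8.0*np.pi/3.0*rho + Lam/3.0)   # Friedmann equation (eqmoFLRW)
    return [a*H, -3.0*H*(1.0 + w(t))*rho]      # continuity equation (conFLRW)

sol = solve_ivp(rhs, (0.0, 3.0), [1.0, 1.0], rtol=1e-10, atol=1e-12,
                dense_output=True)

a1, rho1 = sol.sol(0.5)        # vacuum era: rho is frozen
a2, rho2 = sol.sol(2.0)        # radiation era: rho*a^4 is conserved
a3, rho3 = sol.sol(3.0)
assert abs(rho1 - 1.0) < 1e-6
assert abs(rho2*a2**4/(rho3*a3**4) - 1.0) < 1e-3
```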
\subsection{Conditions for a valid junction}
The problem of joining manifolds across a common boundary has been lucidly explained in the work of Israel~\cite{Israel}.
For a nice introduction see also the textbooks~\cite{Poisson,GH}.
Two conditions arise in the course of joining two manifolds.
The first junction condition requires the induced metrics $h_{ij} =g_{\mu\nu} \mathrm{e}_i^\mu\mathrm{e}_j^\nu$ to coincide on $\Sigma$
\begin{equation} \label{junction1a}
\left[h_{ij}\right] \equiv h_{ij}^+\vert_\Sigma -h_{ij}^-\vert_\Sigma = 0~.
\end{equation}
The second junction condition states that, whenever there is a discontinuity in the extrinsic curvature of $\Sigma$ as seen from
$\mathcal{M}_\pm$, a surface layer of stress-energy $S_{ij}$, given by
\begin{equation} \label{junction2a}
8\pi S_{ij} = \left[K_{ij}\right] - h_{ij}\left[K\right]
\end{equation}
will be present.
Therefore the proposed stress-energy on the bubble wall~(\ref{seTshell})
has to be identified with the difference in the extrinsic curvature.
The components of the extrinsic curvature tensor $K_{ij}$ are defined as the covariant derivative of the vector
$\mathrm{e}_j^\mu$ along $\mathrm{e}_i^\nu$ projected onto the surface normal
\begin{equation}\label{ecT}
K_{ij} = n_\alpha\Gamma^\alpha_{\mu\nu}\mathrm{e}^\mu_i\mathrm{e}^\nu_j~.
\end{equation}
Once the projectors $\mathrm{e}_i^\mu = \partial x^\mu/\partial y^i$ are known,
the normal vector of $\Sigma$ can be obtained by the conditions
\begin{equation}
n_\mu n^\mu = 1 \quad \mathrm{and}\quad n_\mu\mathrm{e}_i^\mu = 0
\end{equation}
up to a sign which determines how $\mathcal{M}_-$ and $\mathcal{M}_+$ are glued together.
We choose this sign such that
\begin{equation}\label{sign_normal}
n_\mu = \sqrt{g_{rr}}\left(-\dot r, \dot t, 0 ,0 \right)~,
\end{equation}
where a dot refers to a partial derivative with respect to $\tau$. This choice implies that in $\mathcal{M}_-$,
radii increase towards $\Sigma$ and in $\mathcal{M}_+$, radii decrease towards $\Sigma$.
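The defining properties of this normal vector are easy to verify numerically, using the first junction condition ${\dot t}^2 = 1 + g_{rr}{\dot r}^2$; the sample numbers below are arbitrary:

```python
import numpy as np

# arbitrary sample values for the metric component and the wall velocity
g_rr = 2.3
rdot = 0.7
tdot = np.sqrt(1.0 + g_rr*rdot**2)      # first junction condition

n = np.sqrt(g_rr)*np.array([-rdot, tdot])   # (n_t, n_r), eq. (sign_normal)
e_tau = np.array([tdot, rdot])              # tangent vector (upper indices)
g_inv = np.diag([-1.0, 1.0/g_rr])           # inverse metric block diag(-1, 1/g_rr)

assert abs(n @ g_inv @ n - 1.0) < 1e-12     # spacelike unit normal
assert abs(n @ e_tau) < 1e-12               # orthogonal to the wall tangent
```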
The continuity equation
\begin{equation}\label{eqmosigma}
\nabla_i S_j^i + \left[T^\alpha_\beta n_\alpha e^\beta_j\right] = 0
\end{equation}
is not independent of the two equations resulting from~(\ref{junction2a}) and will be used in place of one of them.
We consider exterior stress-energy given by a perfect fluid and interior by a cosmological constant.
Therefore the $\tau$ component provides the following first order equation for $\sigma$
\begin{equation}\label{eqmosigma2}
\dot\sigma = \left(\rho+p\right)\sqrt{g_{rr}}\dot r \dot t~.
\end{equation}
In the next subsections we will write down these equations for LTB spacetime.
\subsubsection{Equation of motion for the size and surface tension of the bubble}
By virtue of spherical symmetry of the bubble wall, angular coordinates can be identified
and the time and radial coordinate can be parameterized by the proper time of the bubble $\left(t(\tau),r(\tau)\right)$.
We write down the equations for the LTB part and relegate the FLRW equations to the appendix.
The conditions of the first junction turn out to be
\begin{equation} \label{junction1b}
ar = R, \quad {\dot t}^2 = 1 + \frac{\left(r\partial_r a + a\right)^2}{1+2E}{\dot r}^2~.
\end{equation}
We will make use of the $\theta\theta$-component of the second junction condition which yields
\begin{equation} \label{junction2b}
4\pi\sigma R = \sqrt{ {\dot R}^2 + 1-\frac{\Lambda_-}{3}R^2 } - \sqrt{ {\dot R}^2 + 1 -\frac{2M}{R} -\frac{\Lambda_+}{3}R^2 }~.
\end{equation}
Solving for the derivative we obtain the more convenient form
\begin{equation}\label{Roftau}
\frac{1}{2}{\dot R}^2 + V = -\frac{1}{2}~,
\end{equation}
with $2V$ given by
\begin{equation}\hspace{-50pt}
2V =
-\left[ \frac{\Lambda_-}{3} +\left(\frac{\Lambda_+ -\Lambda_-}{24\pi\sigma} +2\pi\sigma\right)^2 \right]R^2
-\left(1+\frac{\Lambda_+ -\Lambda_-}{48\pi^2\sigma^2}\right)\frac{M}{R}-\frac{M^2}{16\pi^2\sigma^2R^4}~.
\end{equation}
If $M=\mathrm{const}$ all coefficients in the potential are constant and we recover the Schwarzschild-de~Sitter model which has been discussed in~\cite{BGG,BKT,APS,AJ}.
However, since we want to introduce an exterior matter density, $M$ will no longer be constant, i.e. $M(r)=M(R/a)$ and the scale factor of the ambient spacetime will enter the equation.
In this way the motion of the surface becomes sensitive to the presence of matter in the background.
Note that~(\ref{metricLTB}) is covariant under a rescaling of the radial coordinate.
Together with $\partial_r M >0$ this allows one to define a radial coordinate such that
$M(r) = \frac{4\pi}{3}A r^3$ where $A$ is a constant.
The potential becomes
\begin{equation}\label{VLTB}
2V = -\left[\frac{\Lambda_-}{3}
+\left(\frac{A}{3a^3\sigma} +\frac{\Lambda_+ -\Lambda_-}{24\pi\sigma} +2\pi\sigma \right)^2\right]R^2~,
\end{equation}
and the equation of motion for the surface tension is
\begin{equation}
\dot\sigma = \rho\frac{r\partial_r a+a}{\sqrt{1+2E}}\dot r \dot t~.
\end{equation}
It is restricted by equation~(\ref{junction2b}) to
\begin{equation}\label{bound_sigma}
4\pi\sigma < \sqrt{\frac{8\pi A}{3a^3} + \frac{\Lambda_+ -\Lambda_-}{3}} \equiv \sqrt{\frac{8\pi}{3}\epsilon}~.
\end{equation}
Here $\epsilon$ is the difference in energy density between inside and outside.
This bound is a direct consequence of the geometry that was fixed by the sign of the normal vector~(\ref{sign_normal}).
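For constant $M$ (the Schwarzschild-de~Sitter case), the equivalence of eq.~(\ref{junction2b}) with eq.~(\ref{Roftau}) and the potential quoted above can be spot-checked numerically; the parameter values below are arbitrary, chosen such that both radicands are positive:

```python
import numpy as np

# arbitrary sample values; exterior vacuum energy above the interior one
Lm, Lp = 0.0, 3.0            # Lambda_-, Lambda_+
sig, M, R = 0.01, 0.1, 2.0   # surface tension, (constant) mass function, radius

B = (Lp - Lm)/(24.0*np.pi*sig) + 2.0*np.pi*sig
twoV = (-(Lm/3.0 + B**2)*R**2
        - (1.0 + (Lp - Lm)/(48.0*np.pi**2*sig**2))*M/R
        - M**2/(16.0*np.pi**2*sig**2*R**4))
Rdot2 = -1.0 - twoV          # eq. (Roftau): Rdot^2/2 + V = -1/2

lhs = 4.0*np.pi*sig*R        # eq. (junction2b)
rhs = (np.sqrt(Rdot2 + 1.0 - Lm/3.0*R**2)
       - np.sqrt(Rdot2 + 1.0 - 2.0*M/R - Lp/3.0*R**2))
assert abs(lhs - rhs) < 1e-10
```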
\subsubsection{Evolution equations expressed in exterior coordinates}
Since the background dynamics of the LTB (and FLRW) part of the spacetime can be obtained only numerically, all equations will be solved in these coordinates in the first place.
Making use of the junction conditions we are able to write down the evolution equations in terms of the exterior coordinates.
After a solution to these equations has been obtained, the matching conditions will be employed again to express the evolution in terms of the interior coordinates. Like before, we explicitly write down the expressions for the LTB part and provide the FLRW equations in the appendix.
Let $\bar r(t)$ be the bubble radius in these coordinates.
Writing $\dot R = \dot t \frac{d}{dt}\left(a\bar r\right)$ we can solve for $\partial_t \bar r$ and obtain
\begin{equation}\label{posLTB}
\partial_t \bar r =
\frac{-(1+2E)\bar r\partial_t a +\sqrt{(1+2E)\left(1+2V\right)\left(\left(\bar r\partial_t a\right)^2-2E+2V\right)}}
{\left(\bar r\partial_{\bar r} a+a\right)\left(2E-2V\right)}~,
\end{equation}
where we have chosen the positive sign of the square root because we are interested in solutions of physically growing bubbles,
i.e. $\dot R\geq 0$. The equation for $\sigma$~(\ref{eqmosigma2}) can likewise be converted to LTB coordinates
\begin{equation} \label{sigmaLTB}
\partial_t \sigma =
\rho\frac{\left(\bar r\partial_{\bar r} a+a\right)\partial_t \bar r}
{\sqrt{1+2E-\left(\bar r\partial_{\bar r} a+a\right)^2\left(\partial_t \bar r\right)^2}}~.
\end{equation}
The surface tension becomes time dependent.
It increases when the comoving radius of the bubble grows, i.e.~the bubble collects matter from the background; when the bubble shrinks, it must supply exactly the amount of matter density that the background demands in the uncovered region.
This is a limitation of the spacetime junction approach, which in the present form does not capture the
physics of matter transfer across the junction surface. Physically we would expect that dust would actually
penetrate into the bubble, as one can convince oneself by looking at the geodesics of \lq test particles\rq.
However, interior and exterior parts of spacetime are fixed ab initio and cannot be changed by the motion
of the bubble. Accepting this limitation, we continue our analysis and give an outlook
on this issue in our conclusions.
For now, equations~(\ref{posLTB}) and~(\ref{sigmaLTB}) completely determine the evolution of the bubble
in exterior coordinates.
\subsection{Bubble evolution on dynamical backgrounds}
In this section we explore the propagation of bubbles on dynamical backgrounds.
We start with an exact solution of a de~Sitter/de~Sitter spacetime and continue with the numerical solutions obtained for the
de~Sitter/LTB and de~Sitter/FLRW spacetimes.
\subsubsection{Exact solution in de~Sitter/de~Sitter spacetime}
We solve equations~(\ref{posLTB}) and~(\ref{sigmaLTB}) for the case that both spacetimes are de~Sitter with cosmological constants given by $\Lambda_\pm$.
Then $M=0$ and we employ the flat slicing where also $E=0$. Since there is no matter in the background, it follows from equation~(\ref{sigmaLTB}) that $\sigma=\mathrm{const}$. The potential $V$ reduces to
\begin{equation}\label{VdS}
2V = -\left[\frac{\Lambda_-}{3} +\left(\frac{\epsilon}{3\sigma} +2\pi\sigma\right)^2\right] R^2~.
\end{equation}
For further convenience we define
\begin{equation} \label{def_of_u}
u_\pm^2 \equiv \frac{3}{\Lambda_\pm}\left(\frac{\epsilon}{3\sigma} \mp 2\pi\sigma\right)^2~,
\end{equation}
such that $2V = -H_\pm^2(1+u_\pm^2)R^2$, where $H_\pm^2 = \Lambda_\pm/3$.
The solution is valid in $\mathcal{M}_-$ and $\mathcal{M}_+$, with the corresponding quantities $(u_-,\Lambda_-)$ and $(u_+,\Lambda_+)$ respectively.
Equation~(\ref{posLTB}) becomes
\begin{equation}
\frac{2V(a\bar r)}{H}\frac{\partial_t \bar r}{\bar r} = 1 - |u|\sqrt{-2V(a\bar r)-1}~,
\end{equation}
which, when rewritten as a differential equation for $V$, can be
solved by separation of variables. Solving for $\bar r$ yields
\begin{equation} \label{dS_exactsolution2}
\bar r(t) = \sqrt{u^{-2} + \left(\exp\left(-Ht\right) -1\right)^2}H^{-1}~.
\end{equation}
For convenience we normalized the scale factor at the time $t=t_0$ when $\partial_t \bar r(t_0)=0$
and took $t_0=0$ without loss of generality.
This implies
\begin{equation}
\bar r_0^{~\pm} = \left|\frac{\epsilon}{3\sigma} \mp 2\pi\sigma\right|^{-1}~.
\end{equation}
After $t=t_0$ the bubble accelerates and converges to $\bar r(t\rightarrow\infty) = \sqrt{1+u^2}\bar r_0$,
see Fig.~\ref{dS_exact_trajectories2}.
We conclude with the remark that~(\ref{dS_exactsolution2}) reduces to the solution found in~(\ref{dS_exactsolution1}) in the limit of $G\rightarrow 0$.
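As a consistency check of the closed-form trajectory~(\ref{dS_exactsolution2}), the following short sketch (our own illustration, not part of the paper's numerics; all names are ours) evaluates $\bar r(t)$ and verifies the nucleation radius $\bar r_0 = (uH)^{-1}$, the monotonic growth for $t>0$, and the late-time limit $\sqrt{1+u^2}\,\bar r_0$:

```python
import math

def wall_radius(t, u, H):
    """Closed-form bubble-wall trajectory in the flat slicing of
    de Sitter/de Sitter spacetime, eq. (dS_exactsolution2):
    r(t) = sqrt(u^-2 + (exp(-H t) - 1)^2) / H,
    normalized so that the wall is at rest at t = 0."""
    return math.sqrt(u**-2 + (math.exp(-H * t) - 1.0)**2) / H

H, u = 1.0, 0.5
r0 = 1.0 / (u * H)                       # nucleation radius r0 = (uH)^-1
# at t = 0 the wall sits at the nucleation radius
assert math.isclose(wall_radius(0.0, u, H), r0)
# the wall expands monotonically for t > 0 ...
assert wall_radius(1.0 / H, u, H) > wall_radius(0.5 / H, u, H) > r0
# ... and freezes at the finite coordinate radius sqrt(1 + u^2) r0
r_inf = math.sqrt(1.0 + u**2) * r0
assert math.isclose(wall_radius(50.0 / H, u, H), r_inf, rel_tol=1e-9)
```

The saturation at a finite coordinate radius reflects the fact that the wall asymptotically follows the exponential expansion of the background.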
\begin{SCfigure}[5][tp]
\psfrag{Hr}[][]{\small $H\bar r$}
\psfrag{Ht}[][]{\small $Ht$}
\centering
\includegraphics[width=0.55\textwidth]{dS_exact_trajectories2.eps}
\caption{\label{dS_exact_trajectories2}
Trajectories of the bubble wall in the flat slicing of de~Sitter/de~Sitter spacetime for different values of $u\,$; the
parameter $u$ (\ref{def_of_u}) encodes the dependency on $\Lambda_\pm$ and the surface tension $\sigma$.
The bubble expands accelerated from coordinate radius $\bar r_0=u^{-1}H^{-1}$ but converges to the finite coordinate radius
$\sqrt{1+u^2}\bar r_0$ in the limit $t\rightarrow\infty$. The trajectory is shown in de~Sitter coordinates rather than
in physical quantities (proper radius $R$ vs. proper time $\tau$) to make comparison to our later results easier.}
\end{SCfigure}
\subsubsection{Numerical solution in de~Sitter/LTB spacetime}
The goal of this section is to understand the influence of ambient inhomogeneities on the motion of the bubble wall.
The first step into the numerics of the bubble is solving the dynamics of the background.
For an intuitive approach we define $2E(r) = -k(r)r^2$ where the profile $k$ may be interpreted as the local spatial curvature.
Then, the coordinate size of the spatial section, if finite, is determined by $k\left(r_\mathrm{max}\right)r_\mathrm{max}^2 = 1$.
In addition, we have to specify the initial value of the scale factor $a_0(r)\equiv a(t_0,r)$
which will in general depend on the radial coordinate.
Instead, one may equivalently choose the initial dust density $\rho_0(r)\equiv \rho(t_0,r)$, which
defines $a_0(r)$ via equation~(\ref{eqmodust}).
Thus, in coordinates where $M(r) = \frac{4\pi}{3}Ar^3$,
spatial inhomogeneity of the LTB spacetime is incorporated in the functions $k$ and $\rho_0$.
In the limit where $k$ and $\rho_0$ are constant the model becomes homogeneous.
After both functions have been specified, the partial differential equation~(\ref{eqmoLTB}) can directly be integrated at each $r$.
When a solution is found its validity has to be checked.
If $\partial_r(ar)>0$ does not hold, a shell-crossing singularity occurs and the weak energy condition $\rho \geq 0$ is violated.
This restricts the curvature profile $k$: if it is too steep,
it disturbs the background so strongly that the dust density violates the weak energy condition at some time.
We evolve the system until the background space has expanded by about four e-folds.
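For readers who want to reproduce the background evolution, the sketch below integrates the shell-wise ODE with a fourth-order Runge-Kutta step. Since equation~(\ref{eqmoLTB}) is not reproduced in this excerpt, we assume the standard LTB form $(\partial_t a)^2 = 8\pi A/(3a) - k(r) + \Lambda_+ a^2/3$ that follows from $M(r)=\frac{4\pi}{3}Ar^3$ and $2E=-kr^2$; all function names are ours.

```python
import math

def dadt(a, r, A, Lam, k):
    """Assumed standard LTB Friedmann-like equation (expanding branch):
    (da/dt)^2 = 8*pi*A/(3a) - k(r) + Lam*a^2/3."""
    return math.sqrt(8.0 * math.pi * A / (3.0 * a) - k(r) + Lam * a**2 / 3.0)

def evolve_shell(r, a0, A, Lam, k, t_end, n=20000):
    """RK4-integrate a(t, r) at one fixed radius r: as described in the
    text, the PDE decouples into an independent ODE for each shell r."""
    a, dt = a0, t_end / n
    for _ in range(n):
        k1 = dadt(a, r, A, Lam, k)
        k2 = dadt(a + 0.5 * dt * k1, r, A, Lam, k)
        k3 = dadt(a + 0.5 * dt * k2, r, A, Lam, k)
        k4 = dadt(a + dt * k3, r, A, Lam, k)
        a += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return a

# sanity check in the vacuum limit A = 0, k = 0: a(t) = exp(H t), H = sqrt(Lam/3)
a_num = evolve_shell(1.0, 1.0, 0.0, 3.0, lambda r: 0.0, 1.0)
assert math.isclose(a_num, math.exp(1.0), rel_tol=1e-6)
```

In an actual run, one would evaluate `evolve_shell` on a grid of radii and check the shell-crossing condition $\partial_r(ar)>0$ at each output time.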
The initial time of the analysis is $t=t_0$ where $\partial_t \bar r(t_0)=0$.
$t_0$ will be referred to as the time of nucleation of the bubble.
The nucleation radius is determined by the parameters $A, \Lambda_+, \Lambda_-, \sigma_0$.
It can be inferred from equation~(\ref{posLTB}) which we rearrange to
\begin{equation} \label{nuclradius}
\frac{1}{\bar r_0^2} = k(\bar r_0) +a_0^2(\bar r_0)\left(\frac{\epsilon_0}{3\sigma_0}-2\pi\sigma_0 \right)^2~.
\end{equation}
Note that the initial difference in energy density $\epsilon_0$ includes the initial dust density $\rho_0$,
which can be a function of $\bar r_0$, too.
This equation illustrates the route that we will follow.
There are two possibilities, via the functions $k$ and $\rho_0$, to introduce inhomogeneity in the LTB model.
These two cases are considered independently, meaning that one of the two terms on the right hand side will be independent of $r$.
In case there is more than one solution to the equation the smallest positive value will be taken.
Note also that the proper kinetic energy of the bubble at nucleation is proportional to
$ \dot R^2 \vert_{t=t_0} = H_0^2R_0^2$,
where $H_0 \equiv \frac{\partial_t a}{a}(t_0,\bar r_0)$ and $R_0=a_0(\bar r_0)\bar r_0$.
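Equation~(\ref{nuclradius}) is transcendental for general profiles $k$, $a_0$ and $\epsilon_0$. A minimal bisection sketch (our own illustration; the paper's actual solver is not shown, and the profiles are caller-supplied functions) is:

```python
import math

def nucleation_radius(sigma0, eps0, k, a0, r_hi=1e6, tol=1e-12):
    """Solve eq. (nuclradius),
    1/r^2 = k(r) + a0(r)^2 (eps0(r)/(3 sigma0) - 2 pi sigma0)^2,
    by bisection; for monotonically varying profiles the root is unique."""
    f = lambda r: (1.0 / r**2 - k(r)
                   - a0(r)**2 * (eps0(r) / (3 * sigma0) - 2 * math.pi * sigma0)**2)
    lo = 1e-12              # f(lo) > 0: the 1/r^2 term dominates at small r
    hi = r_hi
    while f(hi) > 0.0:      # widen the bracket; assumes a root exists
        hi *= 10.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# flat homogeneous check: k = 0, a0 = 1 gives r0 = |eps0/(3 sigma0) - 2 pi sigma0|^-1
sigma0, eps0 = 0.01, 0.3
r0 = nucleation_radius(sigma0, lambda r: eps0, lambda r: 0.0, lambda r: 1.0)
assert math.isclose(r0, 1.0 / abs(eps0 / (3 * sigma0) - 2 * math.pi * sigma0),
                    rel_tol=1e-8)
```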
\paragraph{Homogeneous limit}
The first thing we will explore is not the effect of inhomogeneity,
but simply what happens when the bubble nucleates in a background where, in addition to vacuum energy density, some dust is present.
To keep things simple we refrain from adding any curvature at this stage and set $k=0$ and $a_0(r)=1$.
The spatial sections of LTB spacetime become homogeneous and reduce to the FLRW limit.
Henceforth, we will always assume that at nucleation the dust density exceeds the exterior vacuum energy,
and for definiteness we choose $8\pi A/3=10^{-4}$ and $\Lambda_+/3=10^{-5}$ in accordance with~\cite{Fischler07}.
It is important to note that this setup already has a significant effect on the evolution of the bubble.
Whereas a bubble that nucleates in vacuum always begins to expand, this is no longer guaranteed as soon as
considerable amounts of matter are present in the environment. This can be seen by the following argument.
The force which accelerates the shell has two contributions: one from the surface tension, which is always
directed inwards, and one from the pressure difference between the interior and exterior fluid.
In the case where both fluids are mere cosmological constants it is easy to show that the pressure
force can sustain the surface tension and will push the shell outwards. However, if the exterior fluid
is mainly composed of pressureless dust, at some point surface tension will outrun pressure support
and the bubble will be forced to collapse.
To make this statement more quantitative we can take another derivative of eq.~(\ref{posLTB}) and examine the behavior
of the bubble when $\partial_t \bar{r} = 0$. In the homogeneous limit one finds
\begin{equation}
\left. \partial_t^2 \bar{r}\right|_{\partial_t \bar{r} = 0}~=~\frac{1}{a} \left(\frac{\Lambda_+ - \Lambda_-}{24 \pi \sigma} - 2 \pi \sigma - \frac{2 \rho}{3 \sigma}\right)~.
\end{equation}
The sign of $\partial_t^2 \bar{r}$ depends on how $\sigma^2$ and $\rho$ compare to
the latent heat of the vacuum, $\epsilon_{\mathrm{vac}} \equiv \left(\Lambda_+ - \Lambda_-\right) / 8 \pi$.
The bubble can only expand into the ambient spacetime if $\rho$ is not too large. In particular,
one can never have an expanding bubble during a matter dominated phase\footnote{Note that this was not
at all an issue in section 2, since background spacetime was assumed spatially homogeneous and any
dust would therefore permeate the bubble. In the present setup, however, the interior is assumed
to be completely empty except for
a possible cosmological constant. In this sense, the issue is one of initial conditions.}.
This strongly limits the possibility
to study the propagation of bubbles into inhomogeneous matter with the current approach, since
the exterior spacetime has to be vacuum dominated in order to allow the bubble to propagate towards the
inhomogeneity in the first place. We have summarized the behavior of freshly nucleated bubbles of vacuum
within a dust environment in Fig.~\ref{sigmabounds}.
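The three regions of Fig.~\ref{sigmabounds} follow from two inequalities. Multiplying the bracket in the acceleration formula by $3\sigma$ gives the expansion condition $\epsilon_{\mathrm{vac}} - 6\pi\sigma^2 - 2\rho > 0$, and with $\epsilon = \rho + \epsilon_{\mathrm{vac}}$ the bound~(\ref{bound_sigma}) becomes $6\pi\sigma^2 < \rho + \epsilon_{\mathrm{vac}}$. A small classifier in the dimensionless coordinates of the figure (our own illustration):

```python
def classify_bubble(x, y):
    """Classify a freshly nucleated vacuum bubble in the plane of
    Fig. (sigmabounds), with x = 6 pi sigma^2 / eps_vac and y = rho / eps_vac.
    The bound (bound_sigma) forbids x >= y + 1; the sign of the initial
    acceleration, eps_vac - 6 pi sigma^2 - 2 rho = (1 - x - 2y) eps_vac,
    separates expanding from contracting bubbles."""
    if x >= y + 1.0:
        return "forbidden"
    return "expanding" if 1.0 - x - 2.0 * y > 0.0 else "contracting"

assert classify_bubble(0.0, 0.0) == "expanding"      # pure de Sitter exterior
assert classify_bubble(0.5, 2.0) == "contracting"    # matter dominated exterior
assert classify_bubble(2.0, 0.5) == "forbidden"      # violates the sigma bound
```

Note that any matter dominated universe has $y > 1$ and therefore $1 - x - 2y < 0$, reproducing the statement that such bubbles always start to contract.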
\begin{SCfigure}[5][tp]
\centering
\psfrag{r/e}[b][b]{\small $\rho / \epsilon_{\mathrm{vac}}$}
\psfrag{6pGs2/e}[t][t]{\small $6 \pi \sigma^2 / \epsilon_{\mathrm{vac}}$}
\psfrag{collapsing}[b][b]{\small \textbf{contracting}}
\psfrag{expanding}[t][t]{\small \textbf{expanding}}
\psfrag{forbidden}[b][b]{\small \textbf{forbidden}}
\psfrag{smallest possible rL}[b][b]{\footnotesize matter dominated universes possible}
\psfrag{vacuum dominated universes}[t][t]{\footnotesize all universes vacuum dominated}
\includegraphics[width=0.55\textwidth]{sigmabounds.eps}
\caption{\label{sigmabounds}
This plot characterizes the early-time behavior of a vacuum bubble after it was nucleated at rest in
the comoving frame of an exterior flat FLRW spacetime with dust and cosmological constant. With
respect to an exterior comoving observer, the bubble shows different behavior in different regions
of the $\sigma^2$-$\rho$-plane, which is drawn in units of $\epsilon_{\mathrm{vac}} \equiv \left(\Lambda_+ - \Lambda_-\right) / 8 \pi$.
The shaded region is forbidden for our choice of junction because $\sigma$ there violates the bound (\ref{bound_sigma}).
If the energy density of dust $\rho$ is chosen above the dashed red line, the bubble starts to contract.
This includes all matter dominated universes $\rho > \rho_{\mathrm{vac}} \equiv \Lambda_+ / 8 \pi$
since we assume $\Lambda_+ > \Lambda_- \geq 0$, which implies $\rho_{\mathrm{vac}} \geq \epsilon_{\mathrm{vac}}$.
Below the dashed red line the bubble starts to expand into the ambient spacetime. This includes all
vacuum de Sitter spacetimes since they are found on the line $\rho = 0$. Below the dotted blue line, all
universes are vacuum dominated.}
\end{SCfigure}
We numerically calculated the trajectories of bubbles in a dust dominated background. As expected
by the argument above, in contrast to the de~Sitter case, bubbles contract as seen from the exterior perspective
(Fig.~\ref{figures_LTB_flat}, left).
In fact, some bubbles contract so fast that the growth of their physical size is decelerated, i.e.~their proper kinetic energy decreases.
Our results show that for small bubbles with $H_0\bar r_0 \lesssim 1$ it even decreases to zero; beyond this point no real classical solution exists and the evolution of the bubble has to be stopped.
When $\bar r_0\gtrsim H_0^{-1}$, bubbles retain some proper kinetic energy but nevertheless shrink in exterior coordinates, converging to a coordinate radius smaller than the coordinate radius at nucleation.
For bubbles larger than $2H_0^{-1}$, even if they fulfill the bound~(\ref{bound_sigma}) initially, the
dust density in the background drops faster than the surface tension of the bubble such that the bound will be violated soon.
Note also that after $\sqrt{\Lambda_+/3}~t_\mathrm{eq}\simeq 0.4$ the bubble moves on a vacuum dominated background.
The contour plot in Fig.~\ref{figures_LTB_flat} illustrates the fate of a bubble as a function of the parameters $(\Lambda_-,\sigma_0)$.
\begin{figure}
\begin{center}
\begin{tabular}{lr}
\psfrag{y}[][]{\scriptsize{$H_0\bar r$}}
\psfrag{x}[][]{\scriptsize{$\sqrt{\Lambda_+/3}~t$}}
\includegraphics[width=0.47\textwidth]{LTB_flat_trajectories.eps}
&
\psfrag{y}[][]{\scriptsize{$\Lambda_-/\Lambda_+$}}
\psfrag{x}[][]{\scriptsize{$4\pi\sigma_0/H_0$}}
\includegraphics[width=0.47\textwidth]{contourplot.eps}
\end{tabular}
\caption{\label{figures_LTB_flat}
Results obtained for the homogeneous limit of the LTB model with $k=0$ and $a_0(r)=1$.
Left figure:
Trajectories of the bubble wall for several values of the surface tension~
$0.3H_0\le 4\pi\sigma_0 \le 0.75H_0$ and~$\Lambda_-=0.1\Lambda_+$.
The fate of the bubble depends on the nucleation size.
Small bubbles contract in LTB coordinates until their proper kinetic energy becomes zero, i.e.~$\dot R = 0$
and the equation of motion becomes imaginary.
The larger the bubble, the more likely it is to retain kinetic energy until the background is dominated by~$\Lambda_+$
and it converges to some finite coordinate radius.
Right figure:
Fate of a bubble as a function of the parameters~$\sigma_0,\Lambda_-$.
We looked in the region of parameter space where the nucleation radius of the bubble is within $0.1<H_0\bar r_0<5$.
The black lines are lines of constant kinetic energy and nucleation radius~$H_0\bar r_0=(0.1,1,2,3,4,5)$.
The black shaded region is considered unphysical since equation~(\ref{bound_sigma}) is already violated there initially.
In the region shaded light blue, bubbles either contract until~$\dot R = 0$, or they hit the geometrical bound~(\ref{bound_sigma}).
Neither occurs in the white region in which bubbles \lq survive\rq~and come to rest at a finite coordinate radius.}
\end{center}
\end{figure}
\paragraph{Inhomogeneous dust density}
We now explore possible effects of inhomogeneity by the
introduction of an initial dust distribution $\rho_0(r)$. The bubble may nucleate either in an overdense or in an underdense region of space.
It is not very revealing to consider a radially decreasing dust profile,
because the dust will be rarefied by the expansion of the background anyway.
Therefore we will have a look at radially increasing dust profiles only.
For a given $\rho_0(r)\,$, the initial scale factor is determined by
\begin{equation}
a_0^3(r) = \frac{3A}{r^3}\int \frac{r^2}{\rho_0(r)}\rmd r.
\end{equation}
We will consider the profile
\begin{equation}\label{dustprofile}
\rho_0(r) = A r^3/r_A^3
\end{equation}
with $r_A=\left(\Lambda_+/3\right)^{-1/2}$.
We expect that small bubbles with $\bar r_0 \ll r_A$ will initially expand, because they find themselves in a vacuum dominated background. Large bubbles with $\bar r_0 \gtrsim r_A$, in contrast, sit in a matter dominated background, and their subsequent evolution is
much like in the homogeneous limit discussed before.
Although the initial dust profile is increasing, expanding bubbles propagate into regions of lower density due to the expansion of the background, see Fig.~\ref{figures_LTB}.
\begin{figure}
\begin{center}
\begin{tabular}{lr}
\psfrag{y}[][]{\scriptsize{$\sqrt{\Lambda_+/3}~\bar r$}}
\psfrag{x}[][]{\scriptsize{$\sqrt{\Lambda_+/3}~t$}}
\includegraphics[width=0.47\textwidth]{LTB_dustprofile_trajectories.eps}
&
\psfrag{y}[][]{\scriptsize{$\rho(t,\bar r)/A$}}
\psfrag{x}[][]{\scriptsize{$\sqrt{\Lambda_+/3}~t$}}
\includegraphics[width=0.47\textwidth]{LTB_dustprofile_dustevolution.eps}\\
\end{tabular}
\caption{\label{figures_LTB}
Results obtained for the inhomogeneous LTB model with the dust profile~(\ref{dustprofile}).
The left figure shows the trajectory of bubbles with initial surface tension
$0.7\sqrt{\Lambda_+/3}\le 4\pi\sigma_0 \le 1.6\sqrt{\Lambda_+/3}$ and~$\Lambda_-=0.1\Lambda_+$.
Smaller bubbles nucleate in a region where vacuum energy dominates over dust density and therefore expand.
Bubbles with $\bar r_0 \simeq r_A$ contract because they are already in a dust dominated background.
The right figure shows the dust density at the position of the bubble. Although the dust profile~(\ref{dustprofile})
radially increases, the expansion of the background dilutes matter efficiently such that expanding bubbles
effectively propagate in a decreasing profile.
}
\end{center}
\end{figure}
\paragraph{Inhomogeneous curvature}
The other possibility is to incorporate inhomogeneity in the neighborhood of the bubble via a curvature profile $k$.
However, in view of the results obtained in the homogeneous limit, the bubble will hardly be able to propagate into that inhomogeneity, because it shrinks as long as the background is matter dominated.
Even if the condition that the bubble nucleates comovingly is relaxed, such that the bubble may have $\partial_t \bar r(t_0)>0$, deceleration is large enough to make the bubble contract almost immediately.
Therefore it seems that studying the effect of curvature inhomogeneity within this approach is hardly feasible.
Note that this result appears to be in contrast to what has been obtained in \cite{Fischler07}.
Nevertheless, to see what happens when a bubble enters a curvature inhomogeneity we make use of a result obtained previously.
In the last section it was shown that bubbles which nucleated in a vacuum dominated region expanded initially.
Now, we add some curvature in \lq front\rq~of the bubble and see what happens when the bubble encounters that inhomogeneity, and
whether there is a difference compared to the corresponding solution in the flat background.
The curvature profile is given by
\begin{equation} \label{eq_curvature_profile}
k(r) = \frac{1}{2\left(\alpha_1 \mathcal{R}_\mathrm{cr}\right)^2}
\left(1+\tanh\left( \frac{\sqrt{\Lambda_+/3}~r-\alpha_3}{\alpha_2} \right)\right)~,
\end{equation}
where $\mathcal{R}_\mathrm{cr} \equiv (4 \pi A \sqrt{\Lambda_+})^{-1/3}$. Taking the dust profile from the last section, we fix the initial surface tension and vacuum energy of the bubble to $\sigma_0=0.6\sqrt{\Lambda_+/3}, \ \Lambda_-=0.1\Lambda_+$.
From the exterior perspective, the evolution of the bubble is significantly affected.
However, this may be just a coordinate effect: when the trajectories are viewed in the interior coordinates, the effect practically vanishes. Nevertheless, there remains an effect on the surface tension of the bubble, see Fig.~\ref{figures_LTB_k}.
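The tanh step~(\ref{eq_curvature_profile}) interpolates between a flat interior region and a curvature plateau of height $(\alpha_1 \mathcal{R}_\mathrm{cr})^{-2}$. The sketch below (our own illustration, with $\mathcal{R}_\mathrm{cr}$ and $\sqrt{\Lambda_+/3}$ set to unity) evaluates the profile and checks both limits:

```python
import math

def curvature_profile(r, alpha1, alpha2, alpha3, Rcr, H):
    """The tanh step of eq. (eq_curvature_profile):
    k(r) = [1 + tanh((H r - alpha3)/alpha2)] / (2 (alpha1 Rcr)^2),
    placing a localized curvature wall 'in front of' the bubble.
    H stands for sqrt(Lambda_+/3)."""
    return (1.0 + math.tanh((H * r - alpha3) / alpha2)) / (2.0 * (alpha1 * Rcr)**2)

# parameters from Fig. (figures_LTB_k): alpha1 = 1.5, alpha2 = 0.02, alpha3 = 0.2
a1, a2, a3, Rcr, H = 1.5, 0.02, 0.2, 1.0, 1.0
assert curvature_profile(0.0, a1, a2, a3, Rcr, H) < 1e-6     # flat before the wall
plateau = 1.0 / (a1 * Rcr)**2                                # asymptotic curvature
assert math.isclose(curvature_profile(1.0, a1, a2, a3, Rcr, H), plateau,
                    rel_tol=1e-9)
```

The small width $\alpha_2 = 0.02$ makes the wall sharp on the Hubble scale, so the bubble meets the inhomogeneity almost abruptly.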
\begin{figure}
\begin{center}
\begin{tabular}{lr}
\psfrag{Hr}[][]{\scriptsize{$\sqrt{\Lambda_+/3} r$}}
\psfrag{y}[][]{\scriptsize{$R_\mathrm{cr}\sqrt{k}$}}
\includegraphics[width=0.47\textwidth]{curvature_profile.eps}
&
\psfrag{y}[][]{\scriptsize{$\sqrt{\Lambda_-/3}~\bar r$}}
\psfrag{x}[][]{\scriptsize{$\sqrt{\Lambda_-/3}~t$}}
\includegraphics[width=0.47\textwidth]{dS_flat_curved_trajectories2.eps}\\
\psfrag{y}[][]{\scriptsize{$\sqrt{\Lambda_+/3}~\bar r$}}
\psfrag{x}[][]{\scriptsize{$\sqrt{\Lambda_+/3}~t$}}
\includegraphics[width=0.47\textwidth]{LTB_flat_curved_trajectories2.eps}
&
\psfrag{y}[][]{\scriptsize{$\sigma/\sigma_0$}}
\psfrag{x}[][]{\scriptsize{$\sqrt{\Lambda_-/3}~t$}}
\includegraphics[width=0.47\textwidth]{dS_flat_curved_sigma2.eps}
\end{tabular}
\caption{\label{figures_LTB_k}
Results obtained for a bubble propagating in an inhomogeneous background (curvature inhomogeneity \textit{and} inhomogeneity in the initial dust profile). Upper left:
the curvature profile~(\ref{eq_curvature_profile}) with $\alpha_1 = 1.5, \alpha_2 = 0.02$ and $\alpha_3 = 0.2$.
Lower left:
Bubble trajectories in the comoving LTB coordinates in a flat background (red) compared to the curved background (green).
The trajectories are affected significantly in these coordinates.
However, if one converts the trajectories to the interior de~Sitter coordinates the effect practically vanishes (upper right).
Nevertheless, there remains an effect on the surface tension of the bubble (lower right).}
\end{center}
\end{figure}
\subsubsection{Numerical solution in de~Sitter/FLRW spacetime}
In this section, the evolution of the bubble on a de~Sitter/FLRW background will be considered.
The FLRW part is supposed to contain vacuum energy and a perfect fluid which undergoes a phase transition i) from~$w=1/3$ to~$w=-1$,
or ii) vice versa.
The FLRW dynamics~(\ref{eqmoFLRW}) and~(\ref{conFLRW}) will be solved numerically with
the initial values~$a(t_0)=1$,~$8\pi\rho_0/3=10^{-4}$ and $\Lambda_+/3=10^{-5}$.
Again~$t_0=0$ shall represent the exterior time coordinate at which~$\partial_t \bar r(t_0)=0$.
The phase transition is supposed to occur at~$t_\mathrm{pt}=0.5H_0^{-1}$, and we set the width $\gamma_\mathrm{pt}^{-1} = 1 \ll H_0^{-1}$ in order to model a nearly instantaneous transition.
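A plausible smooth parametrization of the transition (the paper's exact form in eq.~(\ref{conFLRW}) is not shown in this excerpt and may differ; the function and its name are ours) interpolates the equation of state with a tanh of width $\gamma_\mathrm{pt}^{-1}$ around $t_\mathrm{pt}$:

```python
import math

def eos(t, w_i, w_f, t_pt, gamma_pt):
    """Hypothetical smooth equation-of-state parametrization: w interpolates
    from w_i to w_f around t_pt with width 1/gamma_pt."""
    return w_i + 0.5 * (w_f - w_i) * (1.0 + math.tanh(gamma_pt * (t - t_pt)))

# case i): w = 1/3 -> w = -1, with t_pt = 0.5/H0 and gamma_pt = 1 >> H0
H0 = math.sqrt(1e-4 + 1e-5)           # H0^2 = 8 pi rho0 / 3 + Lambda_+ / 3
t_pt = 0.5 / H0
assert math.isclose(eos(0.0, 1/3, -1.0, t_pt, 1.0), 1/3, abs_tol=1e-12)
assert math.isclose(eos(1.0 / H0, 1/3, -1.0, t_pt, 1.0), -1.0, abs_tol=1e-12)
```

With $\gamma_\mathrm{pt}^{-1} \ll H_0^{-1}$ the interpolation is effectively a step on the expansion time scale, i.e.~a nearly instantaneous transition as stated in the text.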
After these dynamics have been established we consider the evolution of the bubble.
The nucleation radius of the bubble can still be obtained from equation~(\ref{nuclradius}) with $k=0$.
Of course, an immediate effect of the phase transition on the trajectory of the bubble can be seen in the exterior coordinates.
In case i) the contraction of the bubble is stopped when the background becomes vacuum dominated,
whereas in case ii) the expansion of the bubble reverses to contraction due to the phase transition.
However, contrary to the inhomogeneous background discussed before, the effect is still present when the trajectory is expressed in the coordinates of an interior observer, see Fig.~\ref{figures_FLRW}.
Still, the most prominent effect is seen in the surface tension of the bubble.
\begin{figure}
\begin{center}
\begin{tabular}{lr}
\psfrag{y}[][]{\scriptsize{$H_0\bar r$}}
\psfrag{x}[][]{\scriptsize{$H_0 t$}}
\psfrag{W}[][]{\scriptsize{\fbox{$w=1/3 \rightarrow w=-1$}}}
\includegraphics[width=0.47\textwidth]{FLRW_trajectories.eps}
&
\psfrag{y}[][]{\scriptsize{$H_0\bar r$}}
\psfrag{x}[][]{\scriptsize{$H_0 t$}}
\psfrag{W}[][]{\scriptsize{\fbox{$w=-1 \rightarrow w=1/3$}}}
\includegraphics[width=0.47\textwidth]{FLRW_trajectories2.eps}\\
\psfrag{y}[][]{\scriptsize{$\sqrt{\Lambda_-/3}~\bar r$}}
\psfrag{x}[][]{\scriptsize{$\sqrt{\Lambda_-/3}~t$}}
\psfrag{W}[][]{\scriptsize{\fbox{$w=1/3 \rightarrow w=-1$}}}
\includegraphics[width=0.47\textwidth]{dS_trajectories.eps}
&
\psfrag{y}[][]{\scriptsize{$\sqrt{\Lambda_-/3}~\bar r$}}
\psfrag{x}[][]{\scriptsize{$\sqrt{\Lambda_-/3}~t$}}
\psfrag{W}[][]{\scriptsize{\fbox{$w=-1 \rightarrow w=1/3$}}}
\includegraphics[width=0.47\textwidth]{dS_trajectories2.eps}\\
\psfrag{y}[][]{\scriptsize{$\sigma/\sigma_0$}}
\psfrag{x}[][]{\scriptsize{$\sqrt{\Lambda_-/3}~t$}}
\psfrag{W}[][]{\scriptsize{\fbox{$w=1/3 \rightarrow w=-1$}}}
\includegraphics[width=0.47\textwidth]{dS_sigma.eps}
&
\psfrag{y}[][]{\scriptsize{$\sigma/\sigma_0$}}
\psfrag{x}[][]{\scriptsize{$\sqrt{\Lambda_-/3}~t$}}
\psfrag{W}[][]{\scriptsize{\fbox{$w=-1 \rightarrow w=1/3$}}}
\includegraphics[width=0.47\textwidth]{dS_sigma2.eps}
\end{tabular}
\caption{\label{figures_FLRW}
Evolution of bubbles in the de~Sitter/FLRW spacetime for surface tensions $0.4H_0 \leq 4\pi\sigma \leq 0.6H_0$
and $\Lambda_- = 0.1\Lambda_+$.
The left column shows the results of a transition from $w=1/3$ to $w=-1$ and the right column from $w=-1$ to $w=1/3$
compared, respectively, to their counterparts where the equation of state remains constant.
The upper plots show the trajectory of the bubble in exterior coordinates, while the mid plots show the trajectories
as seen from an interior observer.
Unlike for the inhomogeneous background, we see that the exterior phase transition indeed leaves a sizeable effect
on the trajectory of the bubble. In addition, the most prominent effect is again on the surface tension.}
\end{center}
\end{figure}
\section{Conclusions}
First-order phase transitions, as may have occurred multiple times
in the early universe, proceed by spontaneous nucleation of bubbles.
In order to answer the question of how much information about the
initial state \lq survives\rq~the phase transition, it is important to
understand bubble nucleation and propagation beyond the assumption of a trivial initial state. This
question may indeed be relevant for certain cosmological scenarios,
such as the chain inflation proposal. In this particular scenario, a
series of first-order phase transitions proceeds very quickly, such
that the time in between two transitions is too short for the universe
to dilute all inhomogeneities and thermal radiation produced in
each transition.
We have seen that the dynamics of the background spacetime affects
the calculation of semiclassical tunneling probabilities. A simple
comparison of time scales helps to decide if this effect is relevant.
In cases where the bubble crossing time (the nucleation radius of
the bubble divided by the speed of light) is much smaller than any time
scale of the background geometry, we found that it is well justified
to use tunneling rates obtained from field theory on Minkowski spacetime.
Noting that this tunneling probability drops exponentially with
increasing nucleation radius, it seems reasonable to assume that the
bubbles in fact nucleate at tiny radii whenever we demand an appreciable
tunneling rate which can lead to an onset of the phase transition
before the universe reaches (approximately) a vacuum state. Hence, using
Minkowski spacetime as an approximation appears self-consistent in many
cases. However, if the expansion rate of the universe is very large,
there may be cases where the effect of background evolution on tunneling
rates can be important. In the context of the chain inflation proposal,
we expect this to be the case very early on during a radiation dominated
phase, shortly after a previous phase transition.
The existence of a particle horizon in a radiation dominated FLRW
universe renders the nucleation of bubbles larger than this horizon
impossible. This means that one has to have better knowledge of the
spacetime near the singularity, including details about reheating and
any cosmology preceding the radiation era, in
order to avoid this \lq horizon problem\rq . In other words, the pure
radiation dominated FLRW universe is not a useful approximation for a
semiclassical calculation of nucleation rates of bubbles larger than
the particle horizon, since those rates would be sensitive to the
cosmology at the Big Bang.
Concerning the subsequent evolution of comovingly nucleated vacuum bubbles in non-vacuum backgrounds,
we have seen that already the presence of homogeneously distributed matter has a
significant influence on the bubble. Unlike in de~Sitter spacetime where the
proper kinetic energy of a bubble increases exponentially, it may decrease in the presence of
matter in the background. For small bubbles, the proper kinetic energy became zero
and real classical solutions could not be obtained beyond this point. As has been pointed out in
\cite{Fischler07}, such bubbles do not correspond to classical configurations, but should
be interpreted as mere fluctuations.
Bubbles with greater radius have more proper kinetic energy and are able to survive until the matter
density in the background has been diluted sufficiently. We note, however, that the setup in
which we study bubble trajectories, in particular the choice of initial conditions, is somewhat ad hoc.
To settle this issue, one would have to solve the tunneling problem with matter \textit{and} gravity,
an enterprise on which we did not embark in this work.
We also studied the effect of inhomogeneities in the background on the propagation of vacuum bubbles.
The results show that the trajectory of a bubble is affected, from the point of view of an exterior observer,
when it propagates into an inhomogeneous background. Since it is not clear whether this is just a coordinate effect,
we have looked at the trajectory as seen by an interior observer. This observer hardly
sees any influence of ambient inhomogeneities on the trajectory of the bubble.
Nevertheless, when viewed from the inside, there remains an effect on the surface tension of the bubble.
Furthermore, bubbles moving in an FLRW spacetime which itself undergoes a phase transition have been considered.
Similar to the inhomogeneous case, a large effect is seen in the exterior coordinates.
Unlike in that case, however, an appreciable effect in the trajectory of the bubble remains when converted to interior coordinates.
This raises the question whether those perturbations of the bubble wall will lead to potentially observable effects.
In the context of bubble collisions \cite{Aguirre08,Chang08}, it has been pointed out that a disturbance in the
trajectory of the bubble wall may lead to a redshift of the reheating surface and could therefore potentially be observable in the CMB.
However, the most prominent of the effects of inhomogeneity or phase transitions in the background
is found in the surface tension of the bubble. This is a consequence of the
spacetime junction approach. As soon as a bubble propagates in a matter environment the evolution of
its surface tension is determined by the demand of the background.
This means that an expanding bubble \lq collects\rq\ matter from the background while a contracting bubble
will lose the amount of energy required by the space it uncovers.
Therefore it is necessary to
find a proper treatment of energy transfer through the bubble surface. Rather than fixing interior and
exterior spacetime ab initio, one should dynamically construct those spacetimes from initial conditions.
To solve this problem, an additional equation is needed, which arises from a proper dynamical description
of the surface tension. It should capture the physics of matter transfer across the junction hypersurface
and should probably be motivated from a field theoretical point of view. We hope to make progress in this
direction in our future work.
\section{Introduction}
In the recent literature a lot of interest has been dedicated to the
question of how inflationary models \cite{Guth00,Linde07} can be embedded
into a general theory \cite{McAllister07}. In this respect the emergence
of the string theory landscape \cite{Susskind03} has opened various
possibilities and a completely new perspective. Due
to flux compactification in string theory there can be a huge number of
distinct vacua \cite{Douglas06} in scalar field space without a unique
physical selection mechanism available at present.
Classically, a field that is stuck in one
vacuum would be trapped forever and
could never move through the landscape. Quantum mechanically, the vacua
are rendered metastable by the possibility for any field to tunnel to
another (metastable) vacuum and thereby
probe the landscape. The rate of tunneling naturally depends on
the energy difference between the vacua and on the height and width of
the potential barrier. The transition by pure tunneling can be
described by the Coleman-De Luccia instanton \cite{CDL80} whereas a
transition over the barrier due to thermal fluctuations can be described
by the Hawking-Moss instanton \cite{HawkingMoss}.
\begin{figure}[t]
\psfrag{vphi}[t][t]{$V(\phi)$\hspace{-30pt}}
\psfrag{phi1}[t][t]{$\phi_1$\hspace{-16pt}}
\psfrag{phi2}[t][t]{$\phi_2$\hspace{-20pt}}
\psfrag{dS}[t][t]{dS}
\centerline{\includegraphics[width=0.84\textwidth]{doub_poten3.eps}}
\caption{We investigate the effects on the tunneling and the
evolution of bubbles of new vacuum when the precursor state is
non-vacuum. The simplistic sketch shows such a tunneling event that
can be imagined in the context of rapid tunneling in the landscape.
We are interested in the case when tunneling is rapid enough such
that the background had no time to relax to vacuum and is, for
instance, undergoing a cosmological phase transition. This
complication can be relevant in the context of chain inflation
\cite{Freese05}--\cite{Ashoorioon08b} or other models
that invoke rapid tunneling, such as DBI or resonance tunneling
\cite{Sarangi07}, \cite{Tye06}. For some particular examples of non-vacuum
backgrounds (such as radiation dominated FLRW or inhomogeneous
LTB), we analyze the effects on the semiclassical tunneling rate
as well as on the subsequent general relativistic evolution of
the formed new vacuum bubbles.}
\label{fig:potential}
\end{figure}
The Coleman-De Luccia process gives rise to the nucleation of
spherically symmetric regions (bubbles) in space which are filled
with new vacuum and expand into the old vacuum
-- a first-order phase transition. So, in the landscape multiple
tunneling from multiple metastable vacua can occur, leading to
different patches of spacetime undergoing coeval inflation in a
variety of vacua. This process leads to a very complicated (fractal)
large-scale structure of the universe that has been termed the
attractor of eternal inflation. It is an attractor in the sense that
statistical properties of the large-scale universe asymptotically do
not depend on initial conditions \cite{Garriga05}.
The probability per unit four-volume for a tunneling to occur may be
written as
\begin{equation}
\Gamma = A \exp\left(-2 \mathrm{Im}\mathcal{S}\right)~,
\end{equation}
where $A$ is a determinantal factor and $\mathcal{S}$ is the action of
the instanton mediating the tunneling process. The prefactor $A$ is
notoriously hard to obtain and is usually assumed to be of order
unity for most practical purposes. The instanton action $\mathcal{S}$
can be obtained by solving the field equations with appropriate
boundary conditions. In many cases the layer separating the two phases
can be thought of as a domain wall with a thickness that is small
compared to the size of the bubble (\lq thin-wall approximation\rq).
With this assumption a calculation in Minkowski spacetime \cite{Coleman77}
yields $\mathrm{Im}\mathcal{S} = 27 \pi^2 \sigma^4 / 4 \epsilon^3\,$, with
$\epsilon$ the latent heat and $\sigma$ the energy density in the surface layer.
It is often assumed that $\sigma \gg \epsilon$ (in Planck units)
and thus tunneling is suppressed by a huge
exponential factor, which implies extremely long lifetimes of the
metastable vacua. However, any given point will eventually enter the
decay chain, no matter what the odds are.
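To get a feel for the size of this suppression, the following sketch (not part of the original analysis; the values of $\sigma$ and $\epsilon$ are purely illustrative) evaluates the thin-wall exponent $2\,\mathrm{Im}\mathcal{S} = 27\pi^2\sigma^4/(2\epsilon^3)$:

```python
import math

def thin_wall_exponent(sigma, epsilon):
    """Twice the imaginary part of the flat-space thin-wall instanton
    action: 2 * Im S = 2 * (27 * pi^2 * sigma^4 / (4 * epsilon^3))."""
    return 2 * (27 * math.pi**2 * sigma**4 / (4 * epsilon**3))

# Purely illustrative values (Planck units) with sigma >> epsilon:
exponent = thin_wall_exponent(sigma=0.1, epsilon=0.01)
# Gamma ~ A * exp(-exponent); even for these mild values the rate is
# suppressed by a factor of order exp(-13000).
```

Already for such moderate hierarchies between $\sigma$ and $\epsilon$ the metastable vacuum is extraordinarily long-lived.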
In this work we will study bubbles which nucleate from a non-vacuum
initial state, that is inside pockets which do not obey the de~Sitter
symmetries because they are not vacuum dominated at the time when
tunneling takes place, see Fig.~\ref{fig:potential}. We look at
tunneling from a precursor state that is, say, in a radiation
or a matter dominated phase (but not yet vacuum dominated), and which
produces an inflationary bubble of de~Sitter vacuum. One may object
that a large class \cite{Wald83,Jensen86,Kitada92,Nobre09} of
inflating spacetimes, if they include a positive cosmological
constant, will always be vacuum dominated in the asymptotic future
(the cosmic no-hair conjecture~\cite{HawkingMoss,Gibbons77,Starobinski83}).
Together with the aforementioned long lifetimes of metastable vacua
in the landscape this seems to render the question of bubble
nucleation from anything other than the vacuum academic. However,
there are recent proposals of inflation where tunneling between
minima on the landscape occurs \emph{rapidly}. This is the case,
e.g., in the model of chain inflation\footnote{
In chain inflation, tunneling occurs stepwise through a sequence of
many minima. Chain inflation
resurrects the old inflationary idea of a first-order phase
transition but is able to solve the problem of graceful exit.
Originally, this was accomplished because the fields were assumed
to be coupled. The coupling is responsible for rapid tunneling:
once a first tunneling has occurred the field increases the decay
probability of its neighbor(s) further up in the landscape due to
coupling. A chain reaction of rapid tunneling starts that
eventually ends with a (slow) transition to true vacuum.
However, chain inflation does not seem to allow for eternal
inflation because it would not produce the right
primordial density fluctuations \cite{Feldstein06}. In \cite{Freese06}
a concrete realization of chain inflation on the string landscape that
is driven by four form fluxes has been proposed.} \cite{Freese05,Watson06,Feldstein06,Freese06,Huang07,Chialva08a,Chialva08b,Ashoorioon08a,Ashoorioon08b}
or for non-standard tunneling on the landscape via DBI or
resonance tunneling \cite{Sarangi07}, \cite{Tye06}. In the case of
rapid tunneling -- in chain inflation every transition only
yields a fraction of an e-fold -- the efficiency of the cosmic
no-hair mechanism can be questioned. Therefore, we argue that
the question of the consequences of rapid inflation through a
pocket that has possibly not yet relaxed to pure vacuum deserves
some attention. We will attack this problem in a twofold way.
On the one hand, we will look at the tunneling itself and on the
other hand, we will study the classical evolution of a vacuum bubble
in a non-vacuum environment.
Concerning the former, one would for instance expect that the instanton
action picks up some geometrical corrections due to the non-trivial
background. Any such modifications are potentially interesting,
since the tunneling rates are exponentially sensitive to them. We
will show how one can assess the relevance of those
effects, based on a comparison of characteristic time scales.
Geometrical corrections become important if some dynamical time
scale of the background is comparable to or smaller than the
characteristic time scale of the tunneling process, which is given
by the bubble's proper nucleation radius. This may happen either when
the nucleation radius is very large -- but then tunneling rates are
so small that the background geometry will relax to a de~Sitter
stage long before transition occurs -- or when the background
dynamics is characterized by some very short time scales.
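This criterion can be phrased very crudely as a comparison of two numbers: in the flat-space thin-wall limit the proper nucleation radius is $R_0 = 3\sigma/\epsilon$ \cite{Coleman77}. The sketch below (our own rough illustration with made-up numbers, not a substitute for the analysis of section 2) encodes the comparison:

```python
def nucleation_radius(sigma, epsilon):
    """Flat-space thin-wall nucleation radius, R0 = 3 * sigma / epsilon
    (Coleman's classic result; c = 1 units)."""
    return 3 * sigma / epsilon

def corrections_relevant(sigma, epsilon, t_background):
    """Crude version of the criterion in the text: geometrical
    corrections matter when the background's dynamical time scale is
    comparable to or smaller than the proper nucleation radius."""
    return t_background <= nucleation_radius(sigma, epsilon)

# Made-up numbers in Planck units: R0 = 30 here, so a slowly evolving
# background leaves the rate untouched, a rapidly evolving one does not.
assert not corrections_relevant(sigma=0.1, epsilon=0.01, t_background=1e6)
assert corrections_relevant(sigma=0.1, epsilon=0.01, t_background=10.0)
```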
The subsequent classical evolution of bubbles in
such an environment will also be modified with respect to the
vacuum case. We will study the
propagation of vacuum bubbles in the presence of i) homogeneously
distributed matter, ii) matter with an inhomogeneous radial
profile and iii) a fluid undergoing a second-order phase transition.
These simple toy scenarios shall give a taste of the phenomenology
which may result from non-vacuum initial states.
Besides chain inflation, the results of this analysis can be
important also for other scenarios. For instance, the influence of
background inhomogeneity can be relevant in the context of
landscape sampling by tunneling. While resonant processes can
enhance the tunneling rate on the landscape \cite{Tye06},
inhomogeneous initial states may be of particular importance for
this sort of tunneling \cite{Saffin08,Copeland07}.
The approach in our work is partly inspired by \cite{Fischler07},
where it was studied how an inhomogeneous and spherically
sym\-metric background influences the classical evolution of an
inflationary bubble of new vacuum immersed into it. Within the limits
of the model used there, it was claimed in \cite{Fischler07} that ambient
inhomogeneities do not alter the evolution of bubbles
significantly as long as the weak energy condition is
respected. On the other hand, this topic also touches the
interesting issue of inhomogeneous initial conditions versus the
onset of inflation, see
e.g.~\cite{Goldwirth91,Calzetta:1992gv,KurkiSuonio93,Deruelle94,Iguchi96,Berera00}.
The paper is organized as follows. In section 2 we present
results that show the influence of dynamical backgrounds on the
nucleation rate of bubbles of new vacuum. To this end, we apply an
extension of the usual semiclassical approach to cosmologically
interesting time-dependent FLRW backgrounds such as power law inflation
or radiation dominated universes. The nucleation rates are obtained
in the thin-wall approximation by using a complex time path formalism.
In section 3, we analyze the subsequent classical trajectory of bubbles.
In addition to FLRW backgrounds\footnote{We thank Ben Freivogel for this suggestion.},
we look at the evolution in an exact
spherically symmetric and inhomogeneous spacetime containing also matter.
The interior of the bubble is assumed to be de~Sitter spacetime,
as a first approximation to the inflationary phase
of the patch of the universe that we are observing. The bubble evolution
follows from the Israel junction method which is employed to
join these spacetimes together. We ask whether signatures of the background
can potentially be seen by an observer inside the bubble.
In section 4 we summarize our results and give conclusions as well
as some remarks on the limits of the methods used, and an outlook.
We use units $c = \hbar = G = 1$, and sign conventions for
geometrical quantities in accordance with \cite{MTW}.
\section{Introduction}
In online prediction problems, one needs to provide predictions over a stream of inputs, while attempting to learn from the data and improve the predictions. Unlike offline settings, where the learning phase over a training set is decoupled from the testing phase, here the two are intertwined, and we cannot afford to slow down.
The standard models of online prediction consider a serial setting, where the inputs arrive one by one, and are processed by a single processor. However, in large-scale applications, such as search engines and cloud computing, the rate at which inputs arrive may necessitate distributing the computation across multiple cores or cluster nodes. A non-trivial challenge is to design distributed algorithms for online prediction, which maintain regret guarantees as close as possible to the serial case (that is, the ideal case where we would have been able to process all inputs using a single, sufficiently fast processor).
In \cite{DMB}, we presented the DMB algorithm, a template for converting any serial online learning algorithm into a distributed algorithm. For a wide class of such algorithms, \cite{DMB} showed that when the loss function is smooth and the inputs are stochastic, the DMB algorithm is asymptotically optimal. Specifically, the regret guarantee of the DMB algorithm is identical in its leading term to the regret guarantee of the serial algorithm, including the constants. Also, the algorithm can be easily adapted to stochastic optimization problems, with an asymptotically optimal speedup in the convergence rate, by using a distributed system as opposed to a single processor.
However, the DMB algorithm
makes several assumptions that may not be realistic in all distributed settings.
These assumptions are:
\begin{itemize}
\item All nodes work at the same rate.
\item All nodes are working properly throughout the execution of the algorithm.
\item The network connecting the nodes is stable during the execution of the algorithm.
\end{itemize}
These assumptions are not always realistic. Consider for example a multi-core CPU. While the last two assumptions are reasonable in this environment, the first one is invalid since other processes running on the same CPU may cause occasional delays on some cores (e.g., \cite{PKP03}). In massively distributed, geographically dispersed systems, all three assumptions may fail to hold.
In this companion paper to \cite{DMB}, we focus on adding robustness to the DMB algorithm, and present two methods to achieve this goal. In \secref{sec:leader} we present ways in which the DMB algorithm can be made robust using a master-workers architecture, and relying on the robustness of off-the-shelf methods such as leader election algorithms or databases.
In \secref{sec:async}, we present an asynchronous version of the DMB algorithm
that is robust with a fully decentralized architecture.
\section{Background}
We begin by providing a brief background on the setting and the DMB algorithm. The background is deliberately terse, and we refer the reader to \cite{DMB} for the full details.
We assume that we observe a stream of inputs $z_{1},z_{2},\ldots$, where
each~$z_{i}$ is sampled independently from a fixed unknown
distribution over a sample space $\Zcal$. Before observing each
$z_{i}$, we predict a point $w_{i}$ from a convex set~$\convexset$. After
making the prediction~$w_{i}$, we observe~$z_{i}$ and suffer the loss
$f(w_{i},z_{i})$, where $f$ is a predefined loss function, assumed to be convex in its first argument. We may now
use~$z_{i}$ to improve our prediction mechanism for the future (e.g.,
using a stochastic gradient method). The goal is to accumulate the
smallest possible loss as we process the sequence of inputs. More
specifically, we measure the quality of our predictions on $m$ examples using the notion of \emph{regret}, defined as
\[
R(m) = \sum_{i=1}^m \left( f(w_i,z_i) - f(w^\star,z_i) \right),
\]
where $w^\star = \argmin_{w \in \convexset} \E_z[f(w,z)]$.
Note that the regret $R(m)$ is a random variable, since it depends on~$m$
stochastic inputs.
For simplicity, we will focus on bounding the expected regret.
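As a concrete (toy) instance of this setting, the following sketch runs serial stochastic gradient descent on the squared loss $f(w,z)=(w-z)^2$ with uniform inputs, accumulating the regret against $w^\star = \E[z]$; the loss, input distribution, and step sizes are our own illustrative choices, not taken from \cite{DMB}:

```python
import random

def online_gradient_descent(m, eta=0.1, seed=0):
    """Serial online prediction on f(w, z) = (w - z)^2 with
    z ~ Uniform[0, 1].  Returns the regret against the minimizer of the
    expected loss, w* = E[z] = 0.5 (a toy instance of the setting)."""
    rng = random.Random(seed)
    w, w_star = 0.0, 0.5
    regret = 0.0
    for i in range(1, m + 1):
        z = rng.random()
        regret += (w - z) ** 2 - (w_star - z) ** 2
        # stochastic gradient step with a 1/sqrt(i) decaying rate
        w -= (eta / i ** 0.5) * 2 * (w - z)
    return regret
```

With the $1/\sqrt{i}$ step size the accumulated regret grows only sublinearly, so the average regret per input vanishes as $m$ grows.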
We model our distributed computing system as a set of nodes, each of which is an independent processor, and a network that enables the nodes to communicate with each other. Each node receives an incoming stream of examples from an outside source, such as a load balancer/splitter. As in the real world, we
assume that the network has a limited bandwidth, so the nodes cannot
simply share all of their information, and that messages sent over the
network incur a non-negligible latency.
The ideal (but unrealistic) solution to this online prediction problem is to run a standard serial algorithm on a single ``super'' processor that is sufficiently fast to handle the stream of examples. This solution is optimal, simply because any distributed algorithm can be simulated on a fast-enough single processor. The optimal regret that can be achieved by such serial algorithms on $m$ inputs is $O(\sqrt{m})$. However, when we choose to distribute the computation, the regret performance might degrade, as the communication between the nodes is limited. Straightforward approaches, as well as previous approaches in the literature, all yield regret bounds which are at best $O(\sqrt{km})$, where $k$ is the number of nodes in the system. Thus, the regret degrades rapidly as more nodes are utilized.
In \cite{DMB}, we present the DMB algorithm, which has the following two important properties:
\begin{itemize}
\item It can use a wide class of gradient-based update rules for serial online
prediction as a black box, and convert it into a parallel or
distributed online prediction algorithm.
These serial online algorithms include (Euclidean) gradient descent,
mirror descent, and dual averaging.
\item
If the loss function~$f(w,z)$ is smooth in~$w$ (namely, its gradient is Lipschitz), then the DMB algorithm attains an asymptotically optimal regret bound of $O(\sqrt{m})$. Moreover, the
coefficient of the dominant term~$\sqrt{m}$ is the same as in the
serial bound, which is \emph{independent} of~$k$ and of the network
topology.
\end{itemize}
The DMB algorithm is based on a theoretical observation that, for smooth loss
functions, one can prove regret bounds for serial gradient-based algorithms
that depend on the variance of the stochastic gradients.
To simplify discussions, we use $\psi(\varp^2,m)$ to denote such variance
bounds for predicting~$m$ inputs, where $\varp^2$ satisfies
\[
\forall\, w \in \convexset, \qquad \E_z \left[ \bigl\| \nabla_w f(w,z)
- \nabla_w \E_z [ f(w,z) ] \bigr\|^2 \right] \leq \varp^2 ~.
\]
For example, we show in \cite{DMB} that for both mirror-descent (including
classical gradient descent) and dual averaging methods, the expected regret
bounds take the form
\[
\psi(\varp^2, m) = 2D^2\smoothp+2D\varp\sqrt{m},
\]
where $\smoothp$ is the Lipschitz parameter of the loss gradient $\nabla_w f(w,z)$, and $D$ quantifies the size of the set $\convexset$ from which the predictors are chosen. As a result, it can be shown that applying a serial gradient-based algorithm on \emph{averages} of gradients, computed on independent examples with the same predictor $w$, will reduce the variance in the resulting regret bound.
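The variance-reduction effect of averaging is easy to verify empirically. The sketch below (a toy model of our own: the gradient of a quadratic loss with uniform inputs) estimates the variance of an average of $b$ i.i.d.~stochastic gradients and exhibits the $\varp^2/b$ scaling:

```python
import random

def gradient_variance(b, trials=20000, seed=0):
    """Empirical variance of an average of b i.i.d. stochastic
    gradients.  Toy model: f(w, z) = (w - z)^2 at w = 0 with
    z ~ Uniform[0, 1], so a single gradient is -2z with variance
    4 * Var(z) = 1/3."""
    rng = random.Random(seed)
    samples = [
        sum(-2 * rng.random() for _ in range(b)) / b
        for _ in range(trials)
    ]
    mean = sum(samples) / trials
    return sum((s - mean) ** 2 for s in samples) / trials

# Averaging b gradients cuts the variance by (roughly) a factor of b,
# which is what lets DMB recover the serial regret constant.
```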
In a nutshell, the DMB algorithm uses the distributed network in order to rapidly accumulate gradients with respect to the same fixed predictor $w$. Once a mini-batch of sufficiently many gradients has been accumulated (the batch size is parameterized by $b$), the nodes collectively perform a vector-sum operation, which allows each node to obtain the average of these $b$ gradients. This average is then used to update their predictor, using some gradient-based online update rule as a black box. Note that the algorithm is inherently synchronous, as all nodes must use the same predictor and perform the averaging computations and updates at the same time. A detailed pseudo-code and additional details appear in \cite{DMB}.
The regret analysis for this algorithm is based on a parameter $\mu$,
which bounds the number of inputs processed by the system during the
vector-sum operation.
The gradients for these~$\mu$ inputs are not used for updating the predictor.
While $\mu$ depends on the network structure and communication latencies, it does not scale with the total number of examples $m$ processed by the system. Formally, the regret guarantee is as follows:
\begin{theorem} \label{thm:synchronous}
Let $f$ be an $\smoothp$-smooth convex loss function and assume that
the stochastic gradient $\nabla_w f(w,z_i)$ has $\varp^2$-bounded variance
for all~$w\in\convexset$.
If the online update rule used by the DMB algorithm has the serial regret bound $\regretbound(\varp^2, m)$, then the expected regret of the DMB algorithm over $m$ examples is at most
\[
(b+\mu)\,\regretbound\left(\frac{\varp^2}{b},
\left\lceil\frac{m}{b+\mu}\right\rceil\right) ~.
\]
Specifically, if $\regretbound(\varp^2, m)=2D^2\smoothp+2D\varp\sqrt{m}$,
and the batch size is chosen to be $b=m^\rho$ for any $\rho\in (0,1/2)$, the expected regret is $2D\varp\sqrt{m} + o(\sqrt{m})$.
\end{theorem}
Note that for serial regret bounds of the form $2D^2\smoothp+2D\varp\sqrt{m}$, we indeed get an identical leading term in the regret bound for the DMB algorithm, implying its asymptotic optimality.
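A quick numerical check of the theorem's bound (with illustrative constants of our own choosing) confirms that the overhead of DMB over the serial bound is subdominant:

```python
import math

def serial_bound(D, L, sigma, m):
    """psi(sigma^2, m) = 2 * D^2 * L + 2 * D * sigma * sqrt(m)."""
    return 2 * D**2 * L + 2 * D * sigma * math.sqrt(m)

def dmb_bound(D, L, sigma, m, b, mu):
    """(b + mu) * psi(sigma^2 / b, ceil(m / (b + mu))), as in the
    theorem; averaging b gradients replaces sigma by sigma/sqrt(b)."""
    m_eff = math.ceil(m / (b + mu))
    return (b + mu) * serial_bound(D, L, sigma / math.sqrt(b), m_eff)

# Illustrative constants; batch size b = m^rho with rho = 1/4 in (0, 1/2).
D, L, sigma, mu, m = 1.0, 1.0, 1.0, 10, 10**8
b = round(m ** 0.25)
ratio = dmb_bound(D, L, sigma, m, b, mu) / serial_bound(D, L, sigma, m)
# ratio is close to 1: both bounds share the leading term 2*D*sigma*sqrt(m).
```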
\section{Robust Learning with a Master-Workers Architecture}\label{sec:leader}
The DMB algorithm presented in \cite{DMB} assumes that all nodes are making similar progress. However, even in homogeneous systems, which are
designed to support synchronous programs, this is hard to achieve
(e.g., \cite{PKP03}), let alone in grid environments in which each
node may have different capabilities. In this section, we present a variant of the DMB algorithm that adds the following properties:
\begin{itemize}
\item It runs on heterogeneous clusters, whose nodes may have varying processing rates.
\item It can handle dynamic network latencies.
\item It supports randomized update rules.
\item It can be made robust using standard fault tolerance techniques.
\end{itemize}
To provide these properties, we convert the DMB algorithm to work with
a single master and multiple workers. Each of the workers receives
inputs and processes them at its own pace. Periodically, the worker
sends the information it collected, i.e., the sum of gradients, to
the master. Once the master has collected sufficiently many gradients,
it performs an update and broadcasts the new predictor to the
workers. We call this algorithm the \emph{master-worker distributed
mini-batches} (MaWo-DMB) algorithm. For a detailed description of
the algorithm, see \algref{alg:MaWoW} for the worker algorithm and
\algref{alg:MaWoM} for the master algorithm.
\begin{algorithm}[t]
\DontPrintSemicolon
initialize $w$\;
$j = 1$\;
count = 0\;
$\hat g$ = 0\;
\While{not end of data}
{
\If {master message $\mathrm{m} = (w, j)$ and $\mathrm{m}.j > j$}
{
$w~:=\mathrm{m}.w$\;
$j~:=~\mathrm{m}.j$\;
$\hat g~:=~0$\;
count$~:=~0$\;
}
\If {did not send message for the past $t$ time--units and count $>0$}
{
send the message ($\hat g$, count, $j$) to the master\;
$\hat g~:=~0$\;
count$~:=~0$ \;
}
predict $w$\;
receive input $z$ and suffer loss $f(w,z)$\;
compute gradient $\nabla_w f(w,z)$\;
$\hat g ~:=~\hat g + \nabla_w f(w,z)$\;
count $~:=~$ count + 1\;
}
\caption{MaWo-DMB worker algorithm.}
\label{alg:MaWoW}
\end{algorithm}
\begin{algorithm}[t]
\DontPrintSemicolon
$j = 1$\;
count = 0\;
$\hat g$ = 0\;
\While{not end of data}
{
receive message m = ($\hat g$, count, $j$) from a worker\;
\If {$\mathrm{m}.j = j$}
{
$\hat g ~:=~ \hat g + \mathrm{m.}\hat g$ \;
count $~:=~$ count + m.count \;
\If {count $\geq$ b}
{
$\bar{g}_{j} ~:=~ \frac {\hat g}{\text{count}}$ \;
use $\bar{g}_{j}$ to compute updated predictor $w_{j+1}$\;
$j~:=~j+1$ \;
count $~:=~$ 0\;
$\hat g ~:=~ 0$\;
broadcast ($w_{j}$,$j$)\;
}
}
}
\caption{MaWo-DMB master algorithm.}
\label{alg:MaWoM}
\end{algorithm}
This algorithm uses a slightly different communication protocol than the DMB algorithm. We assume that the network supports two operations:
\begin{enumerate}
\item Broadcast master $\rightarrow$ workers: the master sends updates to the workers.
\item Message worker $\rightarrow$ master: periodically, each worker sends a message to the master with the sum of gradients it has collected so far.
\end{enumerate}
One possible method to implement these services is via a database. Using a database, each worker can write the gradients it has collected to the database,
and check for updates from the master. At the same time, the master can check periodically to see if sufficiently many gradients have accumulated in the
database. When there are at least $b$ gradients accumulated, the master performs an update and posts the result in a designated place in the database.
This method provides nice robustness features to the algorithm, as discussed in \secref{sec:MW robust}.
\subsection{Properties of the MaWo-DMB algorithm}
The MaWo-DMB algorithm exhibits the same asymptotic behavior as the DMB
algorithm (e.g.\ as discussed in \thmref{thm:synchronous}). The proof for the DMB algorithm applies to this algorithm as well. To get the optimal rate, we only need to bound the number $\mu$ of inputs whose gradients are not used in the computation of the next prediction point. A coarse bound on this number can be given as follows: let $M$ be the maximal number of inputs per time--unit, let $T$ be the time difference between messages sent from each worker to the
master, let $\tau_u$ be the time it takes the master to perform an
update, and let $\tau_c$ be the maximal time it takes to send a
message between two points in the network. Using this notation, the
number of inputs dropped in each update is at most
$M(T+2\tau_c+\tau_u)$. Specifically, let $t$ be the time when the
master encounters the $b$'th gradient. Inputs that were processed
before time $t - T - \tau_c$ were received by the master. Moreover, at
time $t + \tau_u + \tau_c$ all of the workers have already received
the updated prediction point. Therefore, only inputs that were
processed between $t - T - \tau_c$ and $t + \tau_u + \tau_c$ might be
dropped. Clearly, there are at most $M(T + 2\tau_c + \tau_u)$ such
inputs.
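For concreteness, plugging some made-up system parameters into the bound $\mu \le M(T + 2\tau_c + \tau_u)$ gives:

```python
def mu_bound(M, T, tau_c, tau_u):
    """Bound from the text on the number of inputs whose gradients miss
    an update: mu <= M * (T + 2 * tau_c + tau_u)."""
    return M * (T + 2 * tau_c + tau_u)

# Made-up figures: 1000 inputs per time-unit, workers report every
# 5 time-units, 1 time-unit one-way latency, 3 time-units per update.
assert mu_bound(M=1000, T=5, tau_c=1, tau_u=3) == 10000
```

Crucially, this quantity does not grow with $m$, which is why it only affects the lower-order terms of the regret.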
While asymptotically the MaWo-DMB algorithm exhibits the same performance as the DMB algorithm, it does have some additional features. First, it allows
workers of different abilities to be used. Indeed, if some workers can process more inputs than other workers, the algorithm can compensate for
that. Moreover, the algorithm does not assume that the number of inputs each worker handles is fixed in time. Furthermore, workers can be added
and removed during the execution of the algorithm.
The DMB algorithm assumes that the update rule is deterministic. This is essential since each node computes the update, and it is assumed that
they reach the same result. However, in the MaWo-DMB algorithm, only the master calculates the update and sends it to the rest of the nodes;
therefore, the nodes all share the same point even if the update rule is randomized.
\subsection{Adding Fault Tolerance to the MaWo-DMB algorithm}\label{sec:MW robust}
The MaWo-DMB algorithm is not sensitive to the stability of the workers. Indeed, workers may be added and removed during the execution of the algorithm.
However, if the master fails, the algorithm stops making updates. This is a standard problem in master-worker environments. It can be solved
using leader election algorithms such as the algorithm of \cite{GHS83}. If the workers do not receive any signal from the master
for a long period of time, they start a process by
which they elect a new leader (master). \cite{MWW00} proposed a leader election algorithm for ad-hoc networks. The advantage
of this kind of algorithm for our setting is that it can manage dynamic networks where the network can be partitioned and reconnected. Therefore,
if the network becomes partitioned, each connected component will have its own master.
Another way to introduce robustness to the MaWo-DMB algorithm is by selecting the master only when an update step is to be made. Assume that there
is a central database and all workers update it. Every $T$ time--units, each worker performs the following
\begin{enumerate}
\item lock the record in the database
\item add the gradients computed to the sum of gradients reported in the database
\item add the number of gradients to the count of the gradients reported in the database
\end{enumerate}
At this point, the worker checks if the count of gradients exceeds
$b$. If it does not, the worker releases the lock and returns to
processing inputs. However, if the number of gradients does exceed
$b$, the worker performs the update and broadcasts the new prediction
point (using the database) before unlocking the database and becoming
a worker again.
This simple modification we just described creates a distributed master such that any node in the system can be removed without
significantly affecting the progress of the algorithm. In a sense, we are leveraging the reliability of the database system (see e.g., \cite{BHG87, DeanBrock, bigtable})
to convert our algorithm into a fault tolerant algorithm.
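The lock-based distributed-master protocol admits a compact sketch; the following simulation (our own illustration, using an in-process lock and record in place of a real database) lets whichever worker completes a batch play the master:

```python
import threading

class SharedRecord:
    """In-process stand-in for the central database record described in
    the text: a locked record holding the gradient sum and count."""
    def __init__(self, b, update_rule):
        self.lock = threading.Lock()
        self.b = b                    # mini-batch size
        self.update_rule = update_rule
        self.grad_sum = 0.0
        self.count = 0
        self.w = 0.0                  # currently broadcast predictor
        self.version = 0

    def report(self, local_grad_sum, local_count):
        """Called by a worker every T time-units: add local gradients
        and, if the batch is full, act as the (temporary) master."""
        with self.lock:
            self.grad_sum += local_grad_sum
            self.count += local_count
            if self.count >= self.b:
                avg = self.grad_sum / self.count
                self.w = self.update_rule(self.w, avg)
                self.version += 1
                self.grad_sum, self.count = 0.0, 0

record = SharedRecord(b=4, update_rule=lambda w, g: w - 0.5 * g)
threads = [
    threading.Thread(target=record.report, args=(2.0, 2)) for _ in range(4)
]
for th in threads:
    th.start()
for th in threads:
    th.join()
```

Because the update runs under the same lock that guards the record, at most one worker acts as the master at any time, mirroring the role of the database's record lock.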
\section{Robust Learning with a Decentralized Architecture}\label{sec:async}
In the previous section, we discussed asynchronous algorithms based on a master-workers paradigm. Using off-the-shelf fault tolerance methods, one can design simple and robust variants, capable of coping with dynamic and heterogeneous networks.
That being said, this kind of approach also has some
limitations. First of all, access to a shared database may not be feasible, particularly in massively distributed environments. Second, utilizing leader election algorithms is potentially wasteful, since by the
time a new master is elected, some workers or local worker groups
might have already accumulated more than enough gradients to perform a
gradient update. Moreover, what we really need is in fact more complex
than just electing a random node as a master: electing a
computationally weak or communication-constrained node will have
severe repercussions. Also, unless the communication network is
fully connected, we will need to form an entire DAG (directed acyclic graph)
to relay gradients from the workers to the elected master. While both
issues have been studied in the literature, addressing them complicates the
algorithms and increases the time required for the election process,
again leading to potential waste. In terms of performance guarantees,
it is hard to come up with explicit time guarantees for these
algorithms, and hence the effect on the regret incurred by the system
is unclear.
In this section, we describe a robust, fully decentralized and
asynchronous version of DMB, which is not based on a master-worker
paradigm. We call this algorithm \emph{asynchronous} DMB, or ADMB for
brevity. We provide a formal analysis, including an explicit regret
guarantee, and show that ADMB shares the advantages of DMB in
terms of dependence on network size and communication latency.
\subsection{Description of the ADMB Algorithm}
We assume that communication between nodes takes place along some bounded-degree acyclic graph. In addition, each node has a unique numerical index. We will generally use $i$ to denote a given node's index, and let $j$ denote the index of some neighboring node.
Informally, the algorithm works as follows: each node $i$ receives examples, accumulates gradients with respect to its current predictor (which we shall denote as $w_i$), and uses batches of $b$ such gradients to update the predictor. Note that unlike the MaWo-DMB algorithm, here there is no centralized master node responsible for performing the update. Also, for technical reasons, the predictions themselves are not made with the current predictor $w_i$, but rather with a running average $\bar{w}_i$ of the predictors computed so far.
Each node occasionally sends its current predictor and accumulated gradients to its neighboring nodes. Given a message from a node $j$, the receiving node $i$ compares its state to the state of node $j$. If $w_i=w_j$, then both nodes have been accumulating gradients with respect to the same predictor. Thus, node $i$ can use these gradients to update its own predictor $w_i$, so it stores these gradients. Later on, these gradients are sent in turn to node $i$'s neighbors, and so on. Each node keeps track of which gradients came from which neighboring nodes, and ensures that no gradient is ever sent back to the node from which it came. This allows for the gradients to propagate throughout the network.
An additional twist is that in the ADMB algorithm, we no longer insist on all nodes sharing the exact same predictor at any given time point. Of course, this can lead to each node using a different predictor, so no node will be able to use the gradients of any other node, and the system will behave as if the nodes all run in isolation. To prevent this, we add a mechanism, which ensures
that if a node $i$ receives from a neighbor node $j$ a ``better'' predictor than its current one, it will switch to using node $j$'s predictor. By ``better'', we mean one of two things: either $w_j$ was obtained based on more predictor updates, or $j<i$. In the former case, $w_j,\bar{w}_j$ should indeed be better, since they are based on more updates. In the latter case, there is no real reason to prefer one or the other, but we use an order of precedence between the nodes to determine who should synchronize with whom. With this mechanism, the predictor with the most gradient updates is propagated quickly throughout the system, so either everyone starts working with this predictor and share gradients, or an even better predictor is obtained somewhere in the system, and is then quickly propagated in turn - a win-win situation.
We now turn to describe the algorithm formally. The algorithm has two global parameters:
\begin{itemize}
\item $b$: As in the DMB algorithm, $b$ is the number of gradients whose average is used to update the predictor.
\item $t$: This parameter regulates the communication rate between the nodes. Each node $i$ sends a message to each of its neighbors every $t$ time--units.
\end{itemize}
Each node $i$ maintains the following data structures:
\begin{itemize}
\item A \emph{node state} $S_i=(w_i,\bar{w}_i,v_i)$, where
\begin{itemize}
\item $w_i$ is the current predictor.
\item $\bar{w}_i$ is the running average of predictors actually used for prediction.
\item $v_i$ counts how many predictors are averaged in $\bar{w}_i$. This is also the number of updates performed according to the online update rule, in order to obtain $w_i$.
\end{itemize}
\item A vector $g_i$ and associated counter $c_i$, which hold the sum of gradients computed from inputs serviced by node $i$.
\item For each neighboring node $j$, a vector $g_i^j$ and associated counter $c_i^j$, which hold the sum of gradients received from node $j$.
\end{itemize}
When a node $i$ is initialized, all the variables discussed above are set to zero. The node then begins the execution of the algorithm. The protocol is composed of three event-driven functions: the first function (Algorithm \ref{alg:asyncfunc} below) is executed when a new request for prediction arrives, and handles the processing of that example. The second function (Algorithm \ref{alg:asyncsend}) is executed every $t$ time--units, and sends messages to the node's neighbors. The third function (Algorithm \ref{alg:asyncreceive}) is executed when a message arrives from a neighboring node. The functions also use a subroutine \texttt{update\_predictor} (Algorithm \ref{alg:updatepredictor}) to update the node's predictor when needed. For simplicity, we will assume that each of these three functions is executed atomically (namely, only one of the functions runs at any given time). While this assumption can easily be relaxed, it allows us to avoid a tedious discussion of shared resource synchronization between the functions.
\begin{algorithm}
\DontPrintSemicolon
Predict using $\bar{w}_i$\;
Receive input $z$, suffer loss and compute gradient $\nabla_{w} f(w_i,z)$\;
$g_i:=g_i+\nabla_{w} f(w_i,z)$~,~$c_i:=c_i+1$\;
\If{$c_i+\sum_j c_i^j \geq b$}{\texttt{update\_predictor}\;}
\caption{ADMB Algorithm: Handling a new request} \label{alg:asyncfunc}
\end{algorithm}
\begin{algorithm}
\DontPrintSemicolon
For each neighboring node $j'$, send message $\left(i,S_i,g_i+\sum_{j\neq j'}g_i^j,c_i+\sum_{j\neq j'}c_i^j\right)$
\caption{ADMB Algorithm: Sending Messages (Every $t$ Time--Units) } \label{alg:asyncsend}
\end{algorithm}
\begin{algorithm}
\DontPrintSemicolon
Let $(j,S_j,g,c)$ be the received message\;
\eIf{$S_j.v_j>v_i$ or ($S_j.v_j=v_i$ and $S_j.w_j\neq w_i$ and $j<i$)}
{
$S_i:=S_j$~,~$g_i:=0$~,~$c_i:=0$\;
$g_i^j:=g$~,~$c_i^j:=c$~,~and $\forall j'\neq j$~~$g_i^{j'}:=0$~,~$c_i^{j'}:=0$\;
}
{
\If{$S_j.w_j=w_i$}
{
$g_i^j:=g$~,~$c_i^j:=c$\;
\If{$c_i+\sum_j c_i^j \geq b$}
{\texttt{update\_predictor}\;}
}
}
\caption{ADMB Algorithm: Processing Incoming Message} \label{alg:asyncreceive}
\end{algorithm}
\begin{algorithm}
\DontPrintSemicolon
Use the averaged gradient $\frac{g_i+\sum_j g_i^j}{c_i+\sum_j c_i^j}$ to compute the updated predictor $w_{i}$\;
$\bar{w}_i ~:=~ \frac{v_i}{v_i+1} \bar{w}_i+\frac{1}{v_i+1}w_i$\;
$v_i:= v_i+1$~,~$g_i:= 0$~,~$c_i:= 0$\;
$\forall j$~~$g_i^j:= 0$~,~$c_i^j:= 0$\;
\caption{\texttt{update\_predictor} Subroutine} \label{alg:updatepredictor}
\end{algorithm}
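For concreteness, one node's data structures and the \texttt{update\_predictor} subroutine might look as follows in Python (a sketch under our own assumptions: we instantiate the abstract gradient-based update rule with plain gradient descent using a step size \texttt{eta}, and all identifiers are ours, not the paper's):

```python
import numpy as np

class Node:
    """One ADMB node: state (w, w_bar, v) plus local and per-neighbor gradient sums."""

    def __init__(self, dim, neighbors, b, eta=0.1):
        self.w = np.zeros(dim)        # current predictor w_i
        self.w_bar = np.zeros(dim)    # running average of predictors
        self.v = 0                    # number of predictor updates
        self.g = np.zeros(dim)        # sum of locally computed gradients
        self.c = 0                    # count of locally computed gradients
        self.gn = {j: np.zeros(dim) for j in neighbors}  # g_i^j
        self.cn = {j: 0 for j in neighbors}              # c_i^j
        self.b, self.eta = b, eta

    def accumulate(self, grad):
        """Handle one processed example's gradient; update when b are collected."""
        self.g += grad
        self.c += 1
        if self.c + sum(self.cn.values()) >= self.b:
            self.update_predictor()

    def update_predictor(self):
        total_c = self.c + sum(self.cn.values())
        avg_grad = (self.g + sum(self.gn.values())) / total_c
        self.w = self.w - self.eta * avg_grad            # assumed update rule
        self.w_bar = (self.v * self.w_bar + self.w) / (self.v + 1)
        self.v += 1
        self.g[:] = 0
        self.c = 0
        for j in self.gn:
            self.gn[j][:] = 0
            self.cn[j] = 0

# Tiny demonstration: b = 2 gradients of 1.0 trigger one descent step.
node = Node(dim=1, neighbors=[], b=2, eta=1.0)
node.accumulate(np.array([1.0]))
node.accumulate(np.array([1.0]))   # triggers update_predictor
```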
It is not hard to verify that due to the acyclic structure of the network, no single gradient is ever propagated to the same node twice. Thus, the algorithm indeed works correctly, in the sense that the updates are always performed based on independent gradients. Moreover, the algorithm is well-behaved in terms of traffic volume over the network, since any communication link from node $i$ to node $j$ passes at most $1$ message every $t$ time--units, where $t$ is a tunable parameter.
As with the MaWo-DMB algorithm, the ADMB algorithm has desirable robustness properties: it tolerates heterogeneous nodes, nodes being added or removed, and communication latencies. Moreover, it is robust to network failures: even if the network is split into two (or more) partitions, we simply end up with two (or more) networks which implement the algorithm in isolation. The system can continue to run and its output will remain valid, although the predictor update rate will become somewhat slower until the failed nodes are replaced. Note that unlike the MaWo-DMB algorithm, there is no need to wait until a master node is elected.
\subsection{Analysis}\label{subsec:analysis}
We now turn to discuss the regret performance of the algorithm. Before we begin, it is important to understand what kind of guarantees are possible in such a setting. In particular, it is not possible to provide a total regret bound over all the examples fed to the system, since we have not specified what happens to the examples which were sent to malfunctioning nodes (whether they were dropped, rerouted to a different node, and so on). Moreover, even if nodes behave properly in terms of processing incoming examples, the performance of components such as the interaction with neighboring nodes might vary over time in complex ways, which are hard to model precisely.
Instead, we will isolate a set of ``well-behaved'' nodes, and focus on the regret incurred on the examples sent to these nodes. The underlying assumption is that the system is mostly functional for most of the time, so the large majority of examples are processed by such well-behaved nodes. The analysis will focus on obtaining regret bounds over these examples.
To that end, let us focus on a particular set of $k'$ nodes, which form a connected component of the communication framework, with diameter $d'$. We say these nodes are \emph{good} if they all implement the ADMB algorithm at a reasonably fast rate. More precisely, we require the following from each of the $k'$ nodes:
\begin{itemize}
\item Executing each of the three functions defining the ADMB algorithm takes at most one time--unit.
\item The communication latency between two adjacent nodes is at most one time--unit.
\item The $k'$ nodes receive at most $M$ examples every time--unit.
\end{itemize}
As to other nodes, we only assume that the messages they send to the good
nodes reflect a correct node state, as specified earlier. In particular, they may be arbitrarily slow or even completely unresponsive.
First, we show that when the nodes are good, up-to-date predictors from any single node will be rapidly propagated to all the other nodes. This shows that the system has good recovery properties (e.g. after most nodes fail).
\begin{lemma}\label{lem:predprop}
Assume that at some time point, the $k'$ nodes are good, and at least one of them has a predictor based on at least $v$ updates. If the nodes remain good for at least $(t+2)d'$ time--units, then all nodes will have a predictor based on at least $v$ updates.
\end{lemma}
\begin{proof}
Let $i$ be the node with the predictor having at least $v$ updates. Counting from the time point defined in the lemma, at most $t+2$ time--units will elapse until all of node $i$'s neighbors receive a message from node $i$ with its predictor, and either switch to this predictor (and then have a predictor with at least $v$ updates), or keep their own predictor (which can only happen if it was already based on at least $v$ updates). In either case, during the next $t+2$ time--units, each of those neighboring nodes will send a message to its own neighbors, and so on. Since the distance between any two nodes is at most $d'$, the result follows.
\end{proof}
The next result shows that when all nodes are good and have a predictor based on at least $v$ updates, not too much time will pass until they will all update their predictor.
\begin{theorem}\label{thm:fastupdates}
Assume that at some time point, the $k'$ nodes are good, and every one of them has a predictor with $\geq v$ updates (not necessarily the same one). Then after the nodes process at most
\[
b+2(t+2)d'M
\]
additional examples, all $k'$ nodes will have a predictor based on at least $v+1$ updates.
\end{theorem}
\begin{proof}
Consider the time point mentioned in the theorem, where every one of the $k'$ nodes, and in particular the node $i_0$ with smallest index among them, has a predictor with $\geq v$ updates. We now claim that after processing at most
\begin{equation}\label{eq:timespan2}
(t+2)d' M
\end{equation}
examples, either some node in our set has a predictor with $\geq v+1$ updates, or every node has the same predictor based on $v$ updates. The argument is similar to \lemref{lem:predprop}: every node will switch to the predictor propagated from node $i_0$, unless some node obtains a predictor with more updates along the way. In either case, at most $(t+2)d'$ time--units will pass, during which at most $(t+2)d' M$ examples are processed.
So suppose we are now at the time point, where either some node had a predictor with $\geq v+1$ updates, or every node had the same predictor based on $v$ updates. We now claim that after processing at most
\begin{equation}\label{eq:timespan3}
b+(t+2)d'M
\end{equation}
examples, every node in our set obtains a predictor with $\geq v+1$ updates. To justify \eqref{eq:timespan3}, let us consider first the case where every node had the same predictor based on $v$ updates. As shown above, the number of time--units it takes any single gradient to propagate to all $k'$ nodes is at most $(t+2)d'$. Therefore, after $T$ time--units have elapsed, each node will have accumulated and acted upon all the gradients computed by all nodes up to time $T-(t+2)d'$. Since at most $M$ examples are processed each time--unit, it follows that after processing at most $b+(t+2)d'M$ examples, all nodes will update their predictors, as stated in \eqref{eq:timespan3}.
We still need to consider the second case, namely that some good node had a predictor with $\geq v+1$ updates, and we want to bound the number of examples processed till all nodes have a predictor with $\geq v+1$ updates. But this was already calculated to be at most $(t+2)d'M$, which is smaller than \eqref{eq:timespan3}. Thus, the time bound in \eqref{eq:timespan3} covers this case as well.
Adding \eqref{eq:timespan2} and \eqref{eq:timespan3}, the theorem follows.
\end{proof}
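As a purely illustrative instantiation (the numbers are ours, not from the analysis): with $b=100$ gradients per batch, communication parameter $t=1$, diameter $d'=3$, and at most $M=10$ examples per time--unit, the bound of \thmref{thm:fastupdates} reads
\[
b+2(t+2)d'M \;=\; 100 + 2\cdot(1+2)\cdot 3\cdot 10 \;=\; 280,
\]
so all $k'$ nodes are guaranteed to advance to at least $v+1$ updates within at most $280$ processed examples.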
With these results in hand, we can now prove a regret bound for our algorithm. To do so, define a \emph{good time period} to be a time period during which:
\begin{itemize}
\item All $k'$ nodes are good, and were also good for $(t+2)d'$ time--units prior to that time period.
\item The $k'$ nodes handled $b+2(t+2)d'M$ examples overall.
\end{itemize}
As to other time periods, we will only assume that at least \emph{one} of the $k'$ nodes remained operational and implemented the ADMB algorithm (at an arbitrarily slow rate).
\begin{theorem}\label{thm:asyncregret}
Suppose the gradient-based update rule has the serial regret bound $\regretbound(\varp^2, m)$, and that for any $\varp^2$, $\frac{1}{m}\regretbound(\varp^2,m)$ decreases monotonically in $m$.
Let $m$ be the number of examples handled during a sequence of non-overlapping good time periods. Then the expected regret with respect to these examples is at most
\[
\sum_{j=1}^{\ceil{m/\mu}}\frac{\mu}{j}\regretbound\left(\frac{\varp^2}{b},j\right),
\]
where $\mu=b+2(t+2)d'M$.
Specifically, if $\regretbound(\varp^2, m)=2D^2\smoothp+2D\varp\sqrt{m}$, then the expected regret bound is
\[
2D^2L(b+2(t+2)d'M)(1+\log(m))+
4D\sigma\sqrt{\left(1+\frac{2(t+2)d'M}{b}\right)m}
\]
\end{theorem}
When the batch size $b$ scales as $m^{\rho}$ for any $\rho\in (0,1/2)$, we get an asymptotic regret bound of the form $4D\sigma\sqrt{m}+o(\sqrt{m})$. The leading term is virtually the same as the leading term in the serial regret bound. The only difference is an additional factor of $2$, essentially due to the fact that we need to average the predictors obtained so far to make the analysis go through, rather than just using the last predictor.
\begin{proof}
Let us number the good time periods as $j=1,2,\ldots$, and let $\bar{w}_j$ be a predictor used by one of the nodes at the beginning of the $j$-th good time period. From \lemref{lem:predprop} and \thmref{thm:fastupdates}, we know that the predictors used by the nodes were updated at least once during each period. Thus, $\bar{w}_j$ is the average of $j'\geq j$ predictors $w_1,w_2,\ldots,w_{j'}$, where each $w_{p+1}$ was obtained from the previous $w_{p}$ using $b_p\geq b$ gradients, computed on examples which we shall denote as $z_{p,1},z_{p,2},\ldots,z_{p,b_p}$. Since $w_p$ is independent of these examples, we get
\[
\E\left[\frac{1}{b_p}\sum_{q=1}^{b_p}f(w_p,z_{p,q})-f(w^\star,z_{p,q})~\big| w_p\right] = \E[f(w_p,z)-f(w^\star,z)\big|w_p].
\]
Based on this observation and Jensen's inequality, we have
\begin{align}
&\E\left[f(\bar{w}_j,z)-f(w^\star,z)\right]\notag\\
&\leq \frac{1}{j'}\E\left[\sum_{p=1}^{j'}f(w_p,z)-f(w^\star,z)\right]\notag\\
&= \frac{1}{j'}\E\left[\sum_{p=1}^{j'}\frac{1}{b_p}
\sum_{q=1}^{b_p}f(w_p,z_{p,q})-f(w^\star,z_{p,q})\right].\label{eq:jen}
\end{align}
The online update rule was performed on the averaged gradients obtained from $z_{p,1},\ldots,z_{p,b_p}$. This average gradient is equal to the gradient of the function $\frac{1}{b_p}\sum_{q=1}^{b_p}f(w_p,z_{p,q})$. Moreover, the variance of this gradient is at most $\varp^2/b_p\leq \varp^2/b$. Using the regret guarantee, we can upper bound \eqref{eq:jen} by
\[
\frac{1}{j'}\regretbound\left(\frac{\varp^2}{b},j'\right).
\]
Since $j'\geq j$, and since we assumed in the theorem statement that the expression above is monotonically decreasing in $j'$, we can upper bound it by
\[
\frac{1}{j}\regretbound\left(\frac{\varp^2}{b},j\right).
\]
From this sequence of inequalities, we get that for \emph{any} example processed by one of the $k'$ nodes during the good time period $j$, it holds that
\begin{equation}\label{eq:epochregret}
\E\left[f(\bar{w}_j,z)-f(w^\star,z)\right]\leq \frac{1}{j}\regretbound\left(\frac{\varp^2}{b},j\right).
\end{equation}
Let $\mu=b+2(t+2)d'M$ be the number of examples processed during each good time period. Since $m$ examples are processed overall, the total regret over all these examples is at most
\begin{equation}\label{eq:regretfinal}
\sum_{j=1}^{\ceil{m/\mu}}\frac{\mu}{j}\regretbound\left(\frac{\varp^2}{b},j\right).
\end{equation}
To get the specific regret form when $\regretbound(\varp^2,m)=2D^2L+2D\sigma\sqrt{m}$, we substitute into \eqref{eq:regretfinal}, and substitute $\mu=b+2(t+2)d'M$ to get
\begin{align*}
&\sum_{j=1}^{\ceil{m/\mu}}\left(2D^2L\frac{\mu}{j}+\frac{2D\sigma\mu}{\sqrt{b}}
\frac{1}{\sqrt{j}}\right)\\
&\leq 2D^2L\mu(1+\log(m))+
\frac{4D\sigma\mu}{\sqrt{b}}\sqrt{\frac{m}{\mu}}\\
&= 2D^2L(b+2(t+2)d'M)(1+\log(m))+
4D\sigma\sqrt{\left(1+\frac{2(t+2)d'M}{b}\right)m}.
\end{align*}
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
Smooth and highly collinear data generally arises in applications where high frequency acquisition devices are employed, such as chemometrics, spectroscopy, and electrical engineering. A natural way of modeling these data-generating processes is by means of functional data analysis (FDA) \citep{book_fda,book_nonparafda}, where each covariate can be seen as a smooth function $x \in L^{2}(I)$ that has been evaluated at sequential timesteps, often with missing values and different spacing between the observations. Without loss of generality, we will focus on scalar-valued functions defined on $I=[0,1]$, but this approach can be extended to deal with vector valued functions defined on multidimensional domains. The two main aspects to be considered are the estimation of the underlying functional covariates from the raw data, and the estimation of the predictive model itself. Let $\mathcal{D}=\lbrace (x_i,y_i) \rbrace_{i=1}^{N}$ be the training set, with random i.i.d. functions $x_i \in L^{2}(I)$ and responses $y_i \in \mathbb{R}$. We focus on the scalar-on-function linear model:
\begin{equation*}
\label{eq:linearmodel}
y_{i} = \beta_{0} + \int_{I} x_i(t)\beta(t)dt + \epsilon_{i}
\end{equation*}
\noindent where $\beta_{0} \in \mathbb{R}$ is the intercept, $\beta$ is the coefficient function and $\epsilon_{i} \sim \mathcal{N}(0,\sigma^{2})$ are random i.i.d. errors. In fact, the penalties that we analyze can also be employed in a generalized linear model (GLM) framework, and we will study classification problems with functional logistic regression as well. Regarding the first aspect, consider the functional covariate $x$ as a finite expansion $x(t)= \sum_{j=1}^{J} \xi_{j} \psi_{j}(t)$ with suitable basis functions $\psi_{j}$ and coefficients $\xi_{j}$. A common approach is to recover each functional sample $x_i$ individually, by means of interpolating or smoothing splines, depending on the amount of noise. If the raw data contains a large number of missing observations, this approach fails, as some of the functional covariates may have been observed on just a few points of the domain. This issue can be solved by leveraging the information from the whole dataset, estimating the basis and the coefficients of the expansion by means of functional principal components (fPCA) with local smoothing \citep{fda_sparselongdata} or mixed effects \citep{fda_smoothsplnested,fda_pcasparse}. Once the input functions have been recovered, depending on the approach that has been implemented, it is possible to either work directly with the coefficients of the expansion, or to evaluate the estimated functions on the same dense equispaced $p$-dimensional grid. Without loss of generality, in this work we opt for the grid approach and we recover the functions individually, but the methods that we propose are not directly tied to this choice, as long as the discretized functional samples have the same dimensionality. Regarding the second aspect, that of estimating the predictive model, the coefficient function $\beta$ is also expressed as a basis expansion, with some form of regularization as an identifiability constraint, given that the theoretical functional linear model is ill-posed. 
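As a small illustration of the grid approach (the sketch and all names are ours): once the inputs are evaluated on a dense equispaced grid of $p$ points, the functional term $\int_I x_i(t)\beta(t)\,dt$ of the linear model reduces to a Riemann sum, i.e.\ a scaled dot product of the $p$-dimensional evaluation vectors:

```python
import numpy as np

def functional_inner_product(x_vals, beta_vals):
    """Approximate int_0^1 x(t) beta(t) dt from p equispaced evaluations."""
    p = len(x_vals)
    return np.dot(x_vals, beta_vals) / p   # Riemann sum with cell width 1/p

# Example: x = beta = sin(2 pi t), whose squared L2 norm on [0,1] is 1/2.
p = 1000
t = (np.arange(1, p + 1) - 0.5) / p        # midpoints of the p grid cells
approx = functional_inner_product(np.sin(2 * np.pi * t), np.sin(2 * np.pi * t))
```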
A parsimonious approach is to restrict the number of basis functions, by either fixing a known suitable basis, such as a Fourier basis, or by considering only the first $K$ eigenfunctions of the covariance operator obtained from fPCA \citep{fda_pred}, which for any given $K$ explains most of the variation of the input functions in the $L^{2}$ sense. On the opposite end of the spectrum, another approach is to employ a rich enough basis while at the same time including some form of penalization, typically an $L_{2}$ penalty on $\beta$ or its derivatives in order to impose smoothness \citep{fda_splineflm,fda_splineerrors,fda_smoothsplines,fda_rkhs}, but $L_{1}$-based penalties have also been used \citep{fda_L1reg}. Note that the restricted basis and the penalization approaches are not mutually exclusive, and hybrid techniques have been proposed as well \citep{psplinesgenreg,fda_flirti,fda_sparseest}. In our setting we choose to adopt the penalization approach, by using the following simple grid basis with $p$ dense and equispaced knots placed on the $p$ points of the evaluation grid of the estimated input functions:
\begin{equation*}
\beta(t) = \sum_{j=1}^{p} \beta_{j} b_{j}(t) \hspace{1.5cm}
b_{j}(t) =
\begin{cases}
1 & \hspace{0.1cm} \text{if} \hspace{0.2cm} \frac{j-1}{p} < t \leq \frac{j}{p} \\
0 & \hspace{0.1cm} \text{otherwise}
\end{cases}
\end{equation*}
\noindent which is a common solution that enables us to use any multivariate method for the numerical estimation, allowing for a proper comparison between different approaches, as the initial FDA preprocessing is shared between all the tested methods. The main objective of this work is to propose an adaptive penalization approach that is able to fit smooth and sparse coefficient functions \citep{chapter_fdasparsity}, ideally being able to recover the regions of the domain in which the covariates have no effect on the response, while at the same time allowing for a smooth behaviour where needed. Given the abundance of $p \gg N$ applications with different requirements, it is no surprise that the literature on variable selection in linear models has grown significantly in both the statistical and machine learning communities. What appears to be the most successful framework is based on the well known penalized least squares formulation (in the multivariate notation), and in particular the bridge estimator \citep{bridge}:
\begin{equation*}
\label{eq:bridge}
\underset{\scaleto{\beta \in \mathbb{R}^{p}}{6pt}}{\scaleto{min}{7pt}} \:\:\: \sum_{i=1}^{N} \left( y_{i} - x_{i}^{\top} \beta \right)^{2} + \lambda \sum_{j=1}^{p} |\beta_{j}|^{\gamma}
\end{equation*}
\noindent where $\gamma>0$ and $\lambda>0$ controls the strength of the penalization (we omit the intercept). It is known that for $\gamma<1$ this yields a non-convex optimization problem, where in particular for $\gamma \rightarrow 0$ the bridge reduces to best subset selection \citep{asymplasso}. Besides the computational issues, subset selection methods are also known to be unstable \citep{heuristicsinstability}, and for this reason we will focus only on penalty-based approaches, although we are aware of the various stepwise algorithms. Moreover, given that the penalties are not scale-invariant, we will assume that the input data has been standardized. When $\gamma \geq 1$ the problem is instead convex, but we pay the price of unwanted shrinkage of the coefficients, which introduces bias. For $\gamma=1$ in particular we obtain the lasso \citep{lasso}, which is a convex relaxation of best subset selection, while $\gamma=2$ corresponds to ridge regression \citep{ridge}. Regarding our specific setting, which deals with high dimensional and highly collinear data, it is not clear which approach to adopt: the lasso may exclude important variables from the model and produce nonsmooth coefficient functions, the ridge may yield both nonsparse and nonsmooth ones, while the usual FDA roughness penalty may be too smooth and fail to recover any sharp change in the support of the coefficient function $\beta$. A possible solution is to impose hybrid penalties, as in the case of the elastic net \citep{elasticnet} or the smooth lasso \citep{smoothlasso}. Our proposed approach is instead exclusively based on the nonzero centered $L_{2}$ penalty \citep{ridgeprior,ridgefusion,genridgeinvcov,targetedridge}, and it is also inspired by the adaptive ridge estimator and other reweighted bias reduction techniques, as we will discuss in Section 2. Section 3 describes our method in detail, the applications are shown in Section 4, with concluding remarks in Section 5.
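To fix ideas, the bridge penalty family is easy to state in code (a trivial sketch of our own; $\gamma=1$ gives the lasso penalty and $\gamma=2$ the ridge penalty):

```python
import numpy as np

def bridge_penalty(beta, lam, gamma):
    """Bridge penalty lam * sum_j |beta_j|^gamma; gamma=1 is lasso, gamma=2 is ridge."""
    return lam * np.sum(np.abs(beta) ** gamma)

beta = np.array([0.5, -2.0, 0.0])
lasso_pen = bridge_penalty(beta, 1.0, 1)   # |0.5| + |-2| + |0| = 2.5
ridge_pen = bridge_penalty(beta, 1.0, 2)   # 0.25 + 4 + 0 = 4.25
```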
\section{Related Work}
As previously introduced, a significant issue that follows from the convex formulations of the bridge estimator ($\gamma \geq 1$) is that in order to perform variable selection, we inevitably end up with unwanted shrinkage of the ``true'' coefficients. This is even worse for $\gamma>1$, where the amount of shrinkage increases with the magnitude of the coefficient being estimated \citep{asymplasso}. Moreover, it is known that, except when the OLS coefficients are exactly zero, the ridge is not able to yield sparse solutions, although the coefficients can get arbitrarily small for larger values of $\lambda$. It follows that when estimating a sparse model with ridge regression, there could be the need to employ some form of manual thresholding, setting to zero the smaller coefficients while at the same time accepting the overshrinkage of the larger ones, with a tradeoff between $\lambda$ and the threshold. In practice, the lasso is usually the preferred choice when sparsity is sought after, as it is able to set coefficients to exactly zero by acting as a soft thresholding operator. However, an issue with the lasso is the fact that the optimal $\lambda$ with respect to prediction gives inconsistent results from the point of view of variable selection \citep{varsellassograph}, and for functional data in particular, the irrepresentable condition \citep{lassoirrepr} is likely to be violated, given that the curves often have high autocorrelation and the response may depend only on a subset of the domain. The main motivation behind our work is the idea of reducing bias by coefficient-wise adaptive tuning of the penalization. While ordinary ridge regression corresponds to a sphere centered at the origin, which shrinks all the coefficients uniformly towards zero, \cite{ridge} already introduced a generalized form of ridge regression, which allows each coefficient to be shrunk individually, resulting in an ellipsoid. 
In the usual penalized least squares formulation, the generalized ridge can be expressed as:
\begin{equation*}
\label{eq:genridge}
\underset{\scaleto{\beta \in \mathbb{R}^{p}}{6pt}}{\scaleto{min}{7pt}} \:\:\: \sum_{i=1}^{N} \left( y_{i} - x_{i}^{\top} \beta \right)^{2} + \sum_{j=1}^{p} \lambda_{j}\beta_{j}^{2}
\end{equation*}
\noindent where the parameters $\lambda_{j}>0$ control the amount of shrinkage on the corresponding coefficients $\beta_{j}$. This type of penalty is also known as the adaptive ridge estimator, and regardless of the loss function, it has been shown to be equivalent to the lasso, in the sense that they recover the same solution \citep{chapter_lassoridge,chapter_outcomeslassoridge}. In principle one would like to optimize with respect to both $\beta \in \mathbb{R}^{p}$ and $\lambda \in \mathbb{R}^{p}$, but globally the problem is nonconvex, and the adaptive ridge estimator uses an EM approach that is guaranteed to converge to a local optimum. Instead of alternating optimization, other methods are based on iterative refinements of an initial solution $\tilde{\beta} \in \mathbb{R}^{p}$, and we will refer to such methods as two-stage or multi-stage approaches. One of the first is the non-negative garrote (NNG) \citep{nngarrote}, which is closely related to the EM adaptive ridge and has the following formulation:
\begin{equation*}
\label{eq:nngarrote}
\underset{\scaleto{c \in \mathbb{R}^{p}}{6pt}}{\scaleto{min}{7pt}} \:\:\: \sum_{i=1}^{N} \left( y_{i} - \sum_{j=1}^{p} c_{j} x_{ij} \tilde{\beta}_{j} \right)^{2} + \lambda \sum_{j=1}^{p} c_{j}
\end{equation*}
\begin{center}
$ s.t. \:\:\:\:\:\:
c_{j}\geq 0
$
\end{center}
\noindent where $\lambda >0$ and the fitted coefficients are recovered as $\hat{\beta}_{j} = \hat{c}_{j} \tilde{\beta}_{j}$. The original NNG was initialized with the OLS solution $\tilde{\beta}=\beta^{ols}$, but later works experimented with alternative initial estimators, such as the ridge, the lasso, and the elastic net for high dimensional scenarios \citep{onthenngarrote}. On a side note, the NNG was also the inspiration for the original lasso paper \citep{book_sls}. A generalization of the NNG (without the sign constraint) is the adaptive lasso \citep{adalasso}, which assumes a known weight vector $w \in \mathbb{R}^{p}$ and solves:
\begin{equation*}
\label{eq:adalasso}
\underset{\scaleto{\beta \in \mathbb{R}^{p}}{6pt}}{\scaleto{min}{7pt}} \:\:\: \sum_{i=1}^{N} \left( y_{i} - x_{i}^{\top} \beta \right)^{2} + \lambda \sum_{j=1}^{p} w_{j} |\beta_{j}|
\end{equation*}
\noindent with \( w_{j}= 1/|\beta_{j}^{ols}|^{\gamma} \), $\gamma>0$ and $\lambda >0$ selected by cross-validation. This is also a two-stage approach and the final coefficients $\hat{\beta}_{j}$ can be computed by setting $\tilde{x}_{ij}=x_{ij}/w_{j}$, solving a lasso problem with $\tilde{x}_{i}$ as input data, and finally recovering the coefficients as $\hat{\beta}_{j}=\tilde{\beta}_{j}/w_{j}$, with $\tilde{\beta}$ the solution of the previous lasso problem. As for the NNG, the initial estimator is not restricted to the OLS and the ridge is suggested in case of collinearity. Another reweighted estimator is the broken adaptive ridge (BAR) \citep{bar}, which is a multi-stage approach that starts from a ridge penalized solution $\hat{\beta}^{0}$ and at each iteration refines the previous one $\hat{\beta}^{k} = g ( \hat{\beta}^{k-1} )$, with $\hat{\beta}^{*} = \lim_{k \to \infty} \hat{\beta}^{k}$ and
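The two-stage recipe above can be sketched end to end; for self-containment we solve the inner lasso with a plain coordinate descent of our own (all names, the small stabilizing \texttt{eps}, and the toy data are our assumptions, not part of the original method):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for min_b ||y - X b||^2 + lam * sum_j |b_j|."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    r = y.astype(float).copy()                 # residual y - X b
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]                # remove coordinate j's contribution
            z = X[:, j] @ r
            b[j] = np.sign(z) * max(abs(z) - lam / 2.0, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]
    return b

def adaptive_lasso(X, y, lam, beta_init, gamma=1.0, eps=1e-8):
    """Adaptive lasso via the rescaling trick described in the text."""
    w = 1.0 / (np.abs(beta_init) ** gamma + eps)  # eps guards zero divisions
    X_tilde = X / w                               # x~_ij = x_ij / w_j
    beta_tilde = lasso_cd(X_tilde, y, lam)        # lasso on rescaled inputs
    return beta_tilde / w                         # beta_hat_j = beta~_j / w_j

# Toy data: the second covariate is irrelevant and should be zeroed out.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
beta_true = np.array([3.0, 0.0, -2.0])
y = X @ beta_true
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_hat = adaptive_lasso(X, y, lam=1.0, beta_init=beta_ols)
```

The heavy weight on the near-zero OLS coefficient makes its rescaled column tiny, so the inner lasso drops it, while the relevant coefficients suffer only mild shrinkage.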
\begin{equation*}
\label{eq:bar}
g ( \tilde{\beta} ) = \underset{\scaleto{\beta \in \mathbb{R}^{p}}{6pt}}{\scaleto{arg \:\, min}{10pt}} \:\:\: \sum_{i=1}^{N} \left( y_{i} - x_{i}^{\top} \beta \right)^{2} + \lambda \sum_{j=1}^{p} \beta_{j}^{2}/\tilde{\beta}_{j}^{2}
\end{equation*}
\noindent The subsequent iterations share the same $\lambda$, which is fixed starting from $k=1$ and not tuned individually at each step. Instead, the initial solution $\hat{\beta}^{0}$ is not necessarily obtained with the same $\lambda$ and could be further tuned, although empirically the BAR estimator was found to be insensitive to the initial value. All the methods that we have discussed share the common idea of using multiplicative weights in order to reduce bias, but in fact this is not the only viable approach. Nonconcave penalties like the SCAD \citep{scad} and the MCP \citep{mcp} are both based on quadratic splines with singularities at the origin, giving rise to nonconvex optimization problems that depend on different parameters and are often regarded as unstable, although more refined optimization algorithms have been proposed \citep{onestepsparse,nonconvexpenoptalgo}. Our approach is instead based on a convex formulation, but it is worth considering both the (elastic) SCAD and MCP for comparison purposes. Finally, yet another option to reduce bias is the one adopted by the relaxed lasso \citep{relaxedlasso}, which separates the variable selection aspect from the coefficient estimation one by first fitting a standard lasso model, followed by a second lasso including only the covariates that correspond to the nonzero coefficients, with a relaxation parameter $\phi \in \left( 0,1 \right]$ in order to reduce unwanted shrinkage. Both problems share the same fixed $\lambda$, and therefore this can be done pathwise, unlike the adaptive lasso, where the initial estimator has already been optimized with respect to $\lambda$, and then $\lambda$ is tuned again for the reweighted problem. In particular, let $\hat{\beta}^{\lambda}$ be the lasso solution for a fixed $\lambda$ and let $\mathcal{A} _{\lambda} = \{ 1\leq j \leq p \: | \: \hat{\beta}^{\lambda}_{j} \neq 0 \}$; the relaxed lasso solution $\hat{\beta}^{\lambda,\phi}$ is then obtained by solving:
\begin{equation*}
\label{eq:relaxo}
\underset{\scaleto{\beta \in \mathbb{R}^{p}}{6pt}}{\scaleto{min}{7pt}} \:\:\: \sum_{i=1}^{N} \left( y_{i} - x_{i}^{\top} \{ \beta \mathds{1}_{\mathcal{A} _{\lambda}} \} \right)^{2} + \phi \lambda \sum_{j=1}^{p} |\beta_{j}|
\end{equation*}
\begin{equation*}
\{ \beta \mathds{1}_{\mathcal{A} _{\lambda}} \}_{j} =
\begin{cases}
\beta_{j} & \: \text{if} \:\:\: j \in \mathcal{A} _{\lambda} \\
0 & \: \text{otherwise}
\end{cases}
\end{equation*}
\noindent Our approach in a way is built on a similar relaxation scheme, but instead of performing variable selection and parameter estimation sequentially, we do it jointly and without directly removing any covariate from the initial model, by employing an adaptive weight function that acts on the center of the penalty. Like the ordinary ridge, our penalty is spherical, but is based on the nonzero centered ridge \citep{ridgeprior}:
\begin{equation*}
\label{eq:nonzerocenteredridge}
\underset{\scaleto{\beta \in \mathbb{R}^{p}}{6pt}}{\scaleto{min}{7pt}} \:\:\: \sum_{i=1}^{N} \left( y_{i} - x_{i}^{\top} \beta \right)^{2} + \lambda \sum_{j=1}^{p} (\beta_{j} - c_{j})^{2}
\end{equation*}
\noindent where the center of the sphere $c \in \mathbb{R}^{p}$ is provided by the user and $\lambda>0$ is selected by cross-validation. For $\lambda$ and $c$ fixed, let $X \in \mathbb{R}^{N \times p}$ be the design matrix and $Y \in \mathbb{R}^{N}$ the response vector, the solution can be computed in closed form as:
\begin{equation*}
\hat{\beta}^{\lambda,c} = (X^{\top}X +\lambda I)^{-1}(X^{\top}Y +\lambda c)
\end{equation*}
\noindent Moreover, the expected value of this estimator is:
\begin{equation*}
\mathbb{E}_{Y|X} [ \hat{\beta}^{\lambda,c} ] = (X^{\top}X +\lambda I)^{-1}(X^{\top}X \beta +\lambda c)
\end{equation*}
\noindent and therefore it is unbiased for $c=\beta$, meaning that the true value of the parameter is used as the center of the penalty. Clearly there would be no need to fit any model if $\beta$ were already known, but this suggests that the bias will be low if the fixed $c$ is a good approximation of the true regression coefficient, and we propose to find $c$ by adaptively reweighting the ridge solution.
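Both the closed-form solution and its behaviour at $c=\beta$ are easy to verify numerically; on noiseless synthetic data (an illustrative setup, not one of the paper's experiments), centering the penalty at the true $\beta$ recovers it exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 80, 10
X = rng.standard_normal((N, p))
beta = rng.standard_normal(p)
y = X @ beta                                  # noiseless responses
lam = 5.0

def centered_ridge(X, y, lam, c):
    """Closed-form solution of the nonzero centered ridge."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y + lam * c)

b_zero = centered_ridge(X, y, lam, np.zeros(p))   # ordinary ridge: shrinks towards 0
b_true = centered_ridge(X, y, lam, beta)          # centered at the true beta
```

With $c=\beta$ and $Y=X\beta$, the estimator reduces to $(X^{\top}X+\lambda I)^{-1}(X^{\top}X+\lambda I)\beta = \beta$, while the zero-centered solution is strictly shrunk.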
\section{Smoothly Adaptively Centered Ridge}
We have already discussed some of the similarities between our approach and other known methods, with specific attention to the generalized/adaptive ridge and the nonzero centered ridge. The main downside of the adaptive ridge is that optimizing with respect to the shrinkage parameters yields a nonconvex problem, while for the nonzero centered ridge we need to specify the center of the penalty. Our focus is on the smooth $p \gg N$ setting and in particular we will refer to the FDA terminology. We propose a convex formulation that allows for adaptive tuning of the type of shrinkage that is imposed on each region of the domain of the coefficient function $\beta$. Instead of employing a variable shrinkage parameter function as in the adaptive ridge, we uniformly shrink $\beta$ as in the ordinary ridge, while at the same time jointly optimizing the center of the penalty. Let $\tilde{\beta}^{\lambda}$ be the solution of the ordinary ridge for a fixed $\lambda$; we introduce a smooth weight function $w:I \rightarrow \mathbb{R}^{+}$ that acts on $\tilde{\beta}^{\lambda}$ and fit our estimator by solving the following convex problem with linear constraints:
\begin{equation}
\label{eq:sacr}
\begin{aligned}
\underset{\scaleto{\beta_{0},\beta,w}{6pt}}{\scaleto{min}{7pt}} \:\:\: \sum_{i=1}^{N} \left[ y_{i} -\beta_{0} - \int_{I}x_{i}(t)\beta(t)dt \right]^{2} + \lambda \phi \int_{I} \big[ \beta(t) -w(t)\tilde{\beta}^{\lambda}(t) \big]^{2}dt \\
+ \: \lambda(1-\phi) \int_{I} \big[ D^{2}w(t)\tilde{\beta}^{\lambda}(t)\big]^{2}dt \\
\end{aligned}
\end{equation}
\begin{center}
$ s.t. \:\:\:\:\:\:
\begin{cases}
\int_{I} w(t)dt=|I| \\
w(t)\geq 0
\end{cases}
$
\end{center}
\noindent where $\beta_{0} \in \mathbb{R}$ is the intercept, $\lambda>0$ is the shrinkage parameter, and $\phi \in \left( 0,1 \right]$ controls the balance between the two penalty terms. Both $\lambda$ and $\phi$ are selected by cross-validation and the $\lambda$ used in Problem \ref{eq:sacr} is the same as the one used for computing $\tilde{\beta}^{\lambda}$. The first term of the penalty is a nonzero centered ridge that shrinks $\beta$ uniformly towards $w\tilde{\beta}^{\lambda}$, while the second term is a roughness penalty on the center of the previous one. The weight function $w$ can be seen as an adaptive density which either contracts or dilates the initial center $\tilde{\beta}^{\lambda}$, allowing the nonzero centered penalty to selectively shrink $\beta$ towards zero in the regions of the domain that are not correlated with the response, while reducing the unwanted shrinkage in the informative regions. This sparsity inducing behaviour is motivated by the constraints imposed on $w$, which necessarily lead to a tradeoff between inflating and deflating $\tilde{\beta}^{\lambda}$, as proven in Proposition \ref{eq:tradeoff}.
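A minimal discretized version of Problem \ref{eq:sacr} can be solved with an off-the-shelf constrained quadratic optimizer; the sketch below uses SciPy's interior-point \texttt{trust-constr} method as a stand-in for IPOPT, with illustrative data, grid and hyperparameters (not the settings used in the experiments):

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint, Bounds

rng = np.random.default_rng(2)
N, p = 40, 30
t = np.linspace(0.0, 1.0, p)
dt = 1.0 / p
X = rng.standard_normal((N, p))               # discretized input curves x_i(t_j)
beta_true = np.exp(-((t - 0.3) ** 2) / 0.01)  # smooth, localized coefficient function
Xd = X * dt                                   # quadrature weights for the integral
y = Xd @ beta_true + 0.1 * rng.standard_normal(N)

lam, phi = 1.0, 0.5                           # illustrative hyperparameters

# initial center: ordinary ridge solution for the same lambda
beta_ridge = np.linalg.solve(Xd.T @ Xd + lam * dt * np.eye(p), Xd.T @ y)

# second-order finite-difference operator D^2 (interior rows only)
D2 = (np.diag(np.full(p, -2.0)) + np.diag(np.ones(p - 1), 1)
      + np.diag(np.ones(p - 1), -1))[1:-1] / dt**2

def objective(z):
    b0, beta, w = z[0], z[1:p + 1], z[p + 1:]
    center = w * beta_ridge
    resid = y - b0 - Xd @ beta
    return (resid @ resid
            + lam * phi * dt * np.sum((beta - center) ** 2)
            + lam * (1 - phi) * dt * np.sum((D2 @ center) ** 2))

# constraints: dt * sum(w) = |I| = 1 and w >= 0 (beta_0 and beta are free)
A = np.concatenate([np.zeros(p + 1), np.full(p, dt)])
lin = LinearConstraint(A, 1.0, 1.0)
bounds = Bounds(np.concatenate([np.full(p + 1, -np.inf), np.zeros(p)]),
                np.full(2 * p + 1, np.inf))

z0 = np.concatenate([[0.0], beta_ridge, np.ones(p)])  # feasible start: w = 1
res = minimize(objective, z0, method="trust-constr",
               constraints=[lin], bounds=bounds)
w_hat = res.x[p + 1:]
```

The starting point $w \equiv 1$ is feasible, and the optimizer then trades off inflating and deflating the center while keeping the weight nonnegative and integrating to $|I|$.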
\begin{proposition}
\label{eq:tradeoff}
Let $I \subset \mathbb{R}$ be a closed interval and let $w:I \rightarrow \mathbb{R}^{+}$ be a smooth function such that $\int_{I} w(t)dt=|I|$. Consider the closed subintervals $I_{i} \subset I$ such that $\mu(I_{i}) \neq 0$ and $I_{i} \cap I_{j} = \emptyset$ for $i\neq j$. Partition $I$ into disjoint intervals $I_{<}$, $I_{>}$ and $I_{=}$ such that $I = I_{>} \cup I_{<} \cup I_{=}$ where:
\begin{equation*}
\begin{aligned}
I_{>} &= \bigcup_{i} I_{i}: \hspace{0.1cm} w(t)>1 \hspace{0.2cm} \forall t \in I_{i} \\
I_{<} &= \bigcup_{i} I_{i}: \hspace{0.1cm} w(t)<1 \hspace{0.2cm} \forall t \in I_{i} \\
I_{=} &= \bigcup_{i} I_{i}: \hspace{0.1cm} w(t)=1 \hspace{0.2cm} \forall t \in I_{i}
\end{aligned}
\end{equation*}
then $|I_{>}| \neq 0 \iff |I_{<}| \neq 0$.
\begin{proof}
From the first mean value theorem for integrals it follows that:
\begin{equation*}
\begin{aligned}
\int_{I_{>}} w(t)dt &= w(c_{>})|I_{>}|, \hspace{0.5cm} c_{>} \in I_{>}^{o}, \hspace{0.5cm} w(c_{>}) >1 \\
\int_{I_{<}} w(t)dt &= w(c_{<})|I_{<}|, \hspace{0.5cm} c_{<} \in I_{<}^{o}, \hspace{0.5cm} w(c_{<}) <1 \\
\int_{I_{=}} w(t)dt &= w(c_{=})|I_{=}|, \hspace{0.5cm} c_{=} \in I_{=}^{o}, \hspace{0.5cm} w(c_{=}) =1 \\
\end{aligned}
\end{equation*}
\noindent therefore, from $I = I_{>} \cup I_{<} \cup I_{=}$ it follows that:
\begin{equation*}
\begin{aligned}
\int_{I} w(t)dt &= \int_{I_{>}} w(t)dt + \int_{I_{<}} w(t)dt + \int_{I_{=}} w(t)dt \\
|I| &= w(c_{>})|I_{>}| \hspace{0.1cm} + \hspace{0.1cm} w(c_{<})|I_{<}| + |I_{=}| \\
|I| &= w(c_{>})|I_{>}| \hspace{0.1cm} + \hspace{0.1cm} w(c_{<})|I_{<}| + |I| - |I_{>}| - |I_{<}| \\
0 &= |I_{>}|[w(c_{>})-1] \hspace{0.1cm} + \hspace{0.1cm} |I_{<}|[w(c_{<})-1] \\
\end{aligned}
\end{equation*}
\noindent by construction we know that $[w(c_{>})-1] > 0$ and $[w(c_{<})-1] < 0$, proving that $I_{>}$ and $I_{<}$ are either both null sets or both non-null sets.
\end{proof}
\end{proposition}
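The trade-off can also be illustrated numerically: any weight that integrates to $|I|$ while exceeding one on some subinterval must fall below one elsewhere. The specific $w$ below is an arbitrary smooth example satisfying the constraints:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
w = 1.0 + 0.5 * np.sin(2 * np.pi * t)     # smooth, nonnegative weight on I = [0, 1]

# trapezoidal approximation of the integral of w over I (should equal |I| = 1)
integral = np.sum((w[1:] + w[:-1]) / 2) * dt

above = np.mean(w > 1)                    # fraction of the domain where w > 1
below = np.mean(w < 1)                    # fraction of the domain where w < 1
```

Here the inflating region $\{w>1\}$ and the deflating region $\{w<1\}$ are both non-null, exactly as the proposition requires.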
It follows that when the true $\beta$ is provided as $\tilde{\beta}^{\lambda}$, there is no need to either inflate or deflate the center of the penalty, and therefore the optimal $w$ is 1 almost everywhere, leading to an unbiased estimator as it is equal to the nonzero centered ridge with $c=w\tilde{\beta}^{\lambda}=\beta$. In practice there is no guarantee that solving Problem \ref{eq:sacr} with the optimal center will lead to uniform unitary weights, as $w$ is penalized and jointly estimated with $\beta$ from the data. From the geometrical perspective, the weight function acts as an anisotropic scaling on $\tilde{\beta}^{\lambda}$, which has the effect of adaptively moving the center of the penalty. This has the advantage of nonuniform shrinkage between the coefficients, as in the adaptive ridge with its ellipsoidal penalty, while at the same time keeping the tractable convex formulation of the spherical penalty, since $\lambda$ is a scalar that is selected by cross-validation. Therefore, the adaptive shrinkage of the coefficient function is the result of an adaptive target, and not of an adaptive shrinkage intensity. Our approach can be described as a continuous way of doing variable selection, which is executed jointly with the estimation of the regression coefficients. As the weight function is not included in the model and is never used for prediction, we are not adding further parameters to the model itself, although we are doubling the parameters to be estimated. Regarding the two terms of the penalty, it is worth noting that since the roughness penalty is imposed on the center of the first term, the coefficient function is only indirectly penalized with respect to its roughness, by being pushed towards an adaptively scaled center. 
Imposing the roughness penalty on the center itself, instead of on the weight function only, ensures the option of retrieving a smooth centerfunction, with $\phi$ controlling the amount of smoothness, and not just a smooth scaling of $\tilde{\beta}^{\lambda}$. The choice of the ridge solution as the initial center is quite natural, as it is stable and almost always not exactly zero. This latter property of the $L_{2}$ penalty is often regarded as a problem or at least an inconvenience, but in our case it is instead welcome, as the weight function is multiplicative and would not be able to inflate an initial zero coefficient. It follows that using a sparse coefficient function as the initial center amounts to excluding multiple variables from the model, which is not a problem if the initial zero coefficients should indeed be zero, but is also not necessary to produce sparse or at least interpretable solutions, as the variable shrinkage induced by the weight function should already push the coefficients of the unwanted variables to zero. In practice, higher values of $\lambda$ will correspond to higher shrinkage of the coefficient function towards the centerfunction, where the selected $\lambda$ depends on how suitable the centerfunction is. Therefore, for a sparse and adequate centerfunction, the selected $\lambda$ will be high and the fitted coefficient function can get arbitrarily close to zero where needed, without the tradeoff of the ordinary ridge, where we pay the price of unwanted shrinkage on the nonzero coefficients.
Until now we only considered the context of regression, but in fact our approach can be generalized to the GLM framework as follows:
\begin{equation*}
\label{eq:gensacr}
\begin{aligned}
\underset{\scaleto{\beta_{0},\beta,w}{6pt}}{\scaleto{min}{7pt}} \:\:\: J(\beta_{0},\beta,x,y) + Pen_{\lambda \phi}(\beta,w)
\end{aligned}
\end{equation*}
\begin{center}
$ s.t. \:\:\:\:\:\:
\begin{cases}
\int_{I} w(t)dt=|I| \\
w(t)\geq 0
\end{cases}
$
\end{center}
\noindent where $J$ can be any convex loss function, as in the case of functional logistic regression, and it is independent of the weight function.
With respect to the numerical optimization, as the proposed formulation is quadratic with linear equality and inequality constraints, we opted for interior point methods, which are a class of optimization algorithms that are often regarded as state of the art for these types of problems \citep{iposurvey}. In particular, we employ the solver IPOPT \citep{ipopt}, which is based on a primal-dual interior point algorithm with filter line search. The worst-case number of iterations is $O(\sqrt{n})$, where $n$ is the number of variables, although interior point methods usually converge in a few steps. At each iteration the dominating cost is $O(n^{3})$ for applying Newton's method in order to solve a system of equations, and therefore the overall worst-case computational cost is $O(n^{3.5})$. In our specific scenario we have $n=2p$, as our formulation doubles the number of parameters to be estimated.
\section{Applications}
In this section we provide some empirical results on the performance of our method (SACR) in the context of FDA with $p \gg N$. In particular, we show a simulation study and two real world applications, one for classification and one for regression. We compare SACR with multiple penalized methods that are known for inducing sparsity and/or smoothness, like the lasso, adaptive lasso, relaxed lasso, NNG, ridge, BAR, elastic net, elastic SCAD, elastic MCP and the roughness penalty. The base implementations for the lasso, ridge and elastic net are the ones from \textit{scikit-learn} \citep{scikit}, while for the adaptive lasso, relaxed lasso, NNG and BAR we implemented our own wrappers based on those. The elastic SCAD and elastic MCP are available in the
CRAN package \textit{ncvreg} \citep{package_ncvreg}, while we also use our own Python implementation of the roughness penalized functional linear model. We interface with the solver IPOPT by modeling the optimization problems with Pyomo \citep{book_pyomo}.
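As an illustration of such wrappers, an adaptive lasso can be built on top of \textit{scikit-learn} by rescaling the columns with weights obtained from an initial ridge fit; the weight exponent, penalty values and data below are illustrative assumptions, not the exact settings used in our experiments:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

def adaptive_lasso(X, y, alpha=0.1, alpha_init=1.0, gamma=1.0):
    """Two-step adaptive lasso: ridge initial fit, then reweighted lasso."""
    init = Ridge(alpha=alpha_init).fit(X, y).coef_
    w = 1.0 / (np.abs(init) ** gamma + 1e-8)   # adaptive weights
    fit = Lasso(alpha=alpha).fit(X / w, y)     # rescaling columns absorbs the weights
    return fit.coef_ / w                       # map back to the original scale

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 40))
beta = np.zeros(40)
beta[:4] = 3.0
y = X @ beta + rng.standard_normal(100)
coef = adaptive_lasso(X, y)
```

The substitution $\beta_{j} = \theta_{j}/w_{j}$ turns the weighted $L_{1}$ penalty into a standard one on $\theta$, which is why a plain lasso on the rescaled design suffices.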
\subsection{Simulation Study}
This is a simulated regression problem where the true coefficient function $\beta$ is sparse and smooth, in order to show how the adaptive centering is able to jointly shrink towards a mixed target. In particular, we simulate the input curves with the same B-spline model but with two different configurations of dependency between the coefficients, resulting in two separate simulations. The shared base model is a cubic B-spline with inner knots equispaced between $[-0.5,1.5]$, while the spline coefficients are sampled from a multivariate normal for each of the $N=50$ observations, with either a diagonal covariance matrix and 35 inner knots as an edge case, or high positive correlation and 50 inner knots in order to simulate a standard FDA setting. The $N$ input functions $x_{i}$ are then evaluated on the same equispaced grid of length $p=150$ over $I=[0,1]$, and the responses are computed as $y_{i} = \int_{I} x_i(t)\beta(t)dt + \epsilon_{i}$ with $\epsilon_{i} \sim \mathcal{N}(0,1)$. Figure \ref{fig:data_simulation} shows the input curves for both simulations and the true coefficient function, while Figures \ref{fig:allbetas_ind} and \ref{fig:allbetas_dep} show the fitted coefficient functions for all methods tested. Regarding the strictly $L_{2}$-based methods, the ridge, the roughness penalty and the SACR provide similar solutions, with varying degrees of smoothness as expected. The SACR in fact is reminiscent of a warped version of the ridge, with a much smoother behaviour in the highly collinear simulation, which is very close to the solution of the roughness penalized model. In Figures \ref{fig:sacrcomparison_ind} and \ref{fig:sacrcomparison_dep} we show the comparison between the initial center, the true coefficient function and the fitted SACR function, together with the corresponding fitted weight function.
In particular, while our method is not able to recover exactly zero values of the coefficient function (given the $L_{2}$ norm), the overall sparsity pattern is arguably recognizable and is confirmed by the shape of the weight function, which is above one where the initial solution should be inflated, while tapering towards zero in the regions that should be sparse. Note that a zero weight does not guarantee an exactly zero coefficient function $\beta$, since the weight acts on the center of the penalty and not on the coefficients themselves, with analogous considerations for very high values of the weight, as shown in the simulation with independent coefficients. The BAR estimator seems to have problems of instability, which could be a numerical issue of our implementation given its asymptotic definition, although the method is competitive in both the subsequent real world case studies. Regarding the pure sparsity inducing penalties like the lasso, the relaxed lasso, the adaptive lasso, and the NNG, there is no clear distinction between the two simulations, and in both cases the methods recover coefficient functions with the typical spikes on some of the variables, failing to recover the exact sparsity pattern and therefore excluding from the model many of the significant predictors. On the other hand, the thresholding effect of the $L_{1}$ penalty allows setting the coefficients of the unwanted variables exactly to zero. Finally, the hybrid penalties show a clearly different behaviour in the two simulations, where all three methods visibly leverage the ridge part of the penalty in the highly collinear simulation, including in the model all the correct predictors and many unwanted ones, while in the independent simulation only the elastic net leverages the ridge part, and instead both the elastic SCAD and elastic MCP recover very sparse solutions.
We report the regression results in Table \ref{tab:simulations}, obtained by 5-fold cross-validation with 3-fold cross-validation for grid-search hyperparameter selection. It is worth noting that in the simulation with independent coefficients, the adaptive lasso, the elastic SCAD and elastic MCP have lower mean-square error than the SACR, despite the fact that they leave out of the model many of the relevant predictors.
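The data-generating mechanism can be sketched as follows for the independent-coefficients configuration; the true $\beta$ below is a sparse and smooth stand-in for the one shown in Figure \ref{fig:data_simulation}, not the exact function used in the study:

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(4)
N, p, n_inner, deg = 50, 150, 35, 3

# cubic B-spline basis with equispaced inner knots on [-0.5, 1.5]
inner = np.linspace(-0.5, 1.5, n_inner)
knots = np.concatenate([[inner[0]] * deg, inner, [inner[-1]] * deg])
n_coef = len(knots) - deg - 1                 # number of spline coefficients

grid = np.linspace(0.0, 1.0, p)
dt = 1.0 / p

# independent spline coefficients (diagonal covariance) for each observation
C = rng.standard_normal((N, n_coef))
X = np.array([BSpline(knots, c, deg)(grid) for c in C])

# sparse and smooth stand-in for the true coefficient function
beta = np.where((grid > 0.2) & (grid < 0.5),
                np.sin(np.pi * (grid - 0.2) / 0.3), 0.0)

# responses: quadrature approximation of the integral plus standard normal noise
y = (X * dt) @ beta + rng.standard_normal(N)
```

The dependent configuration only differs in the covariance matrix of the spline coefficients (high positive correlation) and in the number of inner knots.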
\begin{figure}[H]
\centering
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/ind/x.pdf}
\caption{$x_{i}$ - independent}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/dep/x.pdf}
\caption{$x_{i}$ - dependent}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/true_beta.pdf}
\caption{true $\beta$}
\end{subfigure}
\caption{Simulated data with independent and highly dependent spline coefficients, true $\beta$}
\label{fig:data_simulation}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/ind/lasso.pdf}
\caption{lasso}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/ind/adalasso.pdf}
\caption{adaptive lasso}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/ind/relaxo.pdf}
\caption{relaxed lasso}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/ind/NNG.pdf}
\caption{NNG}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/ind/BAR.pdf}
\caption{BAR}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/ind/elasticnet.pdf}
\caption{elastic net}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/ind/SCAD.pdf}
\caption{elastic SCAD}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/ind/MCP.pdf}
\caption{elastic MCP}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/ind/ridge.pdf}
\caption{ridge}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/ind/roughness.pdf}
\caption{roughness}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/ind/SACR.pdf}
\caption{SACR}
\end{subfigure}
\caption{Independent spline coefficients: fitted coefficient functions $\hat{\beta}$ by penalty type}
\label{fig:allbetas_ind}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}[c]{0.45\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/ind/comparison.pdf}
\caption{true $\beta$ - initial $\tilde{\beta}^{\lambda}$ - fitted $\hat{\beta}$}
\end{subfigure}
\begin{subfigure}[c]{0.45\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/ind/w.pdf}
\caption{$w$}
\end{subfigure}
\caption{Independent spline coefficients: comparison between true $\beta$, initial centerfunction $\tilde{\beta}^{\lambda}$ and fitted SACR estimator $\hat{\beta}$, with corresponding weight function $w$}
\label{fig:sacrcomparison_ind}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/dep/lasso.pdf}
\caption{lasso}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/dep/adalasso.pdf}
\caption{adaptive lasso}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/dep/relaxo.pdf}
\caption{relaxed lasso}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/dep/NNG.pdf}
\caption{NNG}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/dep/BAR.pdf}
\caption{BAR}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/dep/elasticnet.pdf}
\caption{elastic net}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/dep/SCAD.pdf}
\caption{elastic SCAD}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/dep/MCP.pdf}
\caption{elastic MCP}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/dep/ridge.pdf}
\caption{ridge}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/dep/roughness.pdf}
\caption{roughness}
\end{subfigure}
\begin{subfigure}[c]{0.22\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/dep/SACR.pdf}
\caption{SACR}
\end{subfigure}
\caption{Dependent spline coefficients: fitted coefficient functions $\hat{\beta}$ by penalty type}
\label{fig:allbetas_dep}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}[c]{0.45\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/dep/comparison.pdf}
\caption{true $\beta$ - initial $\tilde{\beta}^{\lambda}$ - fitted $\hat{\beta}$}
\end{subfigure}
\begin{subfigure}[c]{0.45\textwidth}
\includegraphics[width=\textwidth]{figs/sacr_sim/dep/w.pdf}
\caption{$w$}
\end{subfigure}
\caption{Dependent spline coefficients: comparison between true $\beta$, initial centerfunction $\tilde{\beta}^{\lambda}$ and fitted SACR estimator $\hat{\beta}$, with corresponding weight function $w$}
\label{fig:sacrcomparison_dep}
\end{figure}
\begin{table}[H]
\centering
\caption{\label{tab:simulations} Regression results for both simulations: mean-square error}
\scalebox{1}{
\begin{tabular}{l D{,}{\, \pm \,}{-1} D{,}{\, \pm \,}{-1}}
\toprule
\midrule
& \multicolumn{1}{c}{independent} & \multicolumn{1}{c}{dependent} \\
\midrule
lasso & 1.807,1.03 & 2.011,1.2 \\
adaptive lasso & 1.479,.067 & 2.078,1.4 \\
relaxed lasso & 1.807,1.03 & 1.982,1.2 \\
NNG & 1.571,1.01 & 2.171,1.3 \\
BAR & 5.669,2.60 & 2.538,1.1 \\
elastic net & 1.708,.548 & 1.949,1.2 \\
elastic SCAD & 1.263,.347 & 2.414,1.2 \\
elastic MCP & 1.291,.407 & 2.407,1.1 \\
ridge & 2.408,.458 & 1.948,1.2 \\
roughness & 1.540,.391 & 1.894,1.3 \\
SACR & 1.521,.308 & 1.795,1.1 \\
\midrule
\bottomrule
\end{tabular}}
\end{table}
\subsection{IDRC 2018}
For regression we present a spectroscopy application that was originally proposed for the on-site competition of the 2018 International Diffuse Reflectance Conference. The data is already smooth and is available at \url{https://www.cnirs.org/content.aspx?page_id=22&club_id=409746&module_id=276203}, with $N=150$ and $p=635$. The response variable has values over the whole dataset of 27.7 $\pm$ 1 ($\mu\pm\sigma$), with no information about the nature of the data. Figure \ref{fig:idrc2018} shows the input curves with the corresponding (rescaled) coefficient function for each of the methods that we tested, where the dashed line indicates the zero of the coefficient function. The results are reported in Table \ref{tab:idrc2018} and are obtained by three random repetitions of 5-fold cross-validation, with 3-fold cross-validation for grid search hyperparameter selection. In this application, the SACR and the ridge both achieve comparable mean-square error (mse) scores, but with a clear difference in the fitted coefficient functions. In fact, while the ridge recovers a noisy solution, the SACR is able to recover a sparse coefficient function which is easier to interpret. Overall, most methods tend to include the same variables in the model, with the lasso variants and the elastic net showing a noisy behaviour similar to the ridge, which may be due to the high collinearity. The roughness penalty instead recovers a smooth but oscillating coefficient function, despite achieving a similar score to the BAR estimator, which is sparse and slightly noisy. Finally, the NNG, the elastic SCAD and elastic MCP all recover very sparse solutions as expected.
\begin{figure}[H]
\centering
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/idrc2018/lasso.pdf}
\caption{lasso}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/idrc2018/adalasso.pdf}
\caption{adaptive lasso}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/idrc2018/relaxo.pdf}
\caption{relaxed lasso}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/idrc2018/elasticnet.pdf}
\caption{elastic net}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/idrc2018/SCAD.pdf}
\caption{elastic SCAD}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/idrc2018/MCP.pdf}
\caption{elastic MCP}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/idrc2018/NNG.pdf}
\caption{NNG}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/idrc2018/BAR.pdf}
\caption{BAR}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/idrc2018/ridge.pdf}
\caption{ridge}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/idrc2018/roughness.pdf}
\caption{roughness}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/idrc2018/SACR.pdf}
\caption{SACR}
\end{subfigure}
\caption{IDRC 2018: comparison of the fitted coefficient functions $\hat{\beta}$ scaled with respect to the spectra, the black dashed line represents the zero level for $\hat{\beta}$}
\label{fig:idrc2018}
\end{figure}
\begin{table}
\caption{IDRC 2018: regression results, mean-square error}
\label{tab:idrc2018}
\begin {center}
\begin{tabular}{l D{,}{\, \pm \,}{-1}}
\toprule
\midrule
lasso & .1029,.019 \\
adaptive lasso & .0926,.018 \\
relaxed lasso & .0951,.017 \\
NNG & .0750,.015 \\
BAR & .0717,.015 \\
elastic net & .1048,.019 \\
elastic SCAD & .1559,.061 \\
elastic MCP & .1415,.072 \\
ridge & .0695,.013 \\
roughness & .0711,.016 \\
SACR & .0691,.014 \\
\midrule
\bottomrule
\end{tabular}
\end {center}
\end{table}
\newpage
\subsection{Wine}
This spectroscopy application is a binary classification problem in which we want to discriminate between two different wine types. The data is available at \url{http://www.timeseriesclassification.com/description.php?Dataset=Wine} and is already smoothed with $N=111$ and $p=234$, although there is no additional information about the acquisition process. The results are reported in Table \ref{tab:wine} and are obtained by three random repetitions of 5-fold cross-validation, with additional 3-fold cross-validation for grid search hyperparameter selection. While there is no clear visual distinction between the spectra of the two classes, this problem is not exceptionally hard and it is well suited for a linear model. In particular, the roughness penalty has the second lowest accuracy, suggesting that a very smooth coefficient function is not appropriate. In fact, the SACR provides the highest accuracy without leveraging the smoothing term of the penalty, as the resulting coefficient function is sparse and with multiple spikes, similar to what is usually obtained with $L_1$-based methods, as shown in Figure \ref{fig:wine}. Given the relatively high sample size, the lasso is also able to include many variables in the model, while the adaptive lasso and the relaxed lasso gradually produce sparser solutions. It is interesting to note that the elastic net instead yields a coefficient function that is almost identical to the one obtained with the ridge, while the elastic SCAD and elastic MCP do not seem to leverage the ridge part of the penalty. Despite that, both approaches include different variables in the model and their resulting accuracy is lower than the other sparse methods, which may be related to the known difficulties of optimizing nonconcave penalties.
The NNG has the second highest accuracy and the fitted coefficient function is in fact very similar to the one resulting from the SACR, further suggesting that a sparse solution is indeed adequate for this application, which is also confirmed by the BAR estimator, which for the most part recovers the same variables.
\newpage
\begin{figure}[H]
\centering
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/wine/lasso.pdf}
\caption{lasso}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/wine/adalasso.pdf}
\caption{adaptive lasso}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/wine/relaxo.pdf}
\caption{relaxed lasso}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/wine/elasticnet.pdf}
\caption{elastic net}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/wine/SCAD.pdf}
\caption{elastic SCAD}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/wine/MCP.pdf}
\caption{elastic MCP}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/wine/NNG.pdf}
\caption{NNG}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/wine/BAR.pdf}
\caption{BAR}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/wine/ridge.pdf}
\caption{ridge}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/wine/roughness.pdf}
\caption{roughness}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[width=\textwidth]{figs/wine/SACR.pdf}
\caption{SACR}
\end{subfigure}
\caption{Wine: comparison of the fitted coefficient functions $\hat{\beta}$ scaled with respect to the spectra, the black dashed line represents the zero level for $\hat{\beta}$}
\label{fig:wine}
\end{figure}
\begin{table}
\caption{Wine: classification results, accuracy (\%)}
\label{tab:wine}
\begin {center}
\begin{tabular}{l D{,}{\, \pm \,}{-1}}
\toprule
\midrule
lasso & 94.64,5.9 \\
adaptive lasso & 96.77,4.6 \\
relaxed lasso & 95.88,5.0 \\
NNG & 97.95,3.1 \\
BAR & 96.82,3.3 \\
elastic net & 94.34,5.9 \\
elastic SCAD & 93.35,3.4 \\
elastic MCP & 89.49,6.7 \\
ridge & 95.25,5.5 \\
roughness & 90.28,8.4 \\
SACR & 99.44,1.1 \\
\midrule
\bottomrule
\end{tabular}
\end {center}
\end{table}
\newpage
\section{Conclusions}
In the context of high dimensional linear models, the ordinary ridge penalty is widely known to shrink the coefficients uniformly towards zero, resulting in stable solutions at the price of introducing some bias. In order to reduce unwanted shrinkage on a subset of the coefficients, generalized and adaptive ridge estimators introduce coefficient-wise penalty parameters that allow for a non-uniform regularization effect, with the downside that tuning such parameters is a nonconvex problem. The nonzero centered ridge instead allows for a convex formulation that uniformly shrinks the coefficients towards a specific target, which in turn has to be specified by the user. In this work we have provided a convex formulation that leverages the nonzero centered ridge and allows for variable shrinkage of the coefficient function along its domain, mitigating the downside of uniform shrinkage towards zero, without the need to specify a center for the penalty, as it is learned from the data in a supervised way. In particular, we introduced a constrained weight function that is jointly estimated while fitting the model and acts as a scaling transformation on the initial centerfunction, which is the ordinary ridge solution. We referred to our method as smoothly adaptively centered ridge (SACR), since the centerfunction is adaptively scaled with respect to the loss and is further penalized for its roughness, as is common in the functional data setting. Regarding the computational aspect, our approach doubles the number of variables to be estimated but not the ones introduced in the model, and for the numerical optimization we resorted to known primal-dual interior point methods with line search. Finally, we provided some empirical evidence with a simulation study, and two real world spectroscopy applications for both classification and regression.
\section*{Acknowledgments}
Edoardo Belli was financially supported by the ABB-Politecnico di Milano Joint Research Center through the PhD scholarship \textit{"Development and prototyping of distributed control systems for electric networks based on advanced statistical models for the analysis of complex data"}.
\clearpage
\bibliographystyle{elsarticle-harv}
\section{Introduction}\label{sec:Introduction}
The field of topological systems in photonics \cite{Top-photonics}, exciton-polaritons \cite{Exciton-Polariton} and Bose-Einstein Condensates (BEC) \cite{BEC} has brought to the fore the interplay between non-linear effects such as solitons \cite{Nonlinear-Waves}, \cite{Nonlinear-optics} and linear topological phases of matter \cite{TKNN}, \cite{Kane-Mele-2}, \cite{Avron-Seiler-Simon}. An interesting question arises when we couple a non-linear medium to a periodic potential \cite{Weinstein-Physics}, particularly when the linear system is topologically non-trivial, e.g. quantum Hall effect (QHE), SQHE and topological insulators (TI) \cite{Top-photonics}. Are systems which combine both still topological? And if so, which topological invariants classify these new non-linear systems? Though there are already many theoretical and experimental results on the stability of solitons, non-linear Bloch waves and their interactions as well as resulting macroscopic properties of materials \cite{Nonlinear-top-photonics}, \cite{Top-BandGap-Solitons}, \cite{Exciton-Polariton}, there are not many proposals for topological invariants classifying these systems (see \cite{Nonlinear-Berry}, \cite{Nonlinear-Dirac-Cones} for some results in this direction). For weakly non-linear systems, the Non-linear Schrödinger/Gross-Pitaevskii equation (NLS/GP) \cite{Nonlinear-optics}, \cite{Nonlinear-top-photonics}, \cite{Weinstein-Book} and variations of these are good approximations to low-energy behaviour. Here we shall consider weakly non-linear $d$-dimensional systems with a periodic potential \cite{Weinstein-Physics} and magnetic field described by
\begin{equation}\label{eq:NLS/GP}
i\partial_{z}\Psi(\vec{x},z) = [-(i\nabla - \vec{A}(\vec{x}))^2 + V(\vec{x}) - f(\vec{x},|\Psi|^2)]\Psi(\vec{x},z),
\end{equation}
where $z$ can be either time (BEC) or the distance along the direction of propagation (Optics, here $z\geq 0$). We also assume that $\vec{A}(\vec{x}+\vec{a}) = \vec{A}(\vec{x});\,\,V(\vec{x}+\vec{a}) = V(\vec{x})$ for all $\vec{a} \in \Lambda$, where $\Lambda$ is a $d$-dimensional lattice. Note that in general this periodicity assumption does not hold for the magnetic potential $\vec{A}$, in which case disorder should be included \cite{Prodan-Schulz-Baldes}. We denote the linear part by $\mathcal{H}_{l} = -(i\nabla - \vec{A})^2 + V$. We can simultaneously include systems with a boundary in our discussion by splitting $\vec{x}= (x_{\perp},\vec{x}_{||})$ and making $V(\vec{x}) = A(\vec{x})= 0$ for $ \,x_{\perp} \leq 0$ and periodic only in the $\vec{x}_{||}$-direction with respect to $\Lambda_{||}$, a $(d-1)$-dimensional lattice parallel to the boundary. We would like $\mathcal{H}_{l}$ to correspond to a linear topological system (phase) with a gap \cite{Avron-Seiler-Simon} or gapped bulk condition (systems with boundary) \cite{AASS} under adiabatic evolution \cite{Avron-Adiabatic} at a fixed energy scale $E_{gap}$, as there is no analogue of the Fermi energy $E_{F}$ for our non-linear systems, since they are generally bosonic \cite{Top-photonics}, \cite{DeNittis-Maxwell}.
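As a purely illustrative numerical sketch (our own, not drawn from the references above), the 1D version of eq (\ref{eq:NLS/GP}) with $\vec{A}=0$, a periodic $V$ and a Kerr non-linearity $f=|\Psi|^2$ can be integrated with a standard split-step Fourier scheme; the grid, potential and initial beam below are arbitrary choices:

```python
import numpy as np

# Strang split-step integrator for  i d/dz Psi = [-d^2/dx^2 + V(x) - |Psi|^2] Psi,
# the 1D NLS/GP equation with A = 0 and Kerr non-linearity (illustrative only).
L, n, dz = 40.0, 512, 1e-3
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # Fourier wavenumbers
V = 0.5 * np.cos(2 * np.pi * x)                 # periodic lattice potential
psi = np.exp(-x**2).astype(complex)             # initial beam profile

P0 = np.sum(np.abs(psi) ** 2) * (L / n)         # optical power / particle number
for _ in range(2000):                           # propagate to z = 2
    psi *= np.exp(-1j * (dz / 2) * (V - np.abs(psi) ** 2))        # half potential+nonlinear step
    psi = np.fft.ifft(np.exp(-1j * dz * k**2) * np.fft.fft(psi))  # full kinetic step
    psi *= np.exp(-1j * (dz / 2) * (V - np.abs(psi) ** 2))        # half potential+nonlinear step

P1 = np.sum(np.abs(psi) ** 2) * (L / n)
print(abs(P1 - P0) < 1e-8)                      # P is conserved by the z-flow
```

Since every sub-step is a pointwise phase or a unitary Fourier multiplier, the discrete flow conserves $P$ essentially to machine precision, mirroring the conservation law invoked later for the Vakhitov--Kolokolov condition.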
$\mathit{K}$-theory \cite{Hatcher-K-theory} and cohomology \cite{Hatcher-Alg-Top} have by now been used in different fields of physics, for example $\mathit{K}$-theory was used in \cite{Kane-Mele-2}, \cite{Horava-Fermi} and cohomology in \cite{Wen-PRL}, and \cite{Kapustin-PRL}, among many others. For those readers who are not familiar with algebraic topology, we present a brief description of what it entails. Given a topological space $X$, cohomology and $\mathit{K}$-theory assign abelian groups $H^n(X;\mathbb{Z})$ and $\tilde{\mathit{K}}^{0}(X)$ to it, hence the name algebraic topology. A homotopy between two maps from a topological space $X$ to $Y$ is simply a continuous family of maps parametrized by $s$ in $[0,1]$ such that at 0 and 1 it coincides with the original ones. Homotopies are the backbone of algebraic topology. Said differently, everything in algebraic topology (and topological phases) has an expression in terms of homotopy theory, e.g. cohomology, $\mathit{K}$-theory and their equivariant versions \cite{Freed-Moore} are defined using homotopy theory, the all-encompassing framework. In particular, the groups assigned to $X$, $H^{n}(X;\mathbb{Z})$ and $\tilde{\mathit{K}}^{0}(X)$ can be constructed out of maps (up to homotopy) from $X$ to another space called the classifying space, which is independent of $X$, but does depend on whether it is $H^n(X;\mathbb{Z})$ or $\tilde{\mathit{K}}^{0}(X)$. Here, as it is done in the physics community, we interpret adiabatic evolution as a homotopy \cite{Avron-Adiabatic} and show how these classifying spaces arise from physical and stability conditions. For our purposes, equivariant topology can be thought of in the following way: If we have symmetries represented by a group $P$, we only want to consider maps that preserve them and see which groups we get with this restriction. We represent these as $H^{n}_{P}(X;\mathbb{Z})$ and $\mathit{K}_{P}^{0}(X)$ respectively. With this in mind let us proceed.
For fully periodic systems with a gap, there is a coarse classification \footnote{Coarse means we do not care about adding trivial valence bands to our systems.} where a phase $[\mathcal{H}_{l}-E_{gap} I] \in \tilde{\mathit{K}}^{0}(\mathbb{T}^{d})$, the $\mathit{K}$-theory group constructed out of vector bundles over the Brillouin torus $\mathbb{T}^{d}$ arising from the Bloch bands below the Fermi energy \cite{Kitaev}, \cite{Freed-Moore}. Meanwhile, for systems with a boundary, $\mathit{K}$-theory arises naturally and $[\mathcal{H}_{l}-E_{gap}I]\in \mathit{K}^{-1}(\mathbb{T}^{d-1})$, where $\mathbb{T}^{d-1}$ is the surface Brillouin torus \cite{AASS}. Ideally we would imitate the linear classification for systems with $f(\vec{x},|\Psi|^2)$ by defining a non-linear gap condition and a notion of non-linear adiabatic evolution \cite{Nonlinear-Adiabatic}. The problem is that so far there is no analogue of a gap condition for non-linear systems \cite{Nonlinear-top-photonics}. We will consider the simplified problem of classifying the topological behaviour of modes around soliton solutions, stationary solutions of the form $\Psi(\vec{x},z) = e^{-i\lambda z}\Phi^{\lambda}(\vec{x})$ to eq (\ref{eq:NLS/GP}), which decay exponentially as we go to spatial infinity.
But can all solitons have topological modes? And even if some do, could these be destroyed by the non-linearity? If a soliton is unstable it will eventually disappear, and its modes together with it. Positive (ground state) solitons have two types of instabilities. One is a \textit{focusing} instability \cite{Weinstein-Physics}, where, without its energy increasing, the soliton focuses towards a single point, yielding an arbitrarily high density which blows up. The other is a \textit{drift} instability, where, via asymmetric distortions, infinitesimal displacements of its original position make it drift towards infinity \cite{Weinstein-Physics}. What happens for non-positive solitons, so-called gap solitons, i.e. those which live between the spectral bands of the linear problem? These suffer from a different set of instabilities \cite{Pelinovsky-Oscillatory}. One of these is an \textit{oscillatory} instability, where, among other mechanisms, a vibration mode appears and resonates with radiation, causing the soliton to oscillate at higher and higher frequencies \cite{Pelinovsky-Oscillatory-Numerical}. We would like our solitons to avoid such instabilities; therefore, we shall further impose some stability conditions \cite{Weinstein-Physics}, \cite{Pelinovsky-Oscillatory}. However, how compatible are these conditions with those necessary for topological modes? Do these conditions have a topological interpretation? We shall elucidate their topological character and their relation to the other topological restrictions in what follows.
\section{Linearization and stability conditions}\label{sec:Linearization}
We linearize eq (\ref{eq:NLS/GP}) around $\Phi^{\lambda}$ \cite{Weinstein-Zhou}, \cite{Weinstein-Physics} which yields
\begin{equation}\label{eq:LNLS}
\partial_{z}\vec{\chi} = \mathcal{L}(\lambda)\vec{\chi},
\end{equation}
where
\begin{equation} \label{eq:bigL}
\mathcal{L}(\lambda) =
\begin{pmatrix}
0 & L_{-}(\lambda)\\
-L_{+}(\lambda) & 0
\end{pmatrix},
\end{equation}
with self-adjoint operators
\begin{eqnarray}\label{eq:L-L+}
L_{-}(\lambda) &=& \mathcal{H}_{l} -\lambda I -f(\vec{x},|\Phi^{\lambda}|^2) ,\\
L_{+}(\lambda) &=& \mathcal{H}_{l} -\lambda I -f(\vec{x},|\Phi^{\lambda}|^2) - 2df|_{\Phi^{\lambda}}. \nonumber
\end{eqnarray}
Our linearized problem has a clear analogue of a gap condition for the mode operators $L_{\pm}(\lambda)$. We note that $\mathcal{H}_{l} -\lambda I$ has its spectrum shifted and hence the scale at which the soliton satisfies a \textit{gapped modes} condition is at
\begin{eqnarray}\label{eq:mode-gap}
E_{modes}(\lambda) &=& E_{gap}-\lambda, \\
0 &<&\lambda < E_{gap}.
\end{eqnarray}
Thus, the first constraint we put on solitons to have topological modes is $\lambda < E_{gap}$.
We also have extra potentials determined by $f(\vec{x},|\Phi^{\lambda}|^2)$ and $f(\vec{x},|\Phi^{\lambda}|^2) -2df|_{\Phi^{\lambda}}$, which we name the \textit{soliton potentials}. For the soliton potentials not to destroy the topological character of $\mathcal{H}_{l}- \lambda I - E_{modes}(\lambda)$, we need them to behave as a perturbation/impurity/defect that does not break the gapped modes condition
\begin{eqnarray}\label{eq:Soliton-potential}
|f(\vec{x},|\Phi^{\lambda}|^2)| &\ll& E_{modes}(\lambda),\\
|f(\vec{x},|\Phi^{\lambda}|^2)-2df|_{\Phi^{\lambda}}| &\ll& E_{modes}(\lambda).\label{eq:Soliton-potential2}
\end{eqnarray}
These constrain both the non-linearities and the width of solitons with topological modes. Just as an example, for power non-linearities $f(|\Psi|^2) = |\Psi|^{p-1},\,\,p>1$, solitons which have a peak of the order $E_{modes}(\lambda)^{\frac{1}{p-1} }$ will destroy the gapped modes condition, and this becomes easier as $\lambda$ increases, since $E_{modes}(\lambda)$ shrinks. For systems with a boundary the constraint is only necessary for solitons that are surface-localized. Here arises our first connection with instabilities. Positive (ground-state) solitons that are focusing unstable \cite{Weinstein-Physics}, that is, solitons whose modulus blows up, will quickly break the gapped modes condition. This means that our solitons should satisfy the Vakhitov-Kolokolov stability condition
\begin{equation}\label{eq:VK}
\frac{dP}{d\lambda}< 0,
\end{equation}
where $P = \int |\Psi|^2d\vec{x}$ is the particle number (BEC) or optical power, which is conserved. Let us now consider solitons that are drift stable, i.e. those which stay put under small displacements of their initial position. These have to satisfy the spectral condition
\begin{equation}\label{eq:SpectralCondition}
n_{-}(L_{+}(\lambda)) = 1,
\end{equation}
where $n_{-}(L_{+}(\lambda))$ is the number of negative eigenvalues. Note that for positive solitons, these conditions are necessary and sufficient for full stability \cite{Weinstein-Physics}. The latter condition can be interpreted as topological if we further note that from the positivity of $\mathcal{H}_{l}$, there is a restriction on the continuous spectrum $\sigma_{c}(L_{+}(\lambda))$ (scattering modes) to be positive. Modes associated to $L_{+}(\lambda)$ live in a Hilbert space $\mathfrak{H}^{2}(\mathbb{R}^{d},\mathbb{C})$ and the above means there is a natural split
\begin{equation}
\mathfrak{H}^{2}(\mathbb{R}^{d},\mathbb{C}) =\mathfrak{H}_{-1}(\lambda)\oplus \mathfrak{H}_{\geq 0}(\lambda).
\end{equation}
The set of all such $1$-dimensional subspaces $\mathfrak{H}_{-1}(\lambda)$ of $\mathfrak{H}^{2}(\mathbb{R}^{d},\mathbb{C})$ forms a topological space known as the infinite dimensional Grassmannian $Gr_{1}(\mathfrak{H}^{2}(\mathbb{R}^{d},\mathbb{C}))$ \cite{Grassmannians}, which we denote $Gr_{1}$ for brevity. The space $Gr_{1}$ is a classifying space for the second cohomology group $H^{2}$ \cite{Grassmannians}, \cite{Hatcher-Alg-Top}. This means that (up to homotopy) maps to $Gr_1$ are used to construct the cohomology group $H^2$. Note that the gapped modes condition never entered into our discussion of drift stability. These two conditions are independent, as drift stability is about what happens below $\sigma_{c}(L_{+}(\lambda))$, while the gapped modes condition is about what happens in between (the same holds for the gapped bulk-modes condition). Thus, we can view linearization around the soliton as a map
\begin{equation}
\Phi^{\lambda} \mapsto \bigg\{\begin{pmatrix}
0 & P_{\geq0}(\lambda)L_{-}(\lambda)\\
-P_{\geq 0}(\lambda)L_{+}(\lambda) & 0
\end{pmatrix}, P_{-1}(\lambda)\bigg\},
\end{equation}
where $P_{-1}(\lambda)$ and $P_{\geq 0}(\lambda)$ are projections to $\mathfrak{H}_{-1}(\lambda)$ and $\mathfrak{H}_{\geq 0}(\lambda)$. Because of conditions (\ref{eq:Soliton-potential}), (\ref{eq:Soliton-potential2}), the first component can be seen to be equivalent, up to mode adiabatic evolution (for the definition see sec (\ref{sec:Mode})), to a Hamiltonian operator in $Gap(L^2(\mathbb{R}^{d}))$, the space of gapped $d$-dimensional single-particle periodic Hamiltonians. In \cite{Freed-Moore} it is shown, using Bloch's theorem and periodicity, that $Gap(L^2(\mathbb{R}^{d}))$ is coarsely equivalent (by adding trivial bands) to $Map(\mathbb{T}^d,BGL_{\infty})$, the space of continuous maps from the $d$-dimensional Brillouin torus to the classifying space $BGL_{\infty}$, which can be thought of as an ever increasing sequence of Grassmannians. Maps (up to homotopy) to $BGL_{\infty}$ give rise to the group $\tilde{\mathit{K}}^{0}$ \cite{Hatcher-K-theory}. Analogously, for systems with a boundary, we instead have $Gap_{Bulk}(L^2(\mathbb{R}^d))$, itself being equivalent to $Map(\mathbb{T}^{d-1},\mathcal{F}^{sa}_{*}(\mathfrak{H}))$ \cite{AASS}, where $\mathbb{T}^{d-1}$ is now the surface Brillouin torus and $\mathcal{F}^{sa}_{*}(\mathfrak{H})$ is a subspace of self-adjoint Fredholm operators \cite{Atiyah-Skew}. Maps to $\mathcal{F}^{sa}_{*}(\mathfrak{H})$ now give rise to the group $\mathit{K}^{-1}$ instead of $\tilde{\mathit{K}}^{0}$. The bulk-boundary correspondence can be seen by noticing that the $-1$ in the degree of the $\mathit{K}$-group compensates the $-1$ in the dimension of the surface Brillouin torus for $d=2$ \cite{Prodan-Schulz-Baldes}. The projection $P_{-1}(\lambda)$ in the second component represents a point in $Gr_{1}$, as discussed previously.
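Both stability inputs — the Vakhitov--Kolokolov slope condition and the spectral condition $n_{-}(L_{+}(\lambda)) = 1$ — can be checked numerically in the textbook free 1D cubic case ($\vec{A}=V=0$, $f=|\Psi|^2$), where, with the sign conventions of eq (\ref{eq:NLS/GP}), the soliton is $\Phi^{\lambda}(x)=\sqrt{-2\lambda}\,\mathrm{sech}(\sqrt{-\lambda}\,x)$ with $\lambda<0$. This is only a sanity check of the conditions themselves, outside the gapped periodic setting of the text:

```python
import numpy as np

# Free 1D cubic case:  lam*Phi = -Phi'' - Phi^3  has the soliton
# Phi_lam(x) = sqrt(-2*lam) * sech(sqrt(-lam) * x)  for lam < 0.
def soliton(x, lam):
    b = np.sqrt(-lam)
    return np.sqrt(2.0) * b / np.cosh(b * x)

x = np.linspace(-20.0, 20.0, 801)
h = x[1] - x[0]

# Vakhitov-Kolokolov: P(lam) = 4*sqrt(-lam) analytically, so dP/dlam < 0.
P = lambda lam: np.sum(soliton(x, lam) ** 2) * h
print(P(-0.5) < P(-1.0))           # P decreases as lam increases: True

# Spectral (drift) condition: L_+ = -d^2/dx^2 - lam - 3*Phi^2 has n_-(L_+) = 1.
lam = -1.0
Phi = soliton(x, lam)
n = len(x)
Lp = (np.diag(2.0 / h**2 - lam - 3.0 * Phi**2)      # finite-difference L_+
      + np.diag(np.full(n - 1, -1.0 / h**2), 1)
      + np.diag(np.full(n - 1, -1.0 / h**2), -1))
evals = np.linalg.eigvalsh(Lp)
print(int(np.sum(evals < -0.5)))   # one bound eigenvalue (near -3): 1
```

The translation zero mode of $L_{+}$ sits at $0$ and the continuous spectrum starts at $-\lambda$, so thresholding at $-0.5$ isolates the single negative eigenvalue.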
Regarding the scattering of an unstable soliton with a stable soliton, we can speculate that if the unstable soliton implodes, then as the two solitons get closer to each other, it will behave as a strong potential that breaks the gapped modes condition. If the soliton does not implode, however, the expectation is that the other instabilities do not affect these modes, as generally solitons go right through each other under collision. Further non-linear analysis is required. Nevertheless, for solitons sufficiently isolated, these modes should be topologically stable to other perturbations such as defects \cite{Exciton-Polariton}, \cite{Nonlinear-top-photonics}.
\section{Mode adiabatic evolution}\label{sec:Mode}
Consider now eq (\ref{eq:NLS/GP}) with potential, gap energy and non-linearity which are also $z$-dependent, but in such a way that the linearized evolution of the modes around the soliton is adiabatic \cite{Avron-Adiabatic}. We set $1/E_{gap}$ as the adiabatic scale and set $s = E_{gap} z$ to be the dimensionless variable that replaces $z$. We name this key concept \textit{mode adiabatic evolution}. We now use the homotopy interpretation \cite{Avron-Seiler-Simon}, \cite{Freed-Moore} for our mode adiabatic evolution. Two solitons $\Phi^{\lambda_0}(V_0,f_0),\,\Phi^{\lambda_1}(V_1,f_1)$ are in the same topological class if there exists an $s$-dependent family of soliton solutions $\Phi^{\lambda(s)}(V(s),f(s))$ such that
\begin{widetext}
\begin{eqnarray}\label{eq:Mode-Adiabatic}
[0,1] &\longrightarrow &Gap(L^{2}(\mathbb{R}^{d}))\times Gr_{1} \nonumber\\
s &\mapsto &
\bigg\{\begin{pmatrix}
0 & P_{\geq0}(\lambda(s))L_{-}(\lambda(s))\\
-P_{\geq 0}(\lambda(s))L_{+}(\lambda(s)) & 0
\end{pmatrix},P_{-1}(\lambda(s))\bigg\},\nonumber\\
\Phi^{\lambda(0)}(V(0),f(0)) & = &\Phi^{\lambda_0}(V_0,f_0) ,\nonumber\\
\Phi^{\lambda(1)}(V(1),f(1)) &=& \Phi^{\lambda_1}(V_1,f_1).
\end{eqnarray}
\end{widetext}
Thus, using this homotopy interpretation of mode adiabatic evolution, we can separate modes into topological classes. Employing the homotopy type of the spaces discussed above, we have that for periodic systems the set of distinct classes of topological modes around solitons is equivalent to the groups:
\begin{equation}\label{eq:local-periodic}
\tilde{\mathit{K}}^{0}(\mathbb{T}^{d})\oplus H^2(*;\mathbb{Z}),
\end{equation}
where $*$ denotes (from here on) a point, viewed as a topological space. For systems with a boundary we replace $Gap(L^{2}(\mathbb{R}^{d}))$ with $Gap_{Bulk}(L^2(\mathbb{R}^d))$ and, using the results of \cite{AASS}, we obtain $\mathit{K}^{-1}(\mathbb{T}^{d-1})\oplus H^2(*;\mathbb{Z})$ instead. We remark that many solitons of interest, such as those that are surface-localized, are often gap solitons \cite{Exciton-Polariton} and do not satisfy the spectral condition mentioned above. The topological interpretation of the drift stability condition might seem irrelevant since the group $H^2(*;\mathbb{Z})$ is trivial, but we shall see it yields new classes for systems with more symmetry. Let us reflect on what this result implies. If our solitons satisfy (\ref{eq:Soliton-potential}) and (\ref{eq:Soliton-potential2}), and we ignore issues of soliton stability (momentarily), we can approximate (up to mode adiabatic evolution) eq (\ref{eq:LNLS}) \cite{Weinstein-Zhou} by
\begin{equation}
\partial_z\chi \approx -i[\mathcal{H}_{l} -\lambda I]\chi\,;\,\, \chi = \chi_1 +i\chi_2,\,\, \begin{pmatrix}\chi_1\\\chi_2\end{pmatrix} = \vec{\chi},
\end{equation}
which is a linear Schr\"odinger equation for the mode $\chi$. This means that all the well-known invariants, such as the Chern number \cite{TKNN}, the spectral flow (for systems with boundary) \cite{AASS}, the $\mathbb{Z}_2$ index (if we add time-reversal symmetry) or mirror Chern number (when we add crystallographic symmetries), etc., have a physical analogue for the mode $\chi$, as discussed for photonic systems in \cite{Top-photonics}, if they exist for the original linear system $\mathcal{H}_{l}$. These invariants can be seen as generating the different $\mathit{K}$-groups which arise. Thus, $\mathit{K}$-groups handle in a single swoop all the different topological mode invariants, instead of discussing each one of them separately.
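As one concrete instance of such a mode invariant (our own illustrative example, not tied to any specific model in the references), the Chern number of a gapped two-band Bloch Hamiltonian can be computed with the standard Fukui--Hatsugai lattice algorithm; here we use the Qi--Wu--Zhang-type model $H(\vec{k}) = \sin k_x\,\sigma_x + \sin k_y\,\sigma_y + (m + \cos k_x + \cos k_y)\,\sigma_z$:

```python
import numpy as np

def chern_number(m, N=30):
    """Fukui-Hatsugai lattice Chern number of the lower band of
    H(k) = sin(kx) sx + sin(ky) sy + (m + cos kx + cos ky) sz."""
    ks = np.linspace(0, 2 * np.pi, N, endpoint=False)
    u = np.empty((N, N, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            M = m + np.cos(kx) + np.cos(ky)
            H = np.array([[M, np.sin(kx) - 1j * np.sin(ky)],
                          [np.sin(kx) + 1j * np.sin(ky), -M]])
            u[i, j] = np.linalg.eigh(H)[1][:, 0]   # lower-band eigenvector
    # Sum of Berry phases of plaquette link products (gauge invariant).
    F = 0.0
    for i in range(N):
        for j in range(N):
            U1 = np.vdot(u[i, j], u[(i + 1) % N, j])
            U2 = np.vdot(u[(i + 1) % N, j], u[(i + 1) % N, (j + 1) % N])
            U3 = np.vdot(u[(i + 1) % N, (j + 1) % N], u[i, (j + 1) % N])
            U4 = np.vdot(u[i, (j + 1) % N], u[i, j])
            F += np.angle(U1 * U2 * U3 * U4)
    return int(round(F / (2 * np.pi)))

print(chern_number(1.0))   # topological phase: |C| = 1
print(chern_number(3.0))   # trivial phase: C = 0
```

The plaquette products are gauge invariant, so the arbitrary eigenvector phases returned by the diagonalization drop out, and the sum is guaranteed to be an integer up to floating-point error.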
Let us now switch gears a bit and consider solitons which lie between the gaps of the linear periodic system \cite{Pelinovsky-Oscillatory}, \cite{Top-BandGap-Solitons}, \cite{Weinstein-Periodic}. These are not generally positive and the above spectral condition does not apply. Gap solitons may suffer an oscillatory instability \cite{Pelinovsky-Oscillatory-Numerical}, \cite{Pelinovsky-Oscillatory}, and the condition for gap solitons to be oscillatory stable can also be stated in topological terms. Let $L(\lambda) = \mathcal{H}_{l} -\lambda I$. The neutral (sometimes called internal) modes of $L_{+}(\lambda)$ (eigenvectors with positive eigenvalues below $\sigma_c (L_{+}(\lambda))$) must lie between the band gaps of the inverted operator $-L(\lambda)$. If they embed in the bands of $-L(\lambda)$, the eigenvalues of $\mathcal{L}(\lambda)$ start bifurcating into complex pairs \cite{Pelinovsky-Oscillatory}. If there are $m$ neutral modes between the inverted bands, then there is an $m$-dimensional vector space $\mathfrak{H}_{N}$ associated to these, which will not change dimension under mode adiabatic evolution, as long as it remains oscillatory stable. Thus, again, we have a natural splitting
\begin{equation}\label{eq:Neutral}
\mathfrak{H}^{2}(\mathbb{R}^{d},\mathbb{C}) = \mathfrak{H}_{N}(\lambda)\oplus \mathfrak{H}_{rest}(\lambda).
\end{equation}
Once again, the space of all possible $m$-dimensional subspaces $\mathfrak{H}_{N}(\lambda)$ of $\mathfrak{H}^{2}(\mathbb{R}^{d},\mathbb{C})$ is the $m$-dimensional Grassmannian $Gr_{m}(\mathfrak{H}^2(\mathbb{R}^{d},\mathbb{C}))$. We can again simplify matters by replacing the finite-dimensional Grassmannian with $BGL_{\infty}$ \cite{Hatcher-K-theory}. Repeating the same analysis as before we obtain the topological classes for oscillatory stable gap solitons, which are given by
\begin{equation}
\tilde{\mathit{K}}^{0}(\mathbb{T}^{d})\oplus \tilde{\mathit{K}}^{0}(*).
\end{equation}
We have not checked whether this condition generalizes to systems with a boundary; however, considering the appearance of topological surface gap solitons \cite{Exciton-Polariton} and further assuming there is no qualitative difference in their oscillatory behaviour (relative to their bulk analogues), we could suggest that for systems with a boundary, their gap soliton topological classes are given by $\mathit{K}^{-1}(\mathbb{T}^{d-1})\oplus \tilde{\mathit{K}}^{0}(*)$. Once again, the reformulation of oscillatory stability seems irrelevant, but let us see what happens when we add symmetries.
On a last note, surface gap solitons can suffer from other types of instabilities, such as decaying to small-amplitude linear waves. It was found in \cite{Leykam-surface} that solitons with topological edge modes are stable and propagate unidirectionally as in the linear case. It would be interesting to express these results in terms of our mode invariants.
\section{Crystallographic and Time-reversal symmetries}\label{sec:Crystal}
Consider systems which further have a crystallographic symmetry with point group $P \subset O(d)$ \cite{Fu-Crystalline}, \cite{Freed-Moore}. If we restrict to $P$-symmetric soliton solutions, their corresponding $L_{\pm}(\lambda)$ will be $P$-invariant. Further, if they satisfy all of the conditions discussed above and the mode adiabatic evolution respects this $P$-invariance, then the crystalline topological classes of modes around $P$-symmetric positive solitons are given by:
\begin{equation}
\bar{\mathit{K}}^{0,\tau}_{P}(\mathbb{T}^{d})\oplus H^{2}_{P}(*;\mathbb{Z}).
\end{equation}
The groups $\bar{\mathit{K}}^{0}_P$ and $H^2_P$ denote a twisted equivariant version of $\mathit{K}$-theory \footnote{By $\bar{\mathit{K}}^{0}_P(\mathbb{T}^{d})$ we mean the kernel of the map $\mathit{K}^{0}_P(\mathbb{T}^{d})\rightarrow \mathit{K}^{0}(*)$. This is because the coarser classification does not care about adding trivial bands and changing the dimension of our bundles.}, \cite{Freed-Moore} and equivariant cohomology \cite{Gomi-Twists}, \cite{Adem-Groupcoho}, respectively.
The interesting thing here is that $H^2_{P}(*;\mathbb{Z})$ is no longer trivial! Instead it is equivalent to $H^{2}(BP;\mathbb{Z})$, where $BP$ is an infinite dimensional space known as the classifying space of $P$ \cite{Hatcher-Alg-Top}. To have an example in mind, note that for $P = \mathbb{Z}_2$, $B\mathbb{Z}_2 \simeq \mathbb{R}P^{\infty}$, the infinite dimensional real projective space. Hence, the spectral condition becomes topologically non-trivial when we include more symmetries. For systems with a boundary we replace $\bar{\mathit{K}}^{0,\tau}_{P}(\mathbb{T}^{d})$ with $\mathit{K}^{-1,\tau}_{P}(\mathbb{T}^{d-1})$, where $P$ now denotes surface crystallographic symmetry and $d\geq 2$ \cite{DS-Thesis}. This easily extends to $P$-symmetric oscillatory stable solitons and yields the groups $\bar{\mathit{K}}^{0,\tau}_{P}(\mathbb{T}^{d})\oplus \bar{\mathit{K}}^{0}_{P}(*)$ and $\mathit{K}^{-1,\tau}_{P}(\mathbb{T}^{d-1})\oplus\bar{\mathit{K}}^{0}_{P}(*)$. Once again, $\bar{\mathit{K}}^{0}_{P}(*)$ is not trivial; it is equivalent to the well-known representation ring of $P$, $\bar{R}(P)$ \cite{Tom-Dieck-Compact}.
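For instance, for $P=\mathbb{Z}_2$ the standard computation of the integral cohomology of $\mathbb{R}P^{\infty}$, namely $H^{*}(\mathbb{R}P^{\infty};\mathbb{Z}) \cong \mathbb{Z}[x]/(2x)$ with $|x|=2$, gives the non-trivial group explicitly:

```latex
H^{2}_{\mathbb{Z}_2}(*;\mathbb{Z}) \;\cong\; H^{2}(B\mathbb{Z}_2;\mathbb{Z})
\;\cong\; H^{2}(\mathbb{R}P^{\infty};\mathbb{Z}) \;\cong\; \mathbb{Z}_2,
```

consistent with the $\mathbb{Z}_2$ summands appearing in Table \ref{Table1}.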
What is the physical interpretation of these classes? On the one hand for positive solitons, $\mathfrak{H}_{-1}$ is a direction of instability which has to be controlled \cite{Weinstein-Book}. For the perturbed solution to remain at most $\epsilon$-distance from $\Phi^{\lambda}$ at any value of $z$, the initial perturbation needs to be at a distance $\delta(\epsilon,\mathfrak{H}_{-1})$ from $\Phi^{\lambda}$. As we adiabatically evolve $\mathcal{L}(\lambda)(s)$, we would expect an $s$-dependence $\delta(\epsilon,s)$; however, the topological character of $\mathfrak{H}_{-1}$ and $P$-symmetry will mean that $\delta$ is $s$-independent. Thus, how close the initial perturbation has to be to our soliton depends only on the topological action of $P$ on $\mathfrak{H}_{-1}$ and the $\epsilon$ chosen. On the other hand for oscillatory stability, the neutral mode subspace $\mathfrak{H}_{N}(\lambda)$ is a $P$-representation. As discussed in \cite{Soffer-Weinstein-Scattering}, neutral modes are relevant for scattering. In particular, there is a mechanism in which non-linear excited bound states dissipate their energy towards radiation modes and the ground state. The dissipation coefficient $\mathit{\Gamma}$ is a function of the neutral modes ($\mathit{\Gamma} \neq 0$ is the non-linear analogue of Fermi's golden rule). As we mode adiabatically evolve our system, so will $\mathit{\Gamma}(s)$ vary; however, we should always be able to split it $\mathit{\Gamma}(s) = \gamma(\mathfrak{H}_{N}(\lambda))\mathit{\Gamma}_{*}(s)$, with $\gamma(\mathfrak{H}_{N}(\lambda))$ only depending on $\mathfrak{H}_{N}(\lambda)$ as a $P$-representation. Furthermore, we speculate that since these modes must be linearly topologically robust, $\gamma(\mathfrak{H}_{N})$ should dominate and imply a qualitatively slower decay of the modes. We leave the interesting task of explicitly determining $\gamma$ to future work. We remark that both of these new invariants do not arise in linear systems.
We now briefly discuss the inclusion of time-reversal symmetry $\Theta$. Since our systems are bosonic, we only have the so-called class AI ($\Theta^2 = I$) in the AZ classification \cite{AZ}. We thus have to replace the $\mathit{K}$-groups with $\mathit{KR}$-groups in the sense of Atiyah \cite{Atiyah-Real}, \cite{Kitaev}. Let us focus on the invariants arising from soliton stability. For drift stability, instead of $H^{2}(*;\mathbb{Z})$, we have the group $H^{2}_{\mathbb{Z}_{2}}(*;\mathbb{Z}(1))$ \cite{D-G-AI}, which is trivial. Similarly for oscillatory stability, instead of $\tilde{\mathit{K}}^{0}(*)$ we have the group $\tilde{\mathit{KR}}^{-6}(*)$ \cite{Kitaev}, which using Bott periodicity (a fundamental result in $\mathit{K}$-theory) turns out to be, again, trivial. Hence, we can conclude that for individual solitons in bosonic systems, time-reversal symmetry $\Theta^2 = I$ adds nothing beyond the linear classification \cite{Kitaev}, \cite{AASS}.
\begin{table}[]
\begin{tabular}{cccccc}
\toprule
\multicolumn{1}{p{1cm}}{\centering{$d$}} & \multicolumn{1}{p{1.3cm}}{\centering{Boundary (y/n)}} & $P$ & $\Theta$ & \multicolumn{1}{p{1.2cm}}{L+DS} & \multicolumn{1}{p{1.2cm}}{L+OS} \\
\hline
2 & n & 0 & 0 & $\mathbb{Z}$ & $\mathbb{Z}$ \\
2 & y & 0 & 0 & $\mathbb{Z}$ & $\mathbb{Z}$ \\
2 & n & pm & 0 & $\mathbb{Z}^2\oplus\mathbb{Z}_2$ & $\mathbb{Z}^2\oplus\mathbb{Z}$ \\
3 & y & pm & 0 & $\mathbb{Z}^3\oplus\mathbb{Z}_2$ & $\mathbb{Z}^3\oplus\mathbb{Z}$ \\
2 & n & 0 & $\Theta^2=I$ & 0 & 0 \\
2 & y & 0 & $\Theta^2=I$ & 0 & 0\\ \hline
\end{tabular}
\caption{Some examples of topological classes for modes around solitons in $d$-dimensional systems, where $P$ is the crystallographic point group, $\Theta$ is the time-reversal operator, L+DS stands for linear plus drift stable topological classes and L+OS for linear plus oscillatory stable topological classes.}
\label{Table1}
\end{table}
We present a few examples in dimension $d =2,\,3$ with either of these symmetries in Table \ref{Table1}.
\section{Spaces of soliton solutions and Global classes}\label{sec:Moduli}
So far our analysis tells us the different topological character of individual solitons, but does a single soliton define the character of eq (\ref{eq:NLS/GP})? Given $\vec{A}, V$ and $f$ there will generally be many solitons which satisfy conditions (\ref{eq:mode-gap}, \ref{eq:Soliton-potential}, \ref{eq:Soliton-potential2}, \ref{eq:VK}). The set of soliton solutions can be given a topology. Let $M_{D}(E_{gap},\vec{A},V,f),\, M_{O}(E_{gap},\vec{A},V,f)$ be subspaces of soliton solutions, which further satisfy either eq (\ref{eq:VK}, \ref{eq:SpectralCondition}) or eq (\ref{eq:Neutral}) respectively. $M_{D}(E_{gap})$ actually forms a manifold \cite{Weinstein-Zhou}, but to our knowledge not much is known about $M_{O}(E_{gap})$. Let us further identify two solitons in $M_{D}(E_{gap})$ (or $M_{O}(E_{gap})$) as equivalent if they only differ by a translation ($\mathbb{Z}^{d}$), a Galilean boost ($\mathbb{R}^{d}$) or a phase ($\mathit{S}^1$) when these are symmetries of $(\vec{A},V,f)$. Abusing notation we will employ the same symbols for the spaces resulting from the identification. Hence a triple $(\vec{A},V,f)$ induces the maps
\begin{equation}\label{eq:Drift-Soliton-Manifold}
G^{D}_{\vec{A},V,f}:M_{D}(E_{gap})\longrightarrow Map(\mathbb{T}^{d},BGL_{\infty})\times Gr_{1},
\end{equation}
and
\begin{equation}\label{eq:Oscillatory-Soliton-Manifold}
G^{O}_{\vec{A},V,f}:M_{O}(E_{gap}) \longrightarrow Map(\mathbb{T}^{d},BGL_{\infty})\times BGL_{\infty}.
\end{equation}
The analogues for systems with boundaries are obtained using $Map(\mathbb{T}^{d-1},\mathcal{F}^{sa}_{*}(\mathfrak{H}))$ instead.
Then, if we mode adiabatically evolve the system $(\vec{A}(s),V(s),f(s))$, these spaces $M_{D}(E_{gap},s),\,M_{O}(E_{gap},s)$ will also change, and not necessarily in a continuous fashion. However, as long as they can be deformed into one another via a homotopy, we can use $M_{D}(E_{gap}),\,M_{O}(E_{gap})$ as a characteristic of the entire system and study the maps $G^{D}_{\vec{A},V,f},\,G^{O}_{\vec{A},V,f}$ up to homotopy to define global topological classes. We can also easily extend this to include symmetries. Let us consider the easiest example, $f(s,\Psi) = f(s,|\Psi|^2)$, where $M_{D}(E_{gap})$ is homeomorphic to an interval \cite{Weinstein-Zhou} once we have made the proper identifications. This interval is simply the set of $\lambda$'s which satisfy eqs (\ref{eq:mode-gap}), (\ref{eq:Soliton-potential}) and (\ref{eq:Soliton-potential2}). Then $M_{D}(E_{gap})$ can be contracted to a point and the global drift topological classes for these systems are the same (eq (\ref{eq:local-periodic})) as those for individual solitons. However, suppose that we allow our systems to have $f$'s whose $\Psi$-dependence is not only through its modulus. Then the phase symmetry $\Phi^{\lambda} \mapsto e^{i\theta}\Phi^{\lambda}$ \cite{Weinstein-Zhou} is lost, and we have $M_{D}(E_{gap}) = \mathit{S}^1\times (\lambda_{min},\lambda_{max})$ \cite{Tao-Blog}, so the topological classes are equivalent to
\begin{equation}\label{eq:Drift-Circle}
\bar{\mathit{K}}^{0,\tau}_{P}(\mathit{S}^1\times \mathbb{T}^{d})\oplus H^{2}_{P}(\mathit{S}^{1};\mathbb{Z}),
\end{equation}
or the analogue for systems with boundary. Just as an appetizer we note that for $P =0$, the trivial group, we obtain for fully periodic systems in $d =2$, the group $\tilde{\mathit{K}}^{0}(\mathbb{T}^3) = \mathbb{Z}^3$ and with boundary $\mathit{K}^{-1}(\mathbb{T}^2) = \mathbb{Z}^2$, so the presence of a topologically non-trivial space $M_{D}(E_{gap})$ can add more non-trivial topological classes than there are for individual solitons. We leave equivariant computations for future work.
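The ranks quoted here follow from the Chern character, which is rationally an isomorphism onto even (respectively odd) cohomology, together with the fact that the $\mathit{K}$-groups of tori are torsion-free; we summarize the count (our restatement of standard facts):

```latex
\mathrm{rank}\,\tilde{\mathit{K}}^{0}(\mathbb{T}^3)
 = \dim_{\mathbb{Q}} H^{even}(\mathbb{T}^3;\mathbb{Q}) - 1
 = \Big(\tbinom{3}{0} + \tbinom{3}{2}\Big) - 1 = 3,
\qquad
\mathrm{rank}\,\mathit{K}^{-1}(\mathbb{T}^2)
 = \dim_{\mathbb{Q}} H^{odd}(\mathbb{T}^2;\mathbb{Q})
 = \tbinom{2}{1} = 2.
```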
\section{Summary and Conclusions}\label{sec:Conclusion}
We have attacked, for the first time (to our knowledge), the problem of assigning topological invariants to non-linear topological systems in photonics, exciton-polaritons and BECs simultaneously \cite{Nonlinear-top-photonics}, \cite{Exciton-Polariton}. Our analysis establishes the conditions under which the modes around solitons have the same topological character as linear phases do, and describes how soliton stability conditions become topologically non-trivial when including crystallographic symmetries, providing new mode invariants that have no analogue in linear systems. The same does not happen if we instead have bosonic time-reversal symmetry. Using these constructions together with the space of soliton solutions, we build novel, global invariants for the entire system, providing a partial answer to the problem of classifying non-linear topological systems. General optical systems are non-Hermitian, as they have gains and losses. Thus, a natural extension of this work is to consider the $\mathit{K}$-theory arising from the point and line gap generalizations for these systems \cite{Shiozaki-Non-hermitian}. It would also be very interesting to see if other stability conditions for gap solitons have a topological character and to combine them with the non-linear topological nature of vortices, skyrmions and other standard topological solitons \cite{Manton-Solitons}.
\\
We thank O. Antol\'in-Camarena, K. Ramos-Musalem and J. Sheinbaum for useful comments.
Stay tuned for series two of the Parkinson's Life podcast
Author: Johanna Stiefler Johnson | Published: 6 May 2021
The award-winning Parkinson's Life podcast is back this month. Helping to amplify the voices of the international Parkinson's community, the podcast has so far reached more than 12,000 people around the world.
Thanks to the support of pharmaceutical companies, and the backing of a grant from the Boston Scientific Foundation Europe, the second series will bring together people with Parkinson's disease and experts in their field to explore topics such as sleep hygiene, exercise and mental health, and to offer advice to listeners.
Sandrine Bazile, president of the Boston Scientific Foundation Europe, says: "We fully support this project because it mirrors perfectly our mission to improve patient wellbeing using innovative solutions. We believe in the importance of the [Parkinson's Life] podcast series, with its aim to improve the information and education available to people with Parkinson's and their families – to help them live life to the full."
Find out more about the Parkinson's Life podcast.
Podcast: How do women experience Parkinson's differently from men?
Podcast: What's it like caring for someone with Parkinson's?
Podcast: How can you speak the same language as your healthcare professional?
Physiotherapist Josefa Domingos opens up with her patient Idelta Oliveira
Podcast: What's it like managing your working life with Parkinson's?
Our guests reflect on work life with Parkinson's
Podcast: Getting creative with Parkinson's
How creativity and Parkinson's can work hand-in-hand
\section{Overview}
Let $K\subset S^3$ be a knot and $k\in\mathbb{Z}\setminus\{0\}$.
In this paper by a \emph{$k$--twisting move} we mean a move depicted in Figure~\ref{fig:k_untwist},
that is, a full right $k$--twist on two strands of $K$ going in opposite directions (in \cite{Przy} this move
is called a $\ol{t}_{2k}$--move). We will call a knot \emph{$k$--simple}
if it can be unknotted by a single $k$--untwisting move. A knot is \emph{algebraically $k$--simple} if a single $k$--untwisting move
turns it into a knot with Alexander polynomial $1$.
\begin{figure}[h]
\input{twist.tex}
\caption{A $k$--twisting move for $k=2$. Note that the strands in the picture go in different directions.}\label{fig:k_untwist}
\end{figure}
Our first result gives an obstruction to the untwisting move in terms of the algebraic unknotting number \cite{Fo93,Muk90,Sae99}.
\begin{theorem}\label{cor:unknotting}
Suppose $K$ is an algebraically $k$--simple knot.
If $k$ is odd, then $K$
can be turned into a knot with Alexander polynomial $1$ using at most two crossing changes. If $k$ is even, then at most three crossing
changes suffice to turn $K$ into a knot with Alexander polynomial $1$.
\end{theorem}
Our second result restricts the homology of the double branched cover of an algebraically $k$--simple knot.
\begin{theorem}\label{thm:iscyclic}
Suppose $K$ is an algebraically $k$--simple knot. Denote by $\Sigma(K)$ the double branched cover of $K$. Then $H_1(\Sigma(K);\mathbb{Z})$ is cyclic.
\end{theorem}
Both Theorem~\ref{cor:unknotting} and Theorem~\ref{thm:iscyclic} follow from the following result, which is the main technical result of this paper.
\begin{theorem}\label{thm:main}
Suppose $K$ is an algebraically $k$--simple knot. Then there exists a polynomial $\alpha(t)\in\mathbb{Z}[t,t^{-1}]$ satisfying $\alpha(1)=0$,
$\alpha(t^{-1})=\alpha(t)$, such that
the matrix
\[\begin{pmatrix} \alpha(t) & 1 \\ 1 & -k \end{pmatrix}\]
represents the Blanchfield pairing for $K$.
\end{theorem}
Theorem~\ref{thm:main} can be regarded as a generalization of \cite[Theorem 3.2(b)]{Przy}.
It is possible to generalize the techniques used in this paper to study knots that are untwisted with several $\ol{t}_{2k}$ moves, possibly with varying
twisting coefficients $k$. This generalization is straightforward; we omit it to keep the paper shorter and more concise.
The proof of Theorem~\ref{thm:main} is given in Section~\ref{sec:proof}, and the proof of Theorem~\ref{cor:unknotting} in Section~\ref{sec:applications}.
Section~\ref{sec:linkingforms} contains
the proof of a stronger version of Theorem~\ref{thm:iscyclic}.
\begin{ack}
The author is grateful to A.~Conway, S.~Friedl, C.~Livingston, W.~Politarczyk and J.~Przytycki for fruitful conversations.
He is especially indebted to A.~Ranicki for pointing out \cite[Proposition 3.30]{Ran02}.
The research is supported by
the National Science Center grant 2016/22/E/ST1/00040.
\end{ack}
\section{Blanchfield pairing}\label{sec:blanchfield}
Let $K\subset S^3$ be a knot and let $M_K$ denote its zero-framed surgery. Denote by $\wt{M}_K$ the universal abelian cover of $M_K$. The chain
complex $C_*(\wt{M}_K;\mathbb{Z})$ admits the action of the deck transform and thus it has a structure of a $\Lambda$--module, where $\Lambda=\mathbb{Z}[t,t^{-1}]$.
The homology of this complex, regarded
as a $\Lambda$--module, is denoted by $H_*(M_K;\Lambda)$. The module $H_1(M_K;\Lambda)$ is called the \emph{Alexander module} of the knot $K$.
\begin{remark}
Usually the Alexander module is defined using knot complements instead of zero--framed surgeries, but the two definitions are
equivalent; see e.g. \cite{FP}.
\end{remark}
The ring $\Lambda$ has a naturally defined involution $t\mapsto t^{-1}$.
The Blanchfield pairing defined in
\cite{Bl57} for $K$ is a sesquilinear symmetric pairing $H_1(M_K;\Lambda)\times H_1(M_K;\Lambda)\to Q/\Lambda$, where $Q$ is the field
of fractions for $\Lambda$. We refer to \cite{FP,Hi12} for a precise and detailed construction of the Blanchfield pairing and \cite{Con,CFT} for
generalizations.
\begin{definition}\label{def:repres}
We say that an $n\times n$ matrix $A$ with entries in $\Lambda$ \emph{represents} the Blanchfield pairing if $H_1(M_K;\Lambda)\cong \Lambda^n/A\Lambda^n$
as a $\Lambda$--module, under this identification the Blanchfield pairing has the form $(a,b)\mapsto a^TA^{-1}\ol{b}$, and moreover
$A(1)$ is diagonalizable over $\mathbb{Z}$.
\end{definition}
It is known, see \cite{Kear}, that every Blanchfield pairing can be represented by a finite matrix. The minimal size of a matrix representing the Blanchfield
pairing of a knot is denoted by $n(K)$. It is equal to the algebraic unknotting number $u_a(K)$; see \cite{BF3,BF1}.
The invariant $n(K)$ can also be generalized to other coefficient rings $R$. In this paper we restrict to rings $R$ that are subrings of $\mathbb{C}$.
We denote by $n_R(K)$ the minimal size of a matrix over $R[t,t^{-1}]$ representing the
Blanchfield pairing over $R[t,t^{-1}]$.
We have that $n_R(K)\le n_{R'}(K)$
if $R'$ is a subring of $R$; for instance $n_{\mathbb{R}}(K)\le n_{\mathbb{Q}}(K)\le n_{\mathbb{Z}}(K)$.
Often $n_R(K)$ is easier to compute than $n(K)=n_\mathbb{Z}(K)$,
for example the value of $n_{\mathbb{R}}$ can be calculated from the Tristram--Levine signature \cite{BF2}. One motivation of this paper is to give
a geometric interpretation of $n_R(K)$ for some rings $R$.
\section{Proof of Theorem~\ref{thm:main}}\label{sec:proof}
The main ingredient in the proof of Theorem~\ref{thm:main} is the following.
\begin{theorem}[see \expandafter{\cite[Theorem 2.6]{BF1}}]\label{thm:26}
Suppose $W_K$ is a topological four--manifold such that $\partial W_K=M_K$, $\pi_1(W_K)=\mathbb{Z}$ and the inclusion induced map $H_1(M_K;\mathbb{Z})\to H_1(W_K;\mathbb{Z})$
is an isomorphism. Then $H_2(W_K;\Lambda)$ is free of rank $b_2(W_K)$. Moreover, if $A$ is a matrix over $\Lambda$
representing the twisted intersection form on $H_2(W_K;\Lambda)$ in some basis of $H_2(W_K;\Lambda)$, then $A$ also represents
the Blanchfield pairing on $M_K$.
\end{theorem}
In the light of Theorem~\ref{thm:26}, the
proof of Theorem~\ref{thm:main} consists of constructing an appropriate manifold $W_K$ and applying Theorem~\ref{thm:26}.
The construction begins with noticing that the
twisting move can be realized by a surgery. Namely we have the following well-known fact.
\begin{proposition}\label{prop:surgerytwist}
A $k$--twisting move can be realized by a $-1/k$ surgery on a knot. That is, if $K_2$ arises from $K_1$ by a $k$--twisting move, then there is a simple closed
curve $C$ disjoint from $K_1$, such that $C$ bounds a smooth disk intersecting $K_1$ at two points with opposite signs and
such that the $-1/k$ surgery on $C$ transforms $K_1$ into $K_2$; see Figure~\ref{fig:twist_surg}.
\end{proposition}
\begin{figure}
\input{twist_surgery.tex}
\caption{The $1/k$ surgery on the circle in the top picture induces $k$ full left twists of the two strands passing through the circle.}\label{fig:twist_surg}
\end{figure}
\begin{remark}
The move described in Figure~\ref{fig:twist_surg} is a special case of the Rolfsen twist, see \cite[Figure~5.27]{Stipsicz}.
It can be seen on \cite[Figure 3.12]{Sav} that the surgery with a positive coefficient (i.e. the $1/k$ surgery if $k>0$)
gives rise to a left $k$--twist and the surgery with a
negative coefficient (i.e. the $-1/k$ surgery with $k>0$) gives rise to a right $k$--twist.
\end{remark}
The surgery in Figure~\ref{fig:twist_surg} can be changed into a surgery with integer coefficients as in Figure~\ref{fig:othersurg}
by a `slam-dunk' operation, see \cite[Section 5.3]{Stipsicz}.
\begin{figure}
\input{othersurg.tex}
\caption{Changing a $1/k$ surgery on a circle to a surgery on a two-component link with framings $0$ and $-k$.}\label{fig:othersurg}
\end{figure}
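\begin{remark}
The framings in Figure~\ref{fig:othersurg} can be checked with the standard continued fraction rule for a chain of unknots (see \cite[Section 5.3]{Stipsicz}): a two-component chain with framings $a_1$ and $a_2$ amounts to the rational surgery with coefficient $a_1-\frac{1}{a_2}$ on the first component. For $a_1=0$ and $a_2=-k$ this gives
\[0-\frac{1}{-k}=\frac{1}{k},\]
in agreement with Figure~\ref{fig:othersurg}.
\end{remark}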
Suppose $J$ is a knot with Alexander polynomial $1$ and $K$ is a knot resulting from $J$ by applying a full left $k$--twist
(so $J$ is obtained from $K$ by a full right $k$--twist).
Let $M_J$ be the zero--framed surgery on $J$ and $M_K$ the zero--framed surgery on $K$. By \cite[Theorem 11.7B]{FreedmanQuinn}
$M_J$ is the boundary of a topological four--manifold that is a homotopy $D^3\times S^1$. Denote this four--manifold by $W_J$.
A full left $k$--twist on $J$ can be realized as a surgery on a two-component link with framings $0$ and $-k$ as in Figure~\ref{fig:othersurg}.
Let $c_0$ and $c_1$ denote the components of this link. The curve $c_0$ has framing $0$ and $c_1$ has framing $-k$. Both $c_0$ and $c_1$ are curves disjoint from $J$,
so we can and will assume that they are separated from a small neighborhood of $J$ in $S^3$. Performing a 0--surgery on $J$ does not affect these curves, therefore
$c_0$ and $c_1$ can also be viewed as curves on $M_J$. Now performing surgery on $c_0$ and $c_1$ produces $M_K$.
The trace of the surgery on $c_0$ and $c_1$ yields a cobordism between $M_J$ and $M_K$. Call this cobordism $W_{JK}$.
Define now
\[W_K=W_J\cup W_{JK}\]
so that $\partial W_K=M_K$.
We have the following fact.
\begin{lemma}\label{lem:cobounds}
We have $\pi_1(W_K)\cong\mathbb{Z}$, $H_1(W_K;\mathbb{Z})\cong \mathbb{Z}$ and the inclusion of $M_K$ to $W_K$ induces an isomorphism on the first homology.
Moreover $H_2(W_K;\mathbb{Z})\cong\mathbb{Z}^2$ and there exist spherical generators of $H_2(W_K;\mathbb{Z})$.
\end{lemma}
\begin{proof}
The homology groups of $W_K$ are calculated using the Mayer-Vietoris sequence. The manifold $W_K$ is obtained
from $W_J$ by adding two-handles along null-homologous curves $c_0$ and $c_1$.
This shows that $H_1(W_K;\mathbb{Z})\cong\mathbb{Z}$ and $H_2(W_K;\mathbb{Z})\cong\mathbb{Z}^2$.
To compute $\pi_1$ we observe that $\pi_1(W_J)\cong\mathbb{Z}$. Hence $c_0$ and $c_1$, being null-homologous,
are also null-homotopic. The van Kampen theorem implies that $\pi_1(W_K)\cong\mathbb{Z}$.
To show that the generators of $H_2(W_K;\mathbb{Z})$ can be chosen to be spherical we again use the fact that $c_0$ and $c_1$ are null-homotopic in $W_J$.
This implies that $c_0$ and $c_1$ bound disks $D_0$ and $D_1$ in $W_J$. The disk $D_1$
can be chosen to be the obvious disk on $M_J$, but $D_0$ is in general only an immersed disk and it cannot lie on $M_J$ (because in general $c_0$ is
not null-homotopic on $M_J$). We can form spheres $\Sigma_0$ and $\Sigma_1$
by adding to $D_0$ and $D_1$ the cores of the two-handles that are attached. It is clear that the homology classes $[\Sigma_0]$ and $[\Sigma_1]$
generate $H_2(W_K;\mathbb{Z})$. Moreover, by construction, $\Sigma_1$ is a smoothly embedded sphere and $\Sigma_0$ can be chosen to intersect $\Sigma_1$
precisely at one point.
Finally, in order to prove that the inclusion induced map $H_1(M_K;\mathbb{Z})\to H_1(W_K;\mathbb{Z})$ is an isomorphism, invert the cobordism $W_{JK}$, that is,
present $W_{JK}$ as $M_K\times[0,1]$ with two two--handles attached. The attaching curves of these handles are homologically trivial (but not necessarily
homotopically trivial, as $\pi_1(M_K)$ can be complicated),
hence the boundary inclusion induces an isomorphism $H_1(M_K;\mathbb{Z})\cong H_1(W_{JK};\mathbb{Z})$. Clearly $H_1(W_{JK};\mathbb{Z})\cong H_1(W_K;\mathbb{Z})$.
\end{proof}
Lemma~\ref{lem:cobounds} gives us two spheres $\Sigma_0,\Sigma_1\subset W_K$,
which are the generators of $H_2(W_K;\mathbb{Z})$. Choose a basepoint $x_0=\Sigma_0\cap\Sigma_1$. This choice allows us to consider $\Sigma_0$
and $\Sigma_1$ as elements of $\pi_2(W_K,x_0)$.
\begin{lemma}\label{lem:generates}
The group $\pi_2(W_K,x_0)$ is freely generated as a $\Lambda=\mathbb{Z}[\pi_1(W_K,x_0)]$--module by the classes of $\Sigma_0$ and $\Sigma_1$. In particular
$\pi_2(W_K,x_0)\cong\Lambda^2$.
\end{lemma}
\begin{proof}
The space $W_K$ is obtained from $W_J$ by attaching two two--handles along null-homotopic curves $c_0$ and $c_1$. We have $\pi_1(W_J)=\mathbb{Z}$
and $\pi_2(W_J)=0$, since $W_J$ is a homotopy $D^3\times S^1$. The statement follows from \cite[Proposition 3.30]{Ran02}.
\end{proof}
We will use Lemma~\ref{lem:generates} in connection with the following well-known result.
\begin{lemma}\label{lem:lambdaiso}
We have an isomorphism of $\Lambda$--modules $\pi_2(W_K,x_0)\cong \pi_2(\wt{W}_K,\wt{x}_0)\cong H_2(\wt{W}_K;\mathbb{Z})\cong H_2(W_K;\Lambda)$.
\end{lemma}
\begin{proof}
The
first isomorphism in the lemma is the isomorphism of higher homotopy groups under the covering map. The second is the Hurewicz isomorphism because $\wt{W}_K$ is
simply connected. The third isomorphism is the definition of the twisted homology groups.
\end{proof}
In particular,
Lemma~\ref{lem:generates} together with Lemma~\ref{lem:lambdaiso} gives
a simple and independent argument that $H_2(W_K;\Lambda)$ is a free $\Lambda$--module, compare \cite[Lemma 2.7]{BF1}.
\begin{corollary}\label{cor:liftsgenerate}
The (classes of the) lifts of $\Sigma_0$ and $\Sigma_1$ to $\wt{W}_K$ generate $H_2(W_K;\Lambda)$ as a $\Lambda$--module.
\end{corollary}
Let $A(t)$ denote a matrix over $\Lambda$ representing the intersection
form on $H_2(W_K;\Lambda)$ in the basis given by the lifts of $\Sigma_0$ and $\Sigma_1$.
The following result together with Theorem~\ref{thm:26} gives the proof of Theorem~\ref{thm:main} from the introduction.
\begin{theorem}\label{thm:form}
The matrix $A(t)$ has the form
\[\begin{pmatrix} \alpha(t) & 1\\ 1 & -k\end{pmatrix},\]
where $\alpha(t)\in \Lambda$ is such that $\alpha(1)=0$ and $\alpha(t^{-1})=\alpha(t)$.
\end{theorem}
\begin{proof}
By Corollary~\ref{cor:liftsgenerate}
the entries of $A(t)$ are twisted intersection indices of $\Sigma_0$ and $\Sigma_1$.
For example, the bottom-right entry of $A(t)$
is equal to the twisted intersection index of $\Sigma_1$ and $\Sigma_1'$, where $\Sigma'_1$ is a small perturbation of $\Sigma_1$
intersecting $\Sigma_1$ in finitely many points.
To compute the twisted intersection index of $\Sigma_1$ and $\Sigma_1'$, choose a basing for $\Sigma_1$ and $\Sigma_1'$,
that is, a path $\gamma$ from $x_0$ to $\Sigma_1$ and a path $\gamma'$ from $x_0$ to $\Sigma_1'$. Let $x,x'$
be the endpoints of $\gamma$ and $\gamma'$.
For any intersection point $y\in\Sigma_1\cap\Sigma_1'$
we choose a smooth path $\rho_y$ from $x$ to $y$ on $\Sigma_1$ and a path $\rho_y'$ from $x'$ to $y$ on $\Sigma_1'$; see Figure~\ref{fig:paths}.
\begin{figure}
\input{paths.tex}
\caption{Notation in the proof of Theorem~\ref{thm:form}. In the four-dimensional situation
the intersection of $\Sigma$ and $\Sigma'$ at $y$ is transverse.}\label{fig:paths}
\end{figure}
Let $\theta_y$ be the loop
$(\gamma')^{-1}(\rho_y')^{-1}\rho_y\gamma$. Define $n_y\in\mathbb{Z}$ to be the homology class of $\theta_y$ in $H_1(W_K;\mathbb{Z})\cong\mathbb{Z}$. Finally, let
$\epsilon_y$ be the sign of the intersection point $y$ assigned in the usual way, that is, if $T_y\Sigma_1\oplus T_y\Sigma_1'=T_yW_{K}$ agrees with
the orientation, we set $\epsilon_y=+1$, otherwise we set $\epsilon_y=-1$.
Given these definitions, the twisted intersection index of $\Sigma_1$ and $\Sigma_1'$ is equal to
\begin{equation}\label{eq:twistint}
\sum_{y\in\Sigma_1\cap\Sigma_1'} \epsilon_yt^{n_y}\in \mathbb{Z}[t,t^{-1}].
\end{equation}
In general this sum might depend on the choice of $\rho_y$ and $\rho_y'$. However, if every smooth closed curve on $\Sigma_1$ and on $\Sigma_1'$
is homologically trivial in $W_K$ (in the language of \cite[Section 3.2]{BF3} this means that $\Sigma_1$ and $\Sigma_1'$ are homologically invisible in $W_K$),
the definition does not depend on the paths $\rho_y$ and $\rho_y'$. In the present situation
$\Sigma_1$ and $\Sigma_1'$ are immersed (and even embedded) spheres, so they are homologically invisible, in particular \eqref{eq:twistint}
is a well-defined Laurent polynomial.
As $\Sigma_1$ and $\Sigma_1'$ are embedded spheres, we claim more, namely that $n_y$ does not depend on $y$. In fact, suppose $z$
is another intersection point of $\Sigma_1$ and $\Sigma_1'$.
If $n_z\neq n_y$, then
the curve $\delta=\rho_y\rho_z^{-1}\rho'_z(\rho_y')^{-1}$ is not homologically trivial in $W_K$. As $\Sigma_1'$ is a perturbation of $\Sigma_1$,
the path $\rho'_z(\rho_y')^{-1}$ can be pushed by a homotopy (in $W_K$) to a path $\wt{\rho}$ on $\Sigma_1$ having the same endpoints. Then
$\rho_y\rho_z^{-1}\wt{\rho}$ is a loop homotopic to $\delta$, but this is a loop on a smoothly embedded sphere $\Sigma_1$. Hence it is
contractible in $W_K$.
This shows that $n_y=n_z$.
We conclude that the twisted intersection index of $\Sigma_1$ and $\Sigma_1'$ is equal to the standard intersection number of $\Sigma_1$ and $\Sigma_1'$
(which is equal to the self-intersection of $\Sigma_1$, that is $-k$) multiplied by $t^{n_y}$. We can choose a basing for
$\Sigma_1'$ in such a way that $n_y=0$.
An analogous, but simpler argument shows that $\Sigma_0\cdot\Sigma_1=\pm 1$. Indeed by construction $\Sigma_0\cap\Sigma_1$ consists of a single point.
It follows that the twisted intersection between $\Sigma_0$ and $\Sigma_1$ is $\pm t^m$ for some $m$. We choose a basing for $\Sigma_0$ in such a way that
$m=0$. We can also choose an orientation of $\Sigma_0$ in such a way that the sign is positive.
\end{proof}
\begin{remark}
There is an alternative calculation of the matrix $A$ using Rolfsen's argument \cite{Rol75}. However one still has to make some effort proving
that $A$ represents not only the Alexander module, but also the Blanchfield pairing.
\end{remark}
\section{Proof of Theorem~\ref{cor:unknotting}}\label{sec:applications}
We begin with proving Theorem~\ref{cor:unknotting}. The following corollary deals with the first part of this theorem.
\begin{corollary}\label{thm:unknotting}
Suppose $K$ is algebraically $k$--simple and $k$ is odd. Then there are at most two crossing
changes that turn $K$ into a knot with Alexander polynomial $1$.
\end{corollary}
\begin{proof}
We have $A(1)=\left(\begin{smallmatrix} 0 & 1 \\ 1 &-k\end{smallmatrix}\right)$. As $k$ is odd, this matrix is diagonalizable over $\mathbb{Z}$.
By
\cite[Theorem 1.1]{BF3} we infer that the algebraic unknotting number of $K$ is at most $2$.
\end{proof}
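\begin{remark}
For $k=1$ the diagonalization can be made explicit: taking $P=\left(\begin{smallmatrix} 1 & 0\\ 1 & 1\end{smallmatrix}\right)$ one checks that
\[P^TA(1)P=\begin{pmatrix} 1 & 0\\ 0 & -1\end{pmatrix}.\]
For general odd $k$, the form $A(1)$ is odd (the vector $v=(1,1)^T$ satisfies $v^TA(1)v=2-k$, which is odd), indefinite and unimodular of rank $2$, hence congruent over $\mathbb{Z}$ to $\left(\begin{smallmatrix}1 & 0\\ 0 & -1\end{smallmatrix}\right)$.
\end{remark}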
If $k$ is even, then $A(1)$ is not diagonalizable over $\mathbb{Z}$, but $A(1)\oplus (1)$ is diagonalizable. The block matrix $A(t)\oplus (1)$ is a $3\times 3$
matrix over $\Lambda$ representing the Blanchfield pairing, so the algebraic unknotting number of $K$ is bounded from above by $3$. This shows the second
part of Theorem~\ref{cor:unknotting}.
We have the following consequence of Theorem~\ref{thm:main}.
\begin{theorem}
Suppose $K$ is algebraically $k$--simple. Let $R_k=\mathbb{Z}[\frac1k]$. Then
$n_{R_k}(K)=1$.
\end{theorem}
\begin{proof}
By Theorem~\ref{thm:main} we know that the Blanchfield pairing
over $\mathbb{Z}$ can be represented by a matrix of form $\left(\begin{smallmatrix} \alpha(t) & 1 \\ 1 & -k\end{smallmatrix}\right)$. The same matrix represents
the Blanchfield pairing over $R_k$, but over $R_k$ this matrix is congruent to a matrix $\left(\begin{smallmatrix} \wt{\alpha}(t) & 0 \\ 0 & 1\end{smallmatrix}\right)$
for $\wt{\alpha}(t)\in R_k[t,t^{-1}]$.
By \cite[Proposition 1.7.1]{Ran81} (see also \cite[Proposition 3.1]{BF1}) the matrix $(\wt{\alpha}(t))$ also represents the Blanchfield pairing over $R_k[t,t^{-1}]$.
\end{proof}
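\begin{remark}
The congruence over $R_k$ can be made explicit: for $P=\left(\begin{smallmatrix} 1 & 0\\ \frac1k & 1\end{smallmatrix}\right)$ a direct computation gives
\[P^T\begin{pmatrix}\alpha(t) & 1\\ 1 & -k\end{pmatrix}P=\begin{pmatrix}\alpha(t)+\frac1k & 0\\ 0 & -k\end{pmatrix},\]
and $-k$ is a unit in $R_k$.
\end{remark}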
The following corollary is well known, see \cite{Przy}.
\begin{corollary}
If $K$ is algebraically $k$--simple, then its Alexander polynomial is equal to $\Delta_K(t)=1+k\alpha(t)$, where $\alpha(t)\in\mathbb{Z}[t,t^{-1}]$.
\end{corollary}
\begin{proof}
This follows from Theorem~\ref{thm:main}, because if $A(t)$ represents the Blanchfield pairing of a knot $K$, then $\Delta_K(t)=\det A(t)$
up to multiplication by a unit in $\Lambda$.
\end{proof}
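\begin{remark}
As an illustration, the figure-eight knot has $\Delta(t)=-t+3-t^{-1}=1+\alpha(t)$ with $\alpha(t)=2-t-t^{-1}$, so the corollary gives no obstruction for $k=1$. On the other hand $\Delta(t)\not\equiv 1\pmod 2$, so the figure-eight knot is not algebraically $2$--simple.
\end{remark}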
\section{Linking forms}\label{sec:linkingforms}
An abstract \emph{linking pairing} is a pair $(H,l)$,
where $H$ is a finite abelian group of odd order and $l$ is a bilinear symmetric pairing $l\colon H\times H\to\mathbb{Q}/\mathbb{Z}$.
As a model example, if $Y$ is a closed three--manifold with $b_1(Y)=0$, there is defined a linking pairing $l(Y)$ on $H=H_1(Y;\mathbb{Z})$. If $Y=\Sigma(K)$
is the double branched cover of a knot $K$, we denote this pairing by $l(K)$. It is known that the linking pairing $l(K)$ is represented by $V+V^T$,
where $V$ is the Seifert matrix for $K$. The meaning of `represented' is explained in the following definition.
\begin{definition}
Let $P$ be an $n\times n$ matrix with integer coefficients and such that $\det P$ is odd. The \emph{linking form represented by $P$} is the pair
$(H(P),l(P))$, where $H(P)=\mathbb{Z}^n/P\mathbb{Z}^n$ and $l(P)$ is the bilinear
form defined by
\begin{align*}
\mathbb{Z}^n/P\mathbb{Z}^n\times \mathbb{Z}^n/P\mathbb{Z}^n&\to \mathbb{Q}/\mathbb{Z}\\
(a,b)&\mapsto a^T P^{-1} b\bmod 1.
\end{align*}
\end{definition}
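\begin{remark}
For example, the $1\times 1$ matrix $P=(3)$ represents the linking form with $H(P)=\mathbb{Z}/3$ and
\[l(P)(a,b)=\frac{ab}{3}\bmod 1.\]
\end{remark}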
We have the following relation between the Blanchfield form for $K$ and the linking form $l(K)$.
\begin{proposition}[see \expandafter{\cite[Lemma 3.3]{BF1}}]
If $A$ is a matrix over $\Lambda$ representing the Blanchfield pairing, then $l(A(-1))=2l(K)$.
\end{proposition}
Here $2l(K)$ means the linking pairing with the same underlying group as $l(K)$, but the linking form is multiplied by $2$; compare \cite[Section 3]{BF1}.
We can use this result to obtain the following corollary.
\begin{corollary}\label{cor:linking_form_obstruction}
Suppose $K$ is algebraically $k$--simple. Then the linking form $2l(K)$ is isometric to the linking form
represented by
\begin{equation}\label{eq:asB}
B=\begin{pmatrix} d & 1\\ 1 & -k\end{pmatrix},
\end{equation}
where $d=\alpha(-1)\in\mathbb{Z}$ is such that $-(dk+1)$ is the (signed) determinant of $K$.
\end{corollary}
As in \cite[Section 5.2]{BF1} we can use Corollary~\ref{cor:linking_form_obstruction} to obstruct untwisting number $2$.
From Corollary~\ref{cor:linking_form_obstruction} we immediately recover Theorem~\ref{thm:iscyclic} from the introduction.
\begin{proposition}\label{prop:is_cyclic}
If $K$ is algebraically $k$--simple and $\Sigma(K)$ is the double branched cover, then $H_1(\Sigma(K);\mathbb{Z})$ is cyclic.
\end{proposition}
\begin{remark}
It follows that Wendt's criterion for the unknotting number \cite{We37}, coming from the double branched covers,
does not distinguish between knots that have unknotting number $1$ and knots that
are algebraically $k$--simple for some $k$.
\end{remark}
\begin{proof}[Proof of Proposition~\ref{prop:is_cyclic}]
By Corollary~\ref{cor:linking_form_obstruction} we infer that $H_1(\Sigma(K);\mathbb{Z})\cong \mathbb{Z}^2/B\mathbb{Z}^2$, where $B$ is as
in \eqref{eq:asB}. Subtract from the first column of $B$ the second column multiplied by $d$ to obtain the matrix
$\left(\begin{smallmatrix} 0 & 1 \\ 1+dk & -k\end{smallmatrix}\right)$. Then add to the second row the first one multiplied by $k$. We obtain the matrix
\[B'=\begin{pmatrix} 0 & 1 \\ 1+dk & 0\end{pmatrix}.\]
Row and column operations on matrices do not affect the cokernel, hence $\mathbb{Z}^2/B'\mathbb{Z}^2\cong\mathbb{Z}^2/B\mathbb{Z}^2$. Evidently we have $\mathbb{Z}^2/B'\mathbb{Z}^2\cong\mathbb{Z}/|dk+1|\mathbb{Z}$.
\end{proof}
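\begin{remark}
As a sanity check, for $d=1$ and $k=2$ the matrix $B=\left(\begin{smallmatrix}1 & 1\\ 1 & -2\end{smallmatrix}\right)$ has determinant $-3=-(dk+1)$ and cokernel $\mathbb{Z}/3$, which is cyclic, in agreement with Proposition~\ref{prop:is_cyclic}.
\end{remark}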
Known as the Cosmic Microwave Background (CMB), the existence of this radiation has helped to inform our understanding of how the Universe began. The temperature of this radiation stays inversely proportional to a parameter that describes the relative expansion of the universe over time, known as the scale length. The analyses were performed on two maps that have had the foregrounds removed as far as possible: the "internal linear combination" map of the WMAP collaboration and a similar map prepared by Max Tegmark and others. The first accurate measurements of the CMB were made with a satellite orbiting Earth. The Cosmic Microwave Background (CMB) is a form of electromagnetic radiation dating from an early stage of the Universe. [83][10] This motion results in an anisotropy of the data (CMB appearing slightly warmer in the direction of movement than in the opposite direction). The detailed analysis of CMBR data to produce maps, an angular power spectrum, and ultimately cosmological parameters is a complicated, computationally difficult problem. [46] As of 2010, several experiments to improve measurements of the polarization and the microwave background on small angular scales are ongoing. [25] In 1964, David Todd Wilkinson and Peter Roll, Dicke's colleagues at Princeton University, began constructing a Dicke radiometer to measure the cosmic microwave background. Before recombination, the Universe consisted of a hot, dense plasma of electrons and baryons. The detailed provenance of this early ionizing radiation is still a matter of scientific debate. New predictions for cosmological defect theories and an overview of the inflationary theory are discussed.
On 21 March 2013, the European-led research team behind the Planck cosmology probe released the mission's all-sky map of the cosmic microwave background. However, observations of galaxies today seem to indicate that most of the volume of the intergalactic medium (IGM) consists of ionized material (since there are few absorption lines due to hydrogen atoms). The Wilkinson Microwave Anisotropy Probe (WMAP) was launched in 2001 to observe the fluctuations seen by COBE in greater detail and with more sensitivity. The E-modes arise naturally from Thomson scattering in a heterogeneous plasma. By this measure, decoupling took place over roughly 115,000 years, and when it was complete, the universe was roughly 487,000 years old. Precise measurements of the CMB are critical to cosmology, since any proposed model of the universe must explain this radiation. [102][103][104][105] The photon number density of a blackbody having such a temperature is about 411 photons per cubic centimetre. This is consistent in any direction with very minor variations in density - the apparent 'ripples' in the radiation. The Cosmic Microwave Background (CMB) radiation is the afterglow of the Big Bang. When the universe cooled enough, protons and electrons combined to form neutral hydrogen atoms.
Its amplitude depends on the time due to the Earth's orbit about the barycenter of the solar system. [97][98][99] Ultimately, due to the foregrounds and the cosmic variance problem, the greatest modes will never be as well measured as the small angular scale modes. Even though we cannot see it unaided, we are able to observe this early energy of the Universe via the Cosmic Microwave Background (CMB). It took another 15 years for Penzias and Wilson to stumble into discovering that the microwave background was actually there. Inspired by the COBE results, a series of ground and balloon-based experiments measured cosmic microwave background anisotropies on smaller angular scales over the next decade. This map of the cosmic microwave background, the light released just 380,000 years after the Big Bang, was created using observations by NASA's WMAP spacecraft. In the above all-sky map, radiation in the Earth's direction of motion appears blueshifted and hence hotter, while radiation on the opposite side of … Two other effects which occurred between reionization and our observations of the cosmic microwave background, and which appear to cause anisotropies, are the Sunyaev–Zel'dovich effect, where a cloud of high-energy electrons scatters the radiation, transferring some of its energy to the CMB photons, and the Sachs–Wolfe effect, which causes photons from the Cosmic Microwave Background to be gravitationally redshifted or blueshifted due to changing gravitational fields. In light of the most recent observational results, the CMB appears to confirm very well the big bang models.
The CMB spectrum can distinguish between these two because these two types of perturbations produce different peak locations. The galaxy orbits in the Local Group of Galaxies, and the Local Group falls toward the Virgo Cluster of Galaxies. Its detectors were trialled in the Antarctic Viper telescope as the ACBAR (Arcminute Cosmology Bolometer Array Receiver) experiment - which has produced the most precise measurements at small angular scales to date - and in the Archeops balloon telescope. After receiving a telephone call from Crawford Hill, Dicke said "Boys, we've been scooped." [101] Carefully accounting for the procedure used to remove the foregrounds from the full sky map further reduces the significance of the alignment by ~5%. The peaks correspond, roughly, to resonances in which the photons decouple when a particular mode is at its peak amplitude. Although there were several previous estimates of the temperature of space, these suffered from two flaws.
First, they were measurements of the effective temperature of space and did not suggest that space was filled with a thermal Planck spectrum. [17] Two of the greatest successes of the Big Bang theory are its prediction of the almost perfect black body spectrum and its detailed prediction of the anisotropies in the cosmic microwave background. [91][92][93] The most longstanding of these is the low-ℓ multipole controversy. To our eyes (and telescopes) space appears black, but to a sensitively calibrated radio telescope, a background glow appears. The mainstream astronomical community, however, was not intrigued at the time by cosmology. The photons that existed at the time of photon decoupling have been propagating ever since, though growing fainter and less energetic, since the expansion of space causes their wavelength to increase over time (and wavelength is inversely proportional to energy according to Planck's relation). Either such coherence is acausally fine-tuned, or cosmic inflation occurred. The primary goal of these experiments was to measure the angular scale of the first acoustic peak, for which COBE did not have sufficient resolution. [14] The color temperature of the ensemble of decoupled photons has continued to diminish ever since; now down to 2.7260±0.0013 K,[4] it will continue to drop as the universe expands. Thus, C_ℓ is independent of m. Different choices of ℓ correspond to multipole moments of the CMB.
Cosmic microwave background (CMB), also called cosmic background radiation, electromagnetic radiation filling the universe that is a residual effect of the big bang 13.8 billion years ago. , The angular scale of the first peak determines the curvature of the universe (but not the topology of the universe). The maximum of the PVF (the time when it is most likely that a given CMB photon last scattered) is known quite precisely. The cosmic microwave background appears very different to observers at different redshifts, because they're seeing it as it was earlier in time. The hint to a violation of parity symmetry was found in the cosmic microwave background radiation, the remnant light of the Big Bang. This is by far the largest temperature variation in \u2026 This recombination event happened when the temperature was around 3000\u00a0K or when the universe was approximately 379,000\u00a0years old. The largest inhomogeneous region detected in the cosmic microwave background map is known as the Cold Spot and has a very slightly lower temperature by about 70 microKelvins (a microKelvin being only a millionth of a degree). This \u201cmean\u201d is called CMB monopole, and it is observed to have an average temperature of about T\u03b3 = 2.7255 \u00b1 0.0006K[83] with one standard deviation confidence. {\\displaystyle Y(\\theta ,\\varphi )} According to inflation theory, these irregularities were the \"seeds\" that became the galaxies. According to the Big Bang model, the radiation from the sky we measure today comes from a spherical surface called the surface of last scattering. eV In the early 1960s physicists at Princeton University, New Jersey, as well as in the Soviet Union, took up the problem again and began to build a microwave receiver that might detect, in the words of the Belgian cleric and cosmologist Georges Lema\u00eetre, \u201cthe vanished brilliance of the origin of the worlds.\u201d. 
{\\displaystyle n_{\\gamma }} In cosmology, the rest frame for the cosmic microwave background (CMB) appears to be a preferred frame of reference. [90], With the increasingly precise data provided by WMAP, there have been a number of claims that the CMB exhibits anomalies, such as very large scale anisotropies, anomalous alignments, and non-Gaussian distributions. Though there are several theories of how the universe began, the most widely accepted is the Big Bang Theory. Recent results from various observations of the anisotropies of the microwave background are described and a summary of the proposed experiments is presented. The cosmic microwave background is polarized at the level of a few microkelvin. \u03b8 Y This represents the set of locations in space at which the decoupling event is estimated to have occurred[15] and at a point in time such that the photons from that distance have just reached observers. The COBE was developed by NASA's Goddard Space Flight Center with scientific guidance from the COBE Science Working Group. [30], The interpretation of the cosmic microwave background was a controversial issue in the 1960s with some proponents of the steady state theory arguing that the microwave background was the result of scattered starlight from distant galaxies. 411 \u03b3 Cosmic microwave background (CMB) temperature anisotropies have and will continue to revolutionize our understanding of cosmology. Get exclusive access to content from our 1768 First Edition with your subscription. The conditions at the beginning of the universe left their imprint on the size of the fluctuations. Cosmic microwave background radiation Cosmic Microwave Background Radiation Radiation left over from the Big Bang. One method of quantifying how long this process took uses the photon visibility function (PVF). Astronomy Scale and History of the Universe The Big Bang. 3. 
( The temperature variation in the CMB temperature maps at higher multipoles, or \u2113 \u2265 2, is considered to be the result of perturbations of the density in the early Universe, before the recombination epoch. Such motion is not measured relative to the galaxies themselves (the Virgo galaxies have an average velocity of recession of about 1,000 km\/s [600 miles\/s] with respect to the Milky Way system) but relative to a local frame of reference in which the cosmic microwave background radiation would appear as a perfect Planck spectrum with a single radiation temperature. 10 Collisionless damping is caused by two effects, when the treatment of the primordial plasma as fluid begins to break down: These effects contribute about equally to the suppression of anisotropies at small scales and give rise to the characteristic exponential damping tail seen in the very small angular scale anisotropies. m m \u2113 cm In June 2001, NASA launched a second CMB space mission, WMAP, to make much more precise measurements of the large scale anisotropies over the full sky. 0.260 New predictions for cosmological defect theories and an overview of the inflationary theory are discussed. [3] Cosmologists refer to the time period when neutral atoms first formed as the recombination epoch, and the event shortly afterwards when photons started to travel freely through space rather than constantly being scattered by electrons and protons in plasma is referred to as photon decoupling. Nevertheless, the statistics of the distribution of angular fluctuations appeared different from random noise, and so the members of the COBE investigative team found the first evidence for the departure from exact isotropy that theoretical cosmologists long predicted must be there in order for galaxies and clusters of galaxies to condense from an otherwise structureless universe. 
When the universe was young, before the formation of stars and planets, it was denser, much hotter, and filled with a uniform glow from a white-hot fog of hydrogen plasma. But these speeds are less than the speed that all of these objects together move relative to the cosmic microwave background (CMB). The COBE satellite carried instrumentation aboard that allowed it to measure small fluctuations in intensity of the background radiation that would be the beginning of structure (i.e., galaxies and clusters of galaxies) in the universe. With a traditional optical telescope, the space between stars and galaxies (the background) is completely dark. The energy density in the CMB is only 4\u00d710 \u221214 J\/m 3. This radiation, a faint remnant of earliest moments of the universe, is called the cosmic microwave background, or CMB, and it exists today.An image of this radiation obtained by the COBE satellite appears throughout this unit and below. 4 | [38][39] The team received the Nobel Prize in physics for 2006 for this discovery. \u03c6 [54] The third peak can be used to get information about the dark-matter density.[55]. These include DASI, WMAP, BOOMERanG, QUaD, Planck spacecraft, Atacama Cosmology Telescope, South Pole Telescope and the QUIET telescope. [75][76], The second type of B-modes was discovered in 2013 using the South Pole Telescope with help from the Herschel Space Observatory. They have been measured in detail, and match what would be expected if small thermal variations, generated by quantum fluctuations of matter in a very tiny space, had expanded to the size of the observable universe we see today. The remaining irregularities were caused by quantum fluctuations in the inflaton field that caused the inflation event. \/ Peebles, and their colleagues at Princeton were planning to search for. 
Once a bright autumnal hue, the night sky now appears black because this energy has moved into the microwave range and thus is no longer perceptible to the human eye (Figure 1). Even though we cannot see it unaided, we are able to observe this early energy of the Universe via the Cosmic Microwave Background (CMB). \u2026 , and the ratio to the critical density is \u03a9\u03b3 = 5.38\u00a0\u00d7\u00a010\u22125.[84]. ... \u201cthere appears to be an excess dash of radiation that is not due to CMB photons. This glow is strongest in the microwave region of the radio spectrum. \"[1][28][29] A meeting between the Princeton and Crawford Hill groups determined that the antenna temperature was indeed due to the microwave background. The most prominent of the foreground effects is the dipole anisotropy caused by the Sun's motion relative to the CMBR background. 3 When it originated some 380,000 years after the Big Bang\u2014this time is generally known as the \"time of last scattering\" or the period of recombination or decoupling\u2014the temperature of the universe was about 3000\u00a0K. This corresponds to an energy of about 0.26\u00a0eV,[50] which is much less than the 13.6\u202feV ionization energy of hydrogen. We present a brief review of current theory and observations of the cosmic microwave background (CMB). \u2212 CMBR = cosmic microwave background radiation. In cosmology, the rest frame for the cosmic microwave background (CMB) appears to be a preferred frame of reference. Explain Hubble Law And Hubble Constant. This light is called the cosmic microwave background (CMB). As the universe expanded, adiabatic cooling caused the energy density of the plasma to decrease until it became favorable for electrons to combine with protons, forming hydrogen atoms. \u2113 CMB dipole is also frame-dependent. The cosmic microwave background radiation and the cosmological redshift-distance relation are together regarded as the best available evidence for the Big Bang theory. 
Cosmic Microwave Background. Subsequent to the discovery of the CMB, hundreds of cosmic microwave background experiments have been conducted to measure and characterize the signatures of the radiation. In the Big Bang model for the formation of the universe, inflationary cosmology predicts that after about 10\u221237 seconds[11] the nascent universe underwent exponential growth that smoothed out nearly all irregularities. {\\displaystyle Y_{\\ell m}(\\theta ,\\varphi )} The fine-scale structure is superimposed on the raw CMBR data but is too small to be seen at the scale of the raw data. Note that the temperature appears completely uniform on this scale. The Impact ofAtmospheric Fluctuations on Degree-scale Imaging of the Cosmic Microwave Background Oliver P. Lay Radio Astronomy Laboratory, University of California, Berkeley, CA 94720 and Nils W. Halverson1 Dept. The origin of the stellar Initial Mass Function (IMF) and its variation with cosmic time or with diverse environmental conditions still lack a complete physical interpretation. The pressure of the photons tends to erase anisotropies, whereas the gravitational attraction of the baryons, moving at speeds much slower than light, makes them tend to collapse to form overdensities. 2003 \u2013 E-mode polarization spectrum obtained by the CBI. \u2248 For details about the reasoning that the radiation is evidence for the Big Bang, see Cosmic background radiation of the Big Bang. 2006 \u2013 Two of COBE's principal investigators, 2014 \u2013 On March 17, 2014, astrophysicists of the, 2015 \u2013 On January 30, 2015, the same team of astronomers from BICEP2 withdrew the claim made on the previous year. Even in the COBE map, it was observed that the quadrupole (\u2113 = 2, spherical harmonic) has a low amplitude compared to the predictions of the Big Bang. 
[44] They ruled out cosmic strings as a major component of cosmic structure formation and suggested cosmic inflation was the right theory of structure formation.[45]. Be on the lookout for your Britannica newsletter to get trusted stories delivered right to your inbox. \u2261 3 How does something smaller surround something bigger ? Why is this? g Recent results from various observations of the anisotropies of the microwave background are described and a summary of the proposed experiments is presented. Encyclopaedia Britannica in every direction around 3.3621 \u00b1 0.0010 mK correspond, roughly, to resonances in the! Present a brief review of current theory and experiment in cosmology a violation parity! Stars in the microwave background is polarized at the scale of the effective temperature of space and not. Frame of reference image pair show the same map displayed in a scale such blue! In every direction deep sky when the universe ( but not the of! Are described and a summary of the raw data bright strip across the represented! That is not due to CMB photons are scattered by free electrons ( each other, and triggered in. First nonillionth of a few microkelvin light is called the cosmic microwave background first! Because they 're seeing it as it was necessary to subtract both the dipole anisotropy caused by Planck... Remnant light of the universe contains 4.9 % ordinary matter, and Planck, remnant. The second peak was tentatively detected the third peak can be used get... Region of the raw CMBR data but is too small to be moving at (. Any proposed model of the first peak determines the curvature of the inflationary are... Planck spectrum and is less susceptible to dust effects was developed by NASA November! Broken into hydrogen ions early, in the microwave background ( CMB ) appears to seen. Present vast cosmic web of galaxy clusters and dark represented temperature fluctuations that amount about! 
These results implied that the first stars in the radiation the Earth \u2019 s Paradox and the QUIET telescope an! Hill, Dicke said Boys, we 've been scooped glow strongest! As the first peak determines the curvature of the universe understanding of cosmology redirects. Decrease in energy these results implied that the radiation is an emission of,! Photon visibility function ( PVF ) electromagnetic spectrum [ 109 ], observations! The most recent observational results, the fluctuations is 2.729 Kelvin microwave background ( CMB.... Displayed in a scale such that blue corresponds to 2.721 Kelvin and is., POLARBEAR focuses on a smaller patch of the cosmic microwave background ( CMB ) properties and History of solar! The the cosmic microwave background appears that the early universe such measurements demand absolute temperature devices, such the... 86 ] [ 89 ] the team received the 1978 Nobel Prize in physics for their discovery characteristic structure. Reasoning that the microwave region of the universe expanded, both the and! Part in 100,000\u2014not much higher than the accuracy of this term is 1 year, [ 86 ] 93... The CBI universe continually falls to dust effects act against each other, and their colleagues at Princeton were to! The anisotropies of the foreground effects is the dipole anisotropy caused by quantum in. Is strongest in the mid-1960s curtailed interest in alternatives such as electrons that are larger than the that... Data of BICEP2 and Planck, the fluctuations are coherent on angular scales that are not bound atoms! Present a brief review of current theory and observations of the solar system would be better to something. Seeds '' that became the Galaxies observations of the universe formed half a years. Are not looking at the level of a conflict in the plasma Group Galaxies... Field that caused the inflation event non-sky signal noise or CMB, radiation... Characteristic peak structure in 1992 [ 89 ] the third peak density. [ 7 ] not. 
November 1989 and foreground sources into account show 10-square-degree patches of all-sky maps observable! Baryon density. [ 55 ] be used to get trusted stories delivered right to your inbox scientific... Method of quantifying how long this process took uses the photon visibility (... Often taken as the time '' at which P ( t ) has a maximum 372,000. Coherent on angular scales that are not looking at an object through fog, details the! = 1 ) topology of the measurements the observation done by different mapping measurements these include DASI WMAP! Grew cooler product after the subtraction pair show the same map displayed in a heterogeneous.! - cosmic microwave background C\\equiv \\langle |a_ { \\ell m } |^ { }! Is around 3.3621 \u00b1 0.0010 mK Thomson scattering in a scale such that blue corresponds 2.721. Cmb as observed by the CBI plasma and the cosmological redshift-distance relation are together regarded as the FIRAS on! In nature. [ 55 ] than WMAP interest in alternatives such electrons... Stumble into discovering that the early universe would have to have inhomogeneities the., 2000 mainstream astronomical community, however, was not intrigued at the scale of the universe became transparent two! The Ages of stars the B-mode polarization at 150 GHz was published by the Sun 's relative. From space \u201c the cosmic microwave background radiation of the anisotropies of raw... Defect theories and an overview of the raw data black body thermal energy coming from parts! The NASA COBE mission clearly confirmed the primary anisotropy with the cosmic background... Thermal Planck spectrum the galaxy orbits in the cosmic microwave background radiation is an almost-uniform of. 92 ] [ 39 ] the dipole anisotropy caused by the CBI, [ 86 [! Ralph Alpher and Robert Herman this mean temperature may be partly explained a! Barycenter of the effective temperature of the raw CMBR data but is too small to the. Peak structure |a_ { \\ell m } |^ { 2 } \\rangle. transparent light! 
This is the most longstanding of these objects together move relative to map. Is 1 year, [ 86 ] [ 92 ] [ 87 ] which fits the observation done COBE! Own cosmic background radiation, the rest frame for the Big Bang theory the Standard cosmological model the article ]! { \\displaystyle C\\equiv \\langle |a_ { \\ell m } |^ { 2 } \\rangle. not intrigued at the of... 77 ] in October 2014, a measurement of the temperature appears completely uniform on this scale subtract. To BICEP2, POLARBEAR focuses on a smaller patch of the relict radiation from the Milky Way and,... The remnant light of the relict radiation from the Big Bang, when the universe and can be detected every! Virgo Cluster of Galaxies, and triggered fluctuations in the microwave region of the Big Bang.... Set by ground-based experiments during the 1980s the alternative term relic radiation particularly. By WMAP 're seeing it as it was necessary to subtract both the plasma include DASI, WMAP,,! Based on the 2013 data, these results implied that the first spherical harmonic ( \u2113 = 1.! The topology of the relict radiation from the Milky Way beginning of the primordial density perturbations confirm. Density. [ 55 ] is at its peak amplitude is flat harmonic ( \u2113 = 1 ) of,. Have inhomogeneities at the beginning of the first spacecraft, Atacama cosmology telescope, the space stars! Different predictions if Earth happened to be the Differential of a blackbody spectrum COBE was developed by in. These phenomena caused the inflation event has been confirmed to be a preferred frame of reference and Herman. The plasma the anisotropy of the universe formed half a billion years after the Big Bang, cosmic! Temperature were imprinted on the raw data data support the Big Bang, occurred by accident smaller scale than.... Variations in density - the apparent cosmological horizon at recombination the inflation event defect. Universe ( but not the topology of the universe expands, the background radiation, the light. 
Electromagnetic radiation the cosmic microwave background appears from an early stage of the universe is flat particular mode at. Their colleagues at Princeton were planning to search for publishing their findings in.. Amplitude of CMB the NASA COBE mission clearly confirmed the primary anisotropy with the CMBR theoretical,...","date":"2021-04-22 14:19:39","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5487034916877747, \"perplexity\": 872.0128747032213}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-17\/segments\/1618039610090.97\/warc\/CC-MAIN-20210422130245-20210422160245-00163.warc.gz\"}"}
Q: $\sum_{j=i}^n\binom nj\binom jix^{n-j}=\binom ni\left(\frac1x+1\right)^{n-i}x^{n-i}$ Question:
$$\sum_{j=i}^n\binom nj\binom jix^{n-j}=\binom ni\left(\frac1x+1\right)^{n-i}x^{n-i}$$
I can't prove this identity. Any help would be appreciated.
A: $$
\begin{aligned}
\binom{n}{i}\left(\frac{1}{x}+1\right)^{n-i} x^{n-i} &= \binom{n}{i}\left(1+x\right)^{n-i} = \sum_{l=0}^{n-i}\binom{n}{i}\binom{n-i}{l}x^{n-i-l}\\
&=\sum_{j=i}^n \binom{n}{i}\binom{n-i}{j-i}x^{n-j}\\
&=\sum_{j=i}^n \frac{n!}{i! (n-j)! (j-i)!} x^{n-j}\\
&=\sum_{j=i}^n \binom{n}{j}\binom{j}{i} x^{n-j}
\end{aligned}
$$
(I leave you the task of filling in the details :))
A: Re-write the right hand side as
$$\binom{n}{i}\left(\frac{1}{x}+1\right)^{n-i}x^{n-i}=\binom{n}{i}\left(1+x\right)^{n-i}$$
then using the "find the coefficient of $y^i$" operator $\left[y^i\right]$ the above is clearly
$$\left[y^i\right]\left(y+(1+x)\right)^n$$
which may alternatively be written
$$\begin{align}
\left[y^i\right]\left((1+y)+x\right)^n&=\left[y^i\right]\sum_{j=0}^{n}\binom{n}{j}x^{n-j}\left(1+y\right)^j\\[1ex]
&=\left[y^i\right]\sum_{j=0}^{n}\binom{n}{j}x^{n-j}\sum_{m=0}^{j}\binom{j}{m}y^m\\[1ex]
&=\left[y^i\right]\sum_{j=0}^{n}\sum_{m=0}^{j}\binom{n}{j}\binom{j}{m}x^{n-j}y^m\\[1ex]
&=\left[y^i\right]\sum_{m=0}^{n}\sum_{j=m}^{n}\binom{n}{j}\binom{j}{m}x^{n-j}y^m\\[1ex]
&=\left[y^i\right]\sum_{m=0}^{n}y^m\left(\sum_{j=m}^{n}\binom{n}{j}\binom{j}{m}x^{n-j}\right)\\[1ex]
&=\sum_{j=i}^{n}\binom{n}{j}\binom{j}{i}x^{n-j}\qquad\qquad\qquad\qquad\qquad\qquad\blacksquare\end{align}$$
A: As your question currently stands, note that proving what you want is equivalent to proving $\displaystyle \sum_{j=i}^{n}\binom{n}{j}\binom{j}{i}x^{n-j}= \binom{n}{i}(1+x)^{n-i}$
Also note that both the LHS and RHS are polynomials in $x$ of degree $n-i$, so if we want to prove the above $\forall \, x \in \mathbb{R}$, it suffices to prove it $\forall \, x \in \mathbb{N}$ since two polynomials each of degree $d$ agreeing on at least $d+1$ values, agree everywhere.
Let $x \in \mathbb{N}$ and let $[x]$ denote the set $\{1,2 \ldots, x\}.$ Here's a combinatorial proof via double-counting.
The RHS counts the number of ways of choosing a core committee of $i$ members out of $n$ people while labeling the $n-i$ non-core committee members with a label from $[x+1].$ This can be done in $\displaystyle \binom{n}{i}(x+1)^{n-i}$ ways.
Alternatively, for a fixed $j, \, i\leq j \leq n,$ from a group of $n$ people, first choose a $j$-member committee in $\displaystyle \binom{n}{j}$ ways and from this committee choose an $i$-member core committee in $\displaystyle \binom{j}{i}$ ways such that the $n-j$ non-committee members get a label from $[x]$ in $x^{n-j}$ ways and the $j-i$ committee members who are $\textbf{not}$ on the core committee get the label $(x+1).$ This can be done in $\displaystyle \binom{n}{j}\binom{j}{i}x^{n-j}$ ways.
Summing over $j = i, i+1, \ldots, n$ covers all possible sizes of the intermediate committee from which an $i$-member core committee can be chosen, and for each value of $j,$ exactly $n-j+j-i = n-i$ people carry a label from $[x+1]$, just as in the RHS scenario.
The desired identity follows.
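Addendum: before hunting for a proof, the identity can also be sanity-checked by brute force. A minimal Python sketch (the ranges $n \le 8$ and the integer values of $x$ are arbitrary choices for the test):

```python
from math import comb

def lhs(n, i, x):
    # sum_{j=i}^{n} C(n,j) C(j,i) x^(n-j)
    return sum(comb(n, j) * comb(j, i) * x**(n - j) for j in range(i, n + 1))

def rhs(n, i, x):
    # C(n,i) (1+x)^(n-i)  ==  C(n,i) (1/x + 1)^(n-i) x^(n-i)
    return comb(n, i) * (1 + x)**(n - i)

assert all(lhs(n, i, x) == rhs(n, i, x)
           for n in range(9)
           for i in range(n + 1)
           for x in range(1, 6))
print("identity holds for all tested n, i, x")
```

Since both sides are polynomials in $x$ of degree $n-i$, agreement at these integer points for each fixed $n, i$ already forces agreement for all real $x$, by the same degree argument used in the last answer.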
Ferdinand Maximilian Brokoff (born 12 September 1688 in Červený Hrádek near Chomutov, died 8 March 1731 in Prague) was a Czech Baroque sculptor, son of the sculptor Jan Brokoff.

Biography

Born as the second son of Jan Brokoff and his wife Eliza, he showed at a young age a sculptural talent comparable to that of his older brother Michał Jan and of his father Jan. At first he assisted his father in his workshop; from 1708 he worked independently, and by the age of 22 he was already held in such esteem that his sculptures were placed on Charles Bridge in Prague.

Around 1714 he began collaborating with the Austrian architect Johann Fischer von Erlach and moved to Vienna, while continuing to accept commissions in Prague. In Vienna he worked on the Church of St. Charles Borromeo; he also worked in Silesia, in Wrocław (where he executed the supraportes, the personifications of the Church and the Synagogue, and the figures of Moses and Aaron in the Electoral Chapel at Wrocław Cathedral). Because of his poor health (advancing tuberculosis) he returned to Prague.

Despite this, in the 1720s he created such works as the Marian Column (1726) on Hradčany Square (Hradčanské náměstí), as well as 13 sculptures for a Calvary on the stairs to the New Castle, a project that was never realized. Owing to his illness, toward the end of his life he was forced to rely on his pupils, himself producing only the designs and models.

Brokoff's crowning achievement is the monastery church in Krzeszów, where the sculptural decoration he designed is regarded as one of the finest examples of a harmonious union of monumental painting and sculpture. The sculptor carried out only part of the design himself; after his death the work was completed by Antoni Dorazil.

Statues on Charles Bridge in Prague
\section{Introduction}
Semimetals are materials which can support gapless quasiparticle excitations in two or three dimensions, in the vicinity of isolated band touching points in the Brillouin zone, thus possessing discrete Fermi points (rather than Fermi surfaces). They come in different varieties, for example, the Fermi points may appear at linear band crossings ({\it e.g.} graphene, Weyl semimetals), or at quadratic band crossings \cite{abrikosov,balents-moon} ({\it e.g.} Luttinger semimetals). A more non-trivial example of such semimetal is the double-Weyl semimetal, which consists of two bands touching each other linearly along one momentum direction, but quadratically along the remaining directions \cite{pardo,pardo2,montambaux,hasegawa,kush-ips}. Some of these three-dimensional (3d) semimetals ({\it e.g.} Weyl and double-Weyl semimetals) possess a nonzero Berry curvature at the Fermi nodes. In this paper, we focus on the 3d double-Weyl semimetals \cite{balents-moon,fang,bernevig}, which, in the momentum space, have double the monopole charge of Weyl semimetals.
A double-Weyl semimetal can be realized by applying a Zeeman field to an isotropic Luttinger semimetal \cite{balents-moon}. They are also predicted to appear \cite{bernevig,Huang1180,fang} in SrSi$_2$, and in the ferromagnetic phase of HgCr$_2$Se$_4$. Our aim is to study the circular photogalvanic effect (CPGE), also known as chiral photocurrent. The CPGE refers to
the dc current that is generated as a result of shining circularly polarized light on the surface of an optically active metal \cite{claudio,sipe,joel-prl,nastos}. More precisely, the CPGE refers to the part of the photocurrent that switches sign under a reversal of the helicity of the incident polarized light. This is a non-linear response, as it is second order in the applied ac electric field, and at low frequencies it depends on the orbital Berry phase of the Bloch electrons. Hence, the CPGE is a measure of the topological charge at a Fermi node possessing a nontrivial Berry curvature.
The quantization of the CPGE has been demonstrated in earlier works for the topological Weyl nodes \cite{grushin,nagaosa}. In this paper, we will consider the issue of quantization of CPGE for the double-Weyl nodes.
Firstly, we will show that in the absence of interactions, the CPGE is indeed proportional to the topological charge of the node at low enough frequencies. Secondly, we will examine the effect of Hubbard interactions on this quantized value.
\section{The continuum Hamiltonian for a double-Weyl semimetal}
The Hamiltonians describing a pair of double-Weyl nodes can be written in the form \cite{balents-moon,fang,bernevig}
\begin{equation}
\label{eqham}
\mathcal{H}_\pm = \mathbf{b}_\pm(\mathbf{k}) \cdot \boldsymbol{\sigma},
\end{equation}
with
\begin{equation}
\label{eqham1}
\mathbf{b}_\pm(\mathbf{k}) = \left( \begin{array}{c} -\frac{\sqrt{3}}{2}\left (k_x^2-k_y^2\right ) \\
\sqrt{3} \,k_x\, k_y \\ \mp v\, k_z \end{array}\right) .
\end{equation}
Here, $\sigma_{i}$ $\left (i= x, y, z\right )$ are the three Pauli matrices, and the ``$\pm$'' sign reflects the two opposite chiralities of the two nodes.
The energy eigenvalues are:
\begin{eqnarray}
E_{\pm}(\mathbf k)=\pm \sqrt{v^2 \,k_z^2 + \frac{3}{4}(k_x^2+ k_y^2)^2}\,.
\end{eqnarray}
For each of the given two-band Hamiltonians, we can define a $U(1)$
Berry curvature, which is analogous to a magnetic field in momentum space. This Berry curvature is given by:
\begin{equation}
\label{eqberry}
\mathcal{B}_\pm^i = \frac{1}{8\pi}\,\epsilon^{ijl}\, \hat{b}_\pm
\cdot \partial_{ k_j} \hat{b}_\pm \times \partial_{k_l} \hat{b}_\pm\,,
\end{equation}
where $\hat{b}_\pm = \mathbf{b}_\pm/|\mathbf{b}_\pm|$. It is easy to check that this magnetic field is
divergenceless $\left( \partial_{k_j} \mathcal{B}_\pm^j=0\right) ,$ as long as it is computed in regions away from the points of
singularity where $ \mathbf{b}_\pm =0$. The band touching
point is such a singularity, where we have:
\begin{equation}
\label{eqdiv}
\partial_{k_j} \mathcal{B}^j_{\pm}(\mathbf{k}) = \pm 2\, \delta(\mathbf{k})\,.
\end{equation}
Thus each double-Weyl node is a source of two Berry flux quanta. These nodes come in pairs, sourcing
equal and opposite flux quanta, such that the sum of the
Berry flux quanta from the two double-Weyl nodes vanishes. This
is the required physical scenario, since the Brillouin zone is a closed manifold without
boundary, from which no net flux can emanate.
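As a numerical cross-check of Eq.~\eqref{eqdiv} (a sketch of ours, not part of the derivation), the monopole charge of the ``$+$'' node can be evaluated as the degree of the map $\hat{b}_+ : S^2 \to S^2$ over any closed surface enclosing the node. Here we take a unit sphere in momentum space, finite-difference derivatives, and the arbitrary value $v = 1$ (the quantized result does not depend on it):

```python
import numpy as np

v = 1.0  # velocity along k_z; the quantized charge is independent of v

def bhat(theta, phi):
    """Unit vector b_+/|b_+| of Eq. (2), on a unit sphere in k-space."""
    kx = np.sin(theta) * np.cos(phi)
    ky = np.sin(theta) * np.sin(phi)
    kz = np.cos(theta)
    b = np.array([-np.sqrt(3) / 2 * (kx**2 - ky**2),
                  np.sqrt(3) * kx * ky,
                  -v * kz])
    return b / np.linalg.norm(b)

def monopole_charge(n_theta=100, n_phi=100, h=1e-5):
    """Degree of bhat: S^2 -> S^2, i.e. the enclosed Berry monopole charge."""
    thetas = (np.arange(n_theta) + 0.5) * np.pi / n_theta   # midpoint rule in theta
    phis = np.arange(n_phi) * 2 * np.pi / n_phi             # periodic rule in phi
    dA = (np.pi / n_theta) * (2 * np.pi / n_phi)
    total = 0.0
    for th in thetas:
        for ph in phis:
            n0 = bhat(th, ph)
            dth = (bhat(th + h, ph) - bhat(th - h, ph)) / (2 * h)
            dph = (bhat(th, ph + h) - bhat(th, ph - h)) / (2 * h)
            total += np.dot(n0, np.cross(dth, dph)) * dA
    return total / (4 * np.pi)

print(round(monopole_charge()))  # -> 2 for the node of "+" chirality
```

The same routine applied to $\hat{b}_-$ (flipping the sign of the $k_z$ component) returns $-2$, consistent with the two nodes sourcing equal and opposite Berry flux.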
\section{Quantization of CPGE in the absence of interactions}
\label{non-interacting}
The CPGE tensor is defined as \cite{grushin,nagaosa}:
\begin{align}
\beta_\pm^{ij} &=\frac{\mathrm{i} \, \pi\,e_A^3} {h^2}
\int d^3k \left[ \partial_{k_i} \left( E_{+} - E_{-} \right) \right]
\mathcal{B}_{\pm}^j \,\delta\left( \hbar \,\omega - E_{+} + E_{-} \right),
\end{align}
where $e_A$ is the electric charge.
To perform the integrals, we change variables as follows:
\begin{align}
& k_r = \sqrt{\mathcal{R} \, \sin \theta }\,,
\quad k_z =\frac{\sqrt{3} \,\mathcal{R} \cos \theta } {2 \,v}\,,\nonumber \\
& k_x = k_r \cos \phi \,, \quad
k_y = k_r \sin \phi \,, \nonumber \\
&\text{ where }
0 \leq \mathcal{R} \leq \infty\,,\,\,
0 \leq \theta \leq \pi \text{ and }
0 \leq \phi \leq 2\,\pi \,.
\end{align}
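Note that under this substitution the energy difference collapses to $E_{+} - E_{-} = \sqrt{3}\,\mathcal{R}$, so the $\delta$-function simply pins $\mathcal{R} = \hbar\omega/\sqrt{3}$. A quick symbolic check (a sympy sketch of ours, with our own variable names):

```python
import sympy as sp

R, theta, phi, v = sp.symbols('R theta phi v', positive=True)

# change of variables used in the text
kr = sp.sqrt(R * sp.sin(theta))
kz = sp.sqrt(3) * R * sp.cos(theta) / (2 * v)
kx, ky = kr * sp.cos(phi), kr * sp.sin(phi)

# E_+^2 = v^2 k_z^2 + (3/4)(k_x^2 + k_y^2)^2 should reduce to (3/4) R^2
Eplus_sq = v**2 * kz**2 + sp.Rational(3, 4) * (kx**2 + ky**2)**2
assert sp.simplify(Eplus_sq - sp.Rational(3, 4) * R**2) == 0
# hence E_+ - E_- = 2 E_+ = sqrt(3) R, and the delta-function fixes R
print("E_+ - E_- = sqrt(3) R verified")
```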
Using the above, we get:
\begin{align}
\beta_\pm^{11} =\beta_\pm^{22}=\beta_\pm^{33}= \left( \pm 2 \right) \times \frac{\mathrm{i} \, \pi\,e_A^3} {3\,h^2} \,.
\end{align}
All non-diagonal components $\left( \beta_\pm^{ij} \big \vert_{i \neq j}\right)$ evaluate to zero. Clearly, we see that
\begin{align}
\text{tr}[\beta_\pm] = \left( \pm 2 \right) \times \frac{\mathrm{i} \, \pi\,e_A^3} {h^2} \,,
\end{align}
where $\pm 2$ is the monopole charge of the corresponding double-Weyl node.
The time derivative of the injection current is defined as the second-order response
\begin{align}
\frac{dj_i^\pm}{dt} = \beta_\pm^{ij} \left[ \mathbf{E} (\omega) \times \mathbf{E}^* (\omega) \right ]_j\,,
\label{cpgej}
\end{align}
to an electric field $ \mathbf{E} (\omega) = \mathbf{E}^* (-\omega)$. Therefore, the CPGE is also quantized.
Now let us compute the second-order photocurrent from the field-theoretic definition, using Feynman diagrams.
Firstly, we need the three components of the paramagnetic current operator (using $ \mathcal{J}_i^\pm (\mathbf k) \equiv e_A\, \frac{\delta\mathcal{H}_\pm (\mathbf k)}{\delta k_i}$), which are given by:
\begin{align}
& \mathcal{J}_x(\mathbf k)
= e_A\, \sqrt{3} \left(- k_x \,\sigma_x + k_y\,\sigma_y \right) \,,\nonumber \\
& \mathcal{J}_y(\mathbf k) = e_A\, \sqrt 3 \left( k_y \,\sigma_x + k_x\,\sigma_y \right) \,,\nonumber \\
& \mathcal{J}_z^\pm(\mathbf k) = \mp e_A\, v\, \sigma_z \,.
\end{align}
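As a sanity check (a sketch under assumptions, not part of the paper), these operators follow from $\mathcal{J}_i^\pm = e_A\,\partial_{k_i}\mathcal{H}_\pm$ if one assumes the double-Weyl Hamiltonian $\mathcal{H}_\pm = \tfrac{\sqrt{3}}{2}\,(k_y^2 - k_x^2)\,\sigma_x + \sqrt{3}\,k_x k_y\,\sigma_y \mp v\,k_z\,\sigma_z$; finite differences confirm this for the $+$ node:

```python
import numpy as np

V, EA = 0.8, 1.0  # arbitrary velocity and charge for the check
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def hamiltonian(kx, ky, kz):
    # Assumed Hamiltonian of the charge +2 double-Weyl node
    return np.sqrt(3.0) * ((ky**2 - kx**2) / 2 * SX + kx * ky * SY) - V * kz * SZ

def J_numeric(i, k, h=1e-6):
    # Central difference e_A * dH/dk_i (exact up to rounding for a quadratic H)
    dk = np.zeros(3)
    dk[i] = h
    return EA * (hamiltonian(*(k + dk)) - hamiltonian(*(k - dk))) / (2 * h)

k = np.array([0.4, -0.9, 0.3])
assert np.allclose(J_numeric(0, k), EA * np.sqrt(3.0) * (-k[0] * SX + k[1] * SY))
assert np.allclose(J_numeric(1, k), EA * np.sqrt(3.0) * (k[1] * SX + k[0] * SY))
assert np.allclose(J_numeric(2, k), -EA * V * SZ)
```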
From now on, we will drop the ``$\pm$" subscript/superscript and concentrate only on the double-Weyl node with charge $+2$, unless stated otherwise. This is justified because the dc contribution to the photocurrent can be calculated separately for each node, as long as the nodes are well separated in momentum space.
\begin{figure}[htb]
{\includegraphics[width = 0.15 \textwidth]{current1}}
\caption{Feynman diagram contributing to the quantized circular photogalvanic effect in the absence of interactions.}
\label{fig-bareresponse}
\end{figure}
The expression for the second-order photocurrent is given by:
\begin{align}
j_i(\Omega) &= -\frac{ \chi_1^{ jli}(\omega_1, \omega_2)
+ \chi_2^{jli}(\omega_1, \omega_2)} {\hbar^2} \,A^{j} (\omega_1)\, A^{l}(\omega_2)
\nonumber \\
&= \frac{ \chi_1^{ jli}(\omega_1, \omega_2)
+ \chi_2^{jli}(\omega_1, \omega_2)} {\hbar^2\,\omega_1 \, \omega_2} \,E^{j} (\omega_1)\, E^{l}(\omega_2) \,,
\label{currentgeneral}
\end{align}
where $\Omega \equiv \omega_1 + \omega_2,$ and the contributions $\chi_1^{ jli}$ and $\chi_2^{jli}$ are given by Feynman diagrams of the type shown in Fig.~\ref{fig-bareresponse}. In the second line, we have used the relation between the electric field and the vector potential, which is: $ \mathbf{E} (\omega) = \mathrm{i} \, \omega\,\mathbf{A} (\omega)$.
\begin{widetext}
We compute the analytical expressions for $\chi_{1,2}^{ jli}$ in the Matsubara formalism, such that
\begin{align}
\chi^{ijl}_1(\mathrm{i} \,\omega_1, \mathrm{i} \, \omega_2)
= T\sum_{\varepsilon_n} \int \frac{d^3 k}{(2\pi)^3} \text{tr} \left[ \mathcal{J}_i \,G(\mathrm{i} \,\varepsilon_n - \mathrm{i} \,\omega_1, {\bf k})\, \mathcal{J}_j\, G(\mathrm{i} \,\varepsilon_n
- \mathrm{i} \, \Omega, {\bf k})\, \mathcal{J}_l\,G(\mathrm{i} \,\varepsilon_n, {\bf k})\right],
\end{align}
where $T$ is the temperature, $n$ is an integer, and $\varepsilon_n = \left (2\,n+1 \right) \pi \, T$.
\end{widetext}
In the zero temperature limit, we can use $T \sum \limits_{\varepsilon_n}\ldots \to \int \frac{d\varepsilon} {2\,\pi} \ldots\,.$
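This replacement can be illustrated on a simple two-pole example (a toy check, not from the paper): for $f(\mathrm{i}\,\varepsilon) = \left[(\mathrm{i}\,\varepsilon - a)(\mathrm{i}\,\varepsilon - b)\right]^{-1}$ with $a > 0 > b$, both the Matsubara sum at low $T$ and the contour integral give $-1/(a-b)$:

```python
import numpy as np

a, b = 1.3, -0.7  # toy pole positions on opposite sides of zero
T = 1e-3          # low temperature

def f(ieps):
    return 1.0 / ((ieps - a) * (ieps - b))

# Fermionic Matsubara sum: T * sum_n f(i eps_n), with eps_n = (2n + 1) pi T
n = np.arange(-200_000, 200_000)
eps_n = (2 * n + 1) * np.pi * T
matsubara_sum = T * np.sum(f(1j * eps_n))

exact = -1.0 / (a - b)  # T -> 0 limit, equal to int deps/(2 pi) f(i eps)
print(matsubara_sum.real, exact)  # both close to -0.5
```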
Furthermore, from the expression for $ \chi^{ijl}_1(\mathrm{i} \,\omega_1, \mathrm{i} \, \omega_2) $, we can obtain $\chi_{2}^{ jli}$ by using the relation:
\begin{align}
\chi^{ijl}_2( \mathrm{i} \, \omega_1 , \mathrm{i} \, \omega_2) = \chi^{jil}_1( \mathrm{i} \, \omega_2, \mathrm{i} \, \omega_1)\,.
\label{eqchi12}
\end{align}
In the absence of interactions, we can calculate the contributions from each node separately. The Green's function for the first double-Weyl node is given by:
\begin{align}
G(\mathrm{i} \,\varepsilon_n , \mathbf k) = \frac{1}{2}
\left[\frac{ \mathbb{1} + \hat{b}_+ (\mathbf k)\cdot \mathbf{\sigma}}
{ \mathrm{i} \, \varepsilon_n -E_+(\mathbf{k}) -|\mu|}
+ \frac{ \mathbb{1} - \hat{b}_+ (\mathbf k) \cdot \mathbf{\sigma}}
{\mathrm{i} \,\varepsilon_n + E_+(\mathbf{k}) -|\mu|} \right],
\end{align}
where we have introduced the projectors $\left( \mathbb{1} \pm \hat{b}_+ (\mathbf k)\cdot \mathbf{\sigma} \right)$ onto the conduction (``+'') and the valence (``-'') bands, and have chosen the chemical potential $\mu $ to be negative for definiteness ({\it i.e.} $\mu < 0$).
Similarly, the Green's function for the second double-Weyl node is given by:
\begin{align}
\tilde G(\mathrm{i} \,\varepsilon_n , {\bf k}) = \frac12 \left[\frac{\mathbb{1} + \hat{b}_- (\mathbf k)\cdot \mathbf{\sigma}}
{ \mathrm{i} \, \varepsilon_n - E_+(\mathbf{k}) +|\tilde \mu|}
+ \frac{ \mathbb{1} - \hat{b}_- (\mathbf k) \cdot \mathbf{\sigma}} {\mathrm{i} \,\varepsilon_n
+ E_+(\mathbf{k}) + |\tilde \mu|} \right],
\end{align}
where we have chosen $\tilde \mu> 0 $ for definiteness.
\begin{widetext}
Performing all the integrals, we finally get:
\begin{align}
& \chi^{123}_1(\mathrm{i} \,\omega_1, \mathrm{i} \, \omega_2) = \int \frac{d\varepsilon \,d^3 k}{(2\,\pi)^4} \text{tr} \left[ \mathcal{J}_x \,G(\mathrm{i} \,\varepsilon - \mathrm{i} \,\omega_1, {\bf k})\, \mathcal{J}_y\, G(\mathrm{i} \,\varepsilon
- \mathrm{i} \, \Omega, {\bf k})\, \mathcal{J}_z\,G(\mathrm{i} \,\varepsilon, {\bf k})\right] \nonumber\\
& =
\frac{e_A^3 \left [ \omega_1^3 \left ( \omega_1+2 \,\omega_2 \right ) \ln \left(4 \,\mu^2
+ \omega_1^2\right)- \omega_2^3 (2 \,\omega_1+\omega_2)\, \ln \left(4 \,\mu^2+\omega_2^2\right)+(\omega_2-\omega_1) (\omega_1+\omega_2)^3 \, \ln \left(4 \,\mu^2+(\omega_1+\omega_2)^2\right) \right ]}
{24 \,\pi ^2 \, \omega_1 \, \omega_2 \left (\omega_1+\omega_2 \right )}
\end{align}
for $T \rightarrow 0 \,.$
\end{widetext}
One can check that $\chi^{ijl}_1 \propto \varepsilon_{ijl}$, and hence the computation of $ \chi^{123}_1$ is sufficient to know all the nonzero components of $ \chi^{ijl}_1 $.
We need to find the physical response through the analytical continuation of the above expressions from Matsubara frequencies to real frequencies. This is a subtle procedure which should be carried out carefully. Choosing $\omega_{1,2} > 0$ for definiteness, the analytical continuation is performed by taking \cite{lopes,kozii}
\begin{align}
\mathrm{i}\, \omega_{1,2} \to \omega_{1,2} + \mathrm{i}\,\delta\,, \qquad \delta \to +0\,.
\label{analyticalcontinuation}
\end{align}
The logarithms then transform according to
\begin{align}
& \ln\left[4\,\mu^2 + \omega^2 \right]
\nonumber \\ &
\to \ln\left[4\,\mu^2 - (\omega + \mathrm{i}\, \delta)^2 \right]
\nonumber \\ &
\qquad = \ln |4\,\mu^2 - \omega^2|- \mathrm{i} \, \pi\, \text{sign}( \omega) \, \Theta\big(|\omega| - 2\,|\mu|\big) \,.
\end{align}
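This branch prescription is easy to verify with complex arithmetic (a quick check, not from the paper; $\mu = 0.5$ and the test frequencies are arbitrary):

```python
import numpy as np

mu, delta = 0.5, 1e-9  # arbitrary chemical potential; delta -> +0

def continued_log(w):
    # ln[4 mu^2 - (w + i delta)^2], i.e. the logarithm after i w -> w + i delta
    return np.log(4 * mu**2 - (w + 1j * delta) ** 2)

def predicted(w):
    # ln|4 mu^2 - w^2| - i pi sign(w) Theta(|w| - 2 |mu|)
    re = np.log(abs(4 * mu**2 - w**2))
    im = -np.pi * np.sign(w) * (abs(w) > 2 * abs(mu))
    return re + 1j * im

# Below threshold (|w| < 2|mu|) the log stays real; above it picks up -/+ i pi
for w in [0.3, -0.3, 2.0, -2.0]:
    assert np.isclose(continued_log(w), predicted(w), atol=1e-6)
```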
We then need to set $ \omega_1 = \Omega- \omega_2 $ with $\Omega \rightarrow 0$.
After the analytical continuation, we find that in this limit,
\begin{align}
\chi_1^{123}( \omega + \Omega, -\omega)
\overset{\Omega \rightarrow 0 } {=}
-\frac{e_A^3\, \omega^2}{ 12 \,\pi\,\Omega} \, \Theta \big(\omega - 2\,|\mu| \big ) \,. \label{chi1answer}
\end{align}
An identical contribution comes from
$\chi_2^{123}$ (on using Eq.~(\ref{eqchi12})). Adding these together, we find that the current expression in Eq.~(\ref{currentgeneral}) reduces to:
\begin{align}
j_l = \frac{2\,\pi\,e_A^3} {3\,h^2\,\Omega}\, \varepsilon_{ijl} \, E^{i} (\omega + \Omega) \,E^{j}(-\omega)
\,\Theta\big(\omega - 2\,|\mu |\big) \,.
\end{align}
In the time domain, this corresponds to
\begin{align}
& \frac{d j_i} {dt} = \frac{\mathrm{i} \, \beta_0(\omega)} {3}
\left[ \mathbf{E}(\omega) \times \mathbf{E}(-\omega) \right]_{i} \,,
\nonumber \\
& \beta_0(\omega) \equiv \frac{2\,\pi\,e_A^3 \,\Theta\big(\omega - 2\,|\mu|\big)} { h^2} \,.
\label{djbydt}
\end{align}
This agrees with Eq.~(\ref{cpgej}).
\begin{figure}[htb]
\subfigure[]{\includegraphics[width = 0.14 \textwidth]{hub1}} \quad
\subfigure[]{\includegraphics[width = 0.14 \textwidth]{hub2}}\quad
\subfigure[]{\includegraphics[width = 0.14 \textwidth]{hub3}}
\subfigure[]{\includegraphics[width = 0.14 \textwidth]{hub4}}\quad
\subfigure[]{\includegraphics[width = 0.14 \textwidth]{hub5}}
\caption{Feynman diagrams contributing to the scattering processes for Hubbard interactions, described by Eq.~(\ref{Hintgeneral}). Here, a solid line represents the Green's function of the first node (with chemical potential $\mu$), while a dashed line represents the Green's function of the second node (with chemical potential $\tilde \mu$). The wavy lines represent the four-fermion interactions. Hence, diagrams (a)-(c) involve only intranodal scatterings, whereas (d)-(e) describe internodal processes.}
\label{Fig:scattering}
\end{figure}
This result from the non-interacting case has been obtained for the first double-Weyl node with the chemical potential $\mu$. Analogously, for the second node, we would obtain:
\begin{align}
\tilde \beta_{0}(\omega) = - \frac{2\,\pi\,e_A^3 \,\Theta\big(\omega - 2\,|\tilde \mu|\big)} { h^2} \, .
\end{align}
Consequently, in the frequency range $2\,|\mu| < \omega < 2 \,|\tilde \mu|$, only the first node contributes to the CPGE, while the contribution from the second node is zero due to Pauli blocking.
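The resulting frequency window can be summarized in a few lines (an illustration in units $e_A = h = 1$, with arbitrary values $\mu = -0.2$ and $\tilde\mu = 0.3$; all numerical values are assumptions): the total CPGE coefficient sits at the quantized value for $2|\mu| < \omega < 2|\tilde\mu|$ and cancels between the two nodes above $2|\tilde\mu|$:

```python
import numpy as np

E_A = H = 1.0         # units with e_A = h = 1 (illustrative assumption)
MU, MU_T = -0.2, 0.3  # chemical potentials at the two nodes (illustrative)

def beta0(w, mu):
    # Quantized single-node contribution: 2 pi e_A^3 / h^2 * Theta(w - 2|mu|)
    return 2.0 * np.pi * E_A**3 / H**2 * (w > 2.0 * abs(mu))

def beta_total(w):
    # The two double-Weyl nodes contribute with opposite signs
    return beta0(w, MU) - beta0(w, MU_T)

print(beta_total(0.5) / (2 * np.pi))  # 1.0: quantized inside 0.4 < w < 0.6
print(beta_total(1.0) / (2 * np.pi))  # 0.0: both nodes active, contributions cancel
print(beta_total(0.2) / (2 * np.pi))  # 0.0: both nodes Pauli blocked
```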
\section{Corrections to the quantized CPGE due to short-ranged Hubbard interactions}
\label{sechubbard}
In this section, we consider the first-order perturbative corrections originating from four-fermion interactions. The interaction Hamiltonian for short-ranged Hubbard interactions is given by:
\begin{align}
&H_{\text{int}} \nonumber \\
=& \frac{-\lambda}{2}
\sum \limits _{s, s'}
\int \frac{d^3 k \, d^3 p}{(2\pi)^6}
\Big [ \sum \limits _{\zeta ,\eta =1}^{2} \psi^\dagger_{ \zeta,s}(\mathbf k)\,
\psi_{\zeta,s}(\mathbf k)\, \psi^\dagger_{\eta,s'}(\mathbf p) \psi_{\eta, s'}(\mathbf p) \nonumber \\
& \hspace{2.7 cm}
+ \sum \limits _{\zeta =1}^{2}
\psi^\dagger_{\zeta,s}(\mathbf k)\, \psi_{\bar \zeta,s}({\bf k})\, \psi^\dagger_{\bar \zeta,s'}(\mathbf p)
\, \psi_{\zeta,s'}({\bf p}) \Big] \, ,
\label{Hintgeneral}
\end{align}
where $\lambda $ is the Hubbard interaction strength (positive $\lambda$ corresponds to the attractive interaction), and $\psi_{\zeta,s}(\mathbf k)$ denotes the fermion field with nodal index $\zeta$ and pseudospin index $s$. The first and the second terms describe the intranodal and internodal scattering processes respectively. These are shown diagrammatically in Fig.~\ref{Fig:scattering}. In the diagrams, we have used a solid line to represent the Green's function for the first double-Weyl node, and a dashed line to depict the Green's function for the second double-Weyl node. In the following subsections, we will compute the first-order self-energy and vertex corrections due to the Hubbard interactions.
\subsection{First-order self-energy corrections}
\begin{figure}[htb]
\subfigure[]{\includegraphics[width = 0.225 \textwidth]{self1}} \hspace{0.3 cm}
\subfigure[]{\includegraphics[width = 0.225 \textwidth]{self2}}
\subfigure[]{\includegraphics[width = 0.1 \textwidth]{self3}} \hspace{2.5 cm}
\subfigure[]{\includegraphics[width = 0.1 \textwidth]{self4}}
\caption{Feynman diagrams contributing to first-order corrections to the self-energy. Diagrams (a) and (c) depict the intranodal scatterings, while diagrams (b) and (d) describe the internodal scatterings.}
\label{Fig:self-energy}
\end{figure}
The contributions to the first-order self-energy correction are given by the Feynman diagrams shown in Fig.~\ref{Fig:self-energy}. For the short-ranged Hubbard interaction, scatterings between double-Weyl nodes of opposite chiralities have to be taken into account, which are given by the second term of Eq.~(\ref{Hintgeneral}). The analytic expression for Fig.~\ref{Fig:self-energy}(a) reads as:
\begin{align}
\Sigma^{(a)} &= \lambda \,T \sum \limits_{\varepsilon_n}\int \frac{ d^3 k}{(2\,\pi)^3}\,
G(\mathrm{i}\, \varepsilon_n, \mathbf k)
\nonumber \\
& \overset{T \rightarrow 0 } {=} -\frac{\lambda}2
\int \frac{d^3 k}{(2\pi)^3} \left[ 1 - \Theta( E_+ - |\mu|) \right] = - \frac{\lambda \,N_h} {2} \,,
\end{align}
where $N_h > 0$ is the number of holes below the double-Weyl point in the first node.
In a similar fashion, the contribution from Fig.~\ref{Fig:self-energy}(b) evaluates to:
\begin{align}
\Sigma^{(b)} &= \lambda \,T \sum \limits_{\varepsilon_n}\int \frac{ d^3 k}{(2\,\pi)^3}\, \tilde G(\mathrm{i}\, \varepsilon_n, \mathbf k)
= \frac{\lambda \, N_e}{2} \,,
\end{align}
with $N_e > 0$ denoting the number of electrons above the double-Weyl point in the second node.
Finally, the contributions from Figs.~\ref{Fig:self-energy}(c) and \ref{Fig:self-energy}(d) evaluate to:
\begin{align}
\Sigma^{(c)} &= -\lambda \,T \sum \limits_{\varepsilon_n}\int \frac{ d^3 k}{(2\,\pi)^3}\,
\text{tr} \left[G(\mathrm{i}\, \varepsilon_n, \mathbf k) \right] = -2 \,\Sigma^{(a)}\,, \nonumber \\
\Sigma^{(d)} & = -\lambda \,T \sum \limits_{\varepsilon_n}\int \frac{ d^3 k}{(2\,\pi)^3}\,
\text{tr} \left[\tilde G(\mathrm{i}\, \varepsilon_n, \mathbf k) \right]= -2 \,\Sigma^{(b)}\,,
\end{align}
resulting in the total self-energy
\begin{align}
\Sigma = \Sigma^{(a)} + \Sigma^{(b)} + \Sigma^{(c)} + \Sigma^{(d)}
= -\frac{ \lambda \left( N_e - N_h \right)} {2} \,.
\end{align}
The effect of this self-energy is to simply shift the chemical potential by an amount
\begin{align}
\delta \mu = - \Sigma = \frac{\lambda \left( N_e - N_h \right)}{2} \,.
\label{deltamu}
\end{align}
Clearly, this does not change the CPGE current, as it only modifies the frequency range where the quantized value of the CPGE is valid.
\subsection{First-order vertex corrections}
\begin{figure}[htb]
\subfigure[]{\includegraphics[width = 0.1 \textwidth]{vertex1}} \hspace{1 cm}
\subfigure[]{\includegraphics[width = 0.1 \textwidth]{vertex2}}
\caption{Feynman diagrams contributing to first-order vertex corrections. Diagrams (a) and (b) describe the intranodal and internodal scatterings, respectively.}
\label{figvertex}
\end{figure}
The Feynman diagrams contributing to first-order vertex corrections are shown in Fig.~\ref{figvertex}.
When the vertex $i=x$, with the external Matsubara frequency set to $\omega_1$ for definiteness, Fig.~\ref{figvertex}(a) contributes as:
\begin{align}
&\sqrt{3} \left(- k_x \,\sigma_x + k_y\,\sigma_y \right) \nonumber \\
& \rightarrow \lambda \sqrt{3} \int \frac{d\varepsilon \,d^3 k}{(2\,\pi)^4} \,
G(\mathrm{i}\, \varepsilon, \mathbf{k}) \left( k_y\,\sigma_y - k_x \,\sigma_x \right)
G( \mathrm{i}\, \varepsilon - \mathrm{i}\,\omega_1, \mathbf{k})
\nonumber \\ & = 0\,.
\end{align}
Similarly, for $i=y$, Fig.~\ref{figvertex}(a) gives:
\begin{align}
&\sqrt{3} \left( k_y \,\sigma_x + k_x\,\sigma_y \right) \nonumber \\
& \rightarrow \lambda \sqrt{3} \int \frac{d\varepsilon \,d^3 k}{(2\,\pi)^4} \,
G(\mathrm{i}\, \varepsilon, \mathbf{k}) \left( k_y \,\sigma_x + k_x\,\sigma_y \right)
G( \mathrm{i}\, \varepsilon - \mathrm{i}\,\omega_1, \mathbf{k})
\nonumber \\
& = 0\,.
\end{align}
The only non-vanishing contribution from Fig.~\ref{figvertex}(a) comes for $i=z$, which gives:
\begin{align}
& -v\,\sigma_z \nonumber \\
& \rightarrow - \lambda \,v \int \frac{d\varepsilon \,d^3 k}{(2\,\pi)^4} \,
G(\mathrm{i}\, \varepsilon, \mathbf{k}) \,\sigma_z\,
G( \mathrm{i}\, \varepsilon - \mathrm{i}\,\omega_1, \mathbf{k}) \nonumber \\
& = \lambda \frac{6\, \Lambda -\sqrt{3} \left[4 \, |\mu|+ \mathrm{i}\, \omega_1
\ln \left(-\frac{\left(\sqrt{3} \Lambda
+ \mathrm{i}\, \omega_1\right) (\omega_1+2 \,\mathrm{i}\, |\mu|)} {\left(\sqrt{3} \Lambda
-\mathrm{i}\, \omega_1\right) (\omega_1-2 \,\mathrm{i}\, |\mu| )}\right)\right ]\sigma_z}
{192 \,\pi }\,,
\end{align}
where $\Lambda$ is the UV momentum cutoff.
The contribution from the diagram in Fig.~\ref{figvertex}(b) is analogous, but has an overall opposite sign due to the opposite chirality of the second node, and with $ |\mu| \rightarrow |\tilde \mu|$:
\begin{align}
& v\,\sigma_z \nonumber \\
& \rightarrow
\lambda \,v \int \frac{d\varepsilon \,d^3 k}{(2\,\pi)^4} \,
\tilde G(\mathrm{i}\, \varepsilon, \mathbf{k}) \,\sigma_z\,
\tilde G( \mathrm{i}\, \varepsilon - \mathrm{i}\,\omega_1, \mathbf{k}) \nonumber \\
& =\lambda \frac{-6 \,\Lambda + \sqrt{3} \left[4 \, |\tilde \mu| - \mathrm{i}\, \omega_1
\ln \left(-\frac{\left(\sqrt{3} \Lambda + \mathrm{i}\, \omega_1\right)
(\omega_1+2 \,\mathrm{i}\, |\tilde \mu|)}
{\left(\sqrt{3} \Lambda -\mathrm{i}\, \omega_1\right) (\omega_1-2 \,\mathrm{i}\, | \tilde \mu| )}\right)\right ]\sigma_z}
{192 \,\pi }\,.
\end{align}
Adding these two contributions together, we find that for the first node, the vertex with $\sigma_z$ (and external frequency $\omega_1$) is renormalized according to:
\begin{align}
& -v\,\sigma_z \vert_{\text{total}} \nonumber \\
& \rightarrow
\lambda \frac{ \left[4 \left( |\tilde \mu| -|\mu| \right) + \mathrm{i}\, \omega_1 \,
\ln \left(\frac{4\, | \mu|\, |\tilde \mu| + 2\,\mathrm{i}\,\omega_1
\left( |\tilde \mu| -|\mu| \right)+\omega_1^2}
{ 4\, | \mu|\, |\tilde \mu|-2 \,\mathrm{i}\, \omega_1 \left( |\tilde \mu| -|\mu| \right)
+\omega_1^2}\right)\right ]\sigma_z}
{64\, \sqrt{3} \,\pi }\,,
\end{align}
which is finite and does not contain the UV cutoff anymore.
\begin{widetext}
This gives the correction:
\begin{align}
& \delta \chi^{123}_1(\mathrm{i} \,\omega_1, \mathrm{i} \, \omega_2)
\nonumber \\
= &
\frac{\lambda \,e_A^3 \left [ \omega_1^3 \left ( \omega_1+2 \,\omega_2 \right ) \ln \left(4 \,\mu^2
+ \omega_1^2\right)- \omega_2^3 \left(2 \,\omega_1+\omega_2 \right ) \ln \left(4 \,\mu^2+\omega_2^2\right)
+\left (\omega_2-\omega_1 \right) \left(\omega_1+\omega_2 \right )^3 \ln \left(4 \,\mu^2+(\omega_1+\omega_2)^2\right) \right ]}
{1536 \,\sqrt{3}\, \pi ^3\,v\, \omega_1 \, \omega_2 \left (\omega_1+\omega_2 \right )} \nonumber \\
& \times \left[4 \left( |\tilde \mu| -|\mu| \right)
+ \mathrm{i} \left( \omega_1+\omega_2 \right)
\ln \left(\frac{ 4\, | \mu|\, |\tilde \mu| + 2\,\mathrm{i} \left (\omega_1+\omega_2 \right ) \left( |\tilde \mu| -|\mu| \right)
+ \left( \omega_1+\omega_2 \right)^2}
{4\, | \mu|\, \, |\tilde \mu|-2 \,\mathrm{i}\left( \omega_1+\omega_2 \right) \left( |\tilde \mu| -|\mu| \right)
+ \left( \omega_1+\omega_2 \right)^2}\right)\right ].
\end{align}
\end{widetext}
Performing the analytical continuation $\mathrm{i}\, \omega_{1,2} \rightarrow \omega_{1,2} + \mathrm{i}\,\delta$, and setting $\omega_1 = \omega + \Omega$ and $\omega_2 = -\omega$, we find that this contributes as:
\begin{align}
& \delta \chi^{123}_1( \omega+\Omega, -\omega)
= \delta \chi^{213}_1( -\omega,\omega+\Omega)
\nonumber \\ &
\overset{\Omega \rightarrow 0 } {=}
\frac{ \lambda\, e_A^3\, \omega^2 \left( |\tilde \mu| -|\mu|\right)}
{ 192\, \sqrt{3} \,\pi ^2 \,v\, \Omega}
\, \Theta \big(\omega - 2\,|\mu| \big ) \,,
\end{align}
which leads to the correction
\begin{align}
& \delta\left( \frac{d j_z} {dt} \right) \nonumber \\
& =-\frac{\mathrm{i}\,\lambda\, e_A^3\,\left( |\tilde \mu| -|\mu|\right)}
{ 24 \,\sqrt{3}\,h^2 \,v}
\, \Theta \big(\omega - 2\,|\mu| \big )
\left[ \mathbf{E}(\omega) \times \mathbf{E}(-\omega) \right]_{z} ,
\end{align}
for the current in the $z-$direction.
Here, we have neglected the corrections to the chemical potentials, since they only change the frequency range within which the CPGE for the non-interacting case is nonzero.
In a similar fashion, we get:
\begin{align}
& \delta \chi^{312}_1( \omega+\Omega, -\omega)= \delta \chi^{132}_1( -\omega, \omega+\Omega)
\nonumber \\
& \overset{\Omega \rightarrow 0 } {=}
\frac{ \lambda \,e_A^3\, \omega^2 \left[ 4 \left( |\mu|- |\tilde \mu|\right)
-\omega \ln \Big | \frac{(2 \,|\mu|+\omega ) (2\,|\tilde \mu|-\omega )}
{(\omega -2 \,| \mu| ) (2 \,|\tilde \mu|+\omega )} \Big |
\right ] }
{ 768\, \sqrt{3} \,\pi ^2 \,v\, \Omega} \nonumber \\
& \hspace{ 1 cm} \times \Theta \big(\omega - 2\,|\mu| \big ) \,,
\\
& \delta\left( \frac{d j_y} {dt} \right) = \mathrm{i}\,\lambda\, e_A^3\,
\frac{ 4 \left( |\tilde \mu| -|\mu|\right)
+ \omega \ln \Big | \frac{(2 \,|\mu|+\omega ) (2\,|\tilde \mu|-\omega )}
{(\omega -2 \,| \mu| ) (2 \,|\tilde \mu|+\omega )} \Big |}
{ 96 \,\sqrt{3}\,h^2 \,v}
\nonumber \\
& \hspace{ 1.7 cm} \times \Theta \big(\omega - 2\,|\mu| \big )
\left[ \mathbf{E}(\omega) \times \mathbf{E}(-\omega) \right]_{y} \,.
\end{align}
By symmetry in the $xy-$plane, we infer that
\begin{align}
\delta\left( \frac{d j_x} {dt} \right) &= \delta\left( \frac{d j_y} {dt} \right) \,.
\end{align}
Due to the intrinsic anisotropy of the problem, it is not surprising that the correction for the current
in the $z-$direction differs from that in the $xy-$plane.
\section{Summary and outlook}
We have computed the CPGE for the double-Weyl semimetal, first in the absence of interactions and then in the presence of short-ranged Hubbard interactions. In the non-interacting case, for low-enough frequencies of the applied electric field, the CPGE gets a contribution only from one double-Weyl node, and takes a quantized value proportional to the topological charge of the corresponding node. However, switching on the Hubbard interactions destroys this quantization. This is similar to the results found for the CPGE currents in Weyl semimetals \cite{kozii}. The only difference is that the correction for the current in the $z-$direction is different from that in the $xy-$plane, due to the anisotropic dispersion of the starting Hamiltonian. These results imply that, unlike the quantum Hall effect in gapped phases or the chiral anomaly in field theories, the quantization of the CPGE in topological semimetals is not protected.
In the future, it will be interesting to look at the corrections coming from Coulomb interactions. The computations for this case will be more cumbersome than for the Weyl semimetal, due to the anisotropic dispersion of the double-Weyl Hamiltonian. It will also be interesting to study the effect of short-ranged correlated disorder on the CPGE, using well-known techniques \cite{rahul-sid,ips-rahul,ips-qbt-sc}.