# Generalizations of chip-firing and the critical group

July 8 to July 12, 2013

at the American Institute of Mathematics, San Jose, California

organized by Lionel Levine, Jeremy Martin, David Perkinson, and James Propp

## Original Announcement

This workshop will center around the abelian sandpile model and related chip-firing games on graphs, including generalizations to higher dimensions, abelian networks, and pattern formation. The abelian sandpile model is a crossroads for a wide range of mathematics: Markov processes and statistical mechanics, combinatorics, algebraic geometry, graph analogues of Riemann surfaces, and tropical geometry. This workshop will bring together mathematicians from a variety of fields---including probability theory, combinatorics, and algebraic geometry---to apply a variety of viewpoints to a few specific problems. The topics of the workshop will be:

1. Chip-firing in higher dimensions.

   Building on the work of Duval, Klivans, and Martin, we would like to develop a theory of chip-firing for general simplicial or CW-complexes. Is there a generalization of the Baker-Norine theorem to higher dimensions---perhaps a "combinatorial Hirzebruch-Riemann-Roch theorem"? Are there appropriate generalizations of the recurrent elements of the abelian sandpile model? What are the implications of a higher-dimensional theory for combinatorics?

2. Abelian networks.

   Abelian networks, proposed by Dhar and developed by Bond and Levine, are systems of communicating finite automata satisfying a certain local commutativity condition. As a model of computation, they implement asynchronous algorithms on graphs. The two most widely studied examples of abelian networks are the abelian sandpile model and the rotor-router or Eulerian walkers model. How much more general are abelian networks than these? Is there a computational hierarchy within the class of abelian networks? Is the halting problem for abelian networks decidable in polynomial time?

3. Pattern formation.

   How can one rigorously identify and classify the rich patterns that arise in identity elements of critical groups? Can the proof of existence of the sandpile scaling limit by Pegden and Smart be adapted to prove properties of the limit? Ostojic has given a heuristic, involving the conformal map $z \mapsto 1/z^2$, for the locations and features of certain sandpile patterns. Can these heuristics be converted into precise conjectures, and what tools would be required to prove these conjectures?

## Material from the workshop

A list of participants.

The workshop schedule.

A report on the workshop activities.

A list of open problems.

Papers arising from the workshop:

- Fourientations and the Tutte Polynomial, by Spencer Backman and Sam Hopkins, Res. Math. Sci. 4 (2017), 4:18. MR3696165
- Chip-firing and energy minimization on M-matrices, by Johnny Guzmán and Caroline Klivans, J. Combin. Theory Ser. A 132 (2015), 14-31. MR3311336
- Threshold state and a conjecture of Poghosyan, Poghosyan, Priezzhev and Ruelle, by Lionel Levine, Comm. Math. Phys. 335 (2015), no. 2, 1003-1017. MR3316648
- Abelian networks: foundations and examples, by Benjamin Bond and Lionel Levine, SIAM J. Discrete Math. 30 (2016), no. 2, 856-874. MR3493110
- Rotor-routing and spanning trees on planar graphs, by Melody Chan, Thomas Church and Joshua A. Grochow, Int. Math. Res. Not. IMRN 2015, no. 11, 3225-3244. MR3373049
- Critical Groups of Graphs with Dihedral Actions, by Darren Glass and Criel Merino, European J. Combin. 39 (2014), 95-112. MR3168517
- The Bernardi process and torsor structures on spanning trees, by Matthew Baker and Yao Wang
- Sandpiles, spanning trees, and plane duality, by Melody Chan, Darren Glass, Matthew Macauley, David Perkinson, Caryn Werner and Qiaoyu Yang, SIAM J. Discrete Math. 29 (2015), no. 1, 461-471. MR3319526
- G-parking functions and tree inversions, by David Perkinson, Qiaoyu Yang and Kuai Yu, Combinatorica 37 (2017), no. 2, 269-282. MR3638345
BERLIN (AP) – German prosecutors say they've arrested a 17-year-old man on suspicion he was planning an Islamic extremist bombing attack in the Frankfurt area.
Frankfurt prosecutors' spokesman Sinan Akdogan said Thursday that the German teenager was arrested by Hesse state police Sept. 1 and ordered held by a judge on suspicion of preparing a serious act of violence.
Akdogan says the suspect had instructions on how to make explosives and was trying to procure chemicals online.
It was not clear how advanced the preparations were but Akdogan says small amounts of chemicals were found during a search of the suspect's home in Florstadt, northeast of Frankfurt.
The suspect's name was withheld for privacy reasons and Akdogan said he could not give further details on the planned attack due to the ongoing investigation.
Tunas do Paraná is a town in the state of Paraná (PR), Brazil.
Q: Initializing an adjacency matrix in C++ I'm working on graph implementations in C++ and came across an implementation of an adjacency matrix that mostly made sense to me. The implementation uses an "init" function to initialize the matrix:
void init(int n) {
    numVertex = 0;
    numEdge = 0;
    mark = new int[n]; // initialize mark array
    for (int i = 0; i < numVertex; i++) {
        mark[i] = 0;
    }
    matrix = (int**) new int*[numVertex]; // make matrix
    for (int i = 0; i < numVertex; i++) {
        matrix[i] = new int[numVertex];
    }
    for (int i = 0; i < numVertex; i++) { // mark all matrix cells as false
        for (int j = 0; j < numVertex; j++) {
            matrix[i][j] = 0;
        }
    }
}
The line I'm confused about is:
matrix = (int**) new int*[numVertex]; //make matrix
What does the (int**) aspect do? Why would I choose to use this instead of matrix = new int**[numVertex];?
Thanks so much!
A: (int**)value is a C-style cast operation.
Notes:

* Don't use those in C++; they tend to cause or hide problems, like mismatches between the right and left sides of an assignment.
* The code is relatively low quality; proper C++ would rather use std::vector.
* The code is also not complete, so little can be said with certainty about how it functions.
A: Note that matrix = new int**[numVertex]; as mentioned by you would create (for this example) a 3D array, because you'd have numVertex entries of int**.
The (int**) cast does not accomplish much, if anything at all, because if matrix is of type int**, there is no need for the cast (you get back an int** already from the new).
A: If the column dimension is fixed at compile time, you can use a vector of arrays:
#include <vector>
#include <array>
#include <iostream>
#include <iomanip>

template<typename T, int col>
using row_templ = std::array<T, col>;

template<typename T, int col, template <typename, int> typename U = row_templ>
using mat_templ = std::vector<U<T, col>>;

int main()
{
    constexpr int numVertex = 30;
    constexpr int numEdge = 30;
    constexpr int numCol = numVertex;
    int numRow = numEdge;
    using row_t = row_templ<int, numCol>; // alias to the explicit class template specialization
    using mat_t = mat_templ<int, numCol>;
    auto make_mat = [&](){ return mat_t(numRow); }; // define a maker if lazy
    mat_t my_mat(numRow);
    mat_t my_mat2 = make_mat(); // or just use our maker
    // The default allocator value-initializes elements, i.e. T().
    // At this point, every position is value-initialized to int(), which is zero,
    // via value initialization of array<int, col>() by the default allocator.
    // numVertex x numEdge is one solid contiguous chunk and is now ready to roll.
    // range for
    for (row_t r : my_mat) {
        for (int n : r) {
            std::cout << std::setw(4) << n;
        }
        std::cout << '\n';
    }
    // classic for
    for (int i = 0; i < numRow; ++i) {
        for (int j = 0; j < numCol; ++j) {
            std::cout << std::setw(4) << (my_mat2[i][j] = i*numRow + numCol);
        }
        std::cout << '\n';
    }
}
The Altai montane forest and forest steppe ecoregion (WWF ID: PA0502) covers areas of the subalpine forest belt in the Altai Mountains, crossing the border region where Russia, Kazakhstan, Mongolia and China meet. The region has great biodiversity, since it lies in transition zones between different ecoregions, altitudes and climate zones. It belongs to the Palearctic realm and has a cold semi-arid climate. It covers an area of 35,199,998 km² (13,590,795 square miles).
Location and description
It extends from the Belukha range of the Altai Mountains on the Russia-Kazakhstan border in the northwest to the Gobi-Altai in Mongolia in the southeast, a span of 1,500 km. The ecoregion excludes the alpine peaks of the Altai as well as the lower lakes and valleys. South of the Altai lie the cold, arid regions of Central Asia, and to the north lie the forests and wetlands of Siberia.
Climate
Because of its distance from the ocean, the ecoregion has a cold semi-arid climate (Köppen climate classification BSk). This is a local climate characterized by cool summers and cold, dry winters. Such climate regions tend to occur at higher elevations in the interiors of continents, with large differences between day and night temperatures.
Flora
Coniferous forest zones tend to occur on the cooler, wetter northern slopes of the mountains, with desert-steppe vegetation more prevalent on the southern slopes. Forests in the southeast of the region include stands of larch and cedar. Grasses at middle elevations are dominated by tundra fescue (Festuca lenensis) and prairie Junegrass (Koeleria macrantha). The desert-steppe vegetation in the south often consists of European feather grass (Stipa pennata), wild onion (Allium polyrhizum), Anabasis brevifolia and fringed sagebrush (Artemisia frigida). This is only a representative sample; biodiversity in the area is very high. The World Wildlife Fund notes that the endemism of the zone (12%) is higher than that of the Pyrenees or the Alps.
Fauna
As a meeting zone of northern and southern forest species, the middle elevations of the Altai host a large number of species. The most numerous of the small mammals of the south are the gray marmot (Marmota baibacina), the Tarbagan marmot (M. sibirica) and the lagomorphs (rabbits, hares and pikas). Several globally threatened species are found in the area, such as the snow leopard (Uncia uncia).
Protections
The federally protected areas in the region are:
Katun Nature Reserve ("Katunsky"). An IUCN strict ecological reserve (a Zapovednik) located in the northwestern section of the Altai montane forest and forest steppe ecoregion, north of the Russia-Kazakhstan border. This nature reserve lies in the highlands of the central Altai Mountains; the Katun River, which runs down its slopes, forms the headwaters of the Ob River. (Area: 150,079 km².)
References
External links
Map of the Altai montane forest and forest steppe ecoregion. Globalspecies.org
Ecoregions of Russia
Ecoregions of the Palearctic realm
Ecoregions of Mongolia
<?php
namespace Concrete\Core\Updater;
use Concrete\Core\Cache\Cache;
use Concrete\Core\Cache\CacheClearer;
use Concrete\Core\Database\Connection\Connection;
use Concrete\Core\Database\DatabaseStructureManager;
use Concrete\Core\Foundation\Environment\FunctionInspector;
use Concrete\Core\Support\Facade\Application;
use Concrete\Core\Updater\Migrations\Configuration;
use Concrete\Core\Utility\Service\Validation\Numbers;
use Doctrine\ORM\EntityManagerInterface;
use Exception;
use Localization;
use Marketplace;
class Update
{
/**
* Key of the mutex to be used when performing core upgrades.
*
* @var string
*/
const MUTEX_KEY = 'core_system_upgrade';
/**
* Fetch from the remote marketplace the latest available versions of the core and the packages.
* These operations are done only the first time or after at least APP_VERSION_LATEST_THRESHOLD seconds since the previous check.
*
* @return string|null Returns the latest available core version (eg. '5.7.3.1').
* If we can't retrieve the latest version and if this never succeeded previously, this function returns null
*/
public static function getLatestAvailableVersionNumber()
{
$app = Application::getFacadeApplication();
$config = $app->make('config');
// first, we check session
$queryWS = false;
Cache::disableAll();
$vNum = $config->get('concrete.misc.latest_version', true);
Cache::enableAll();
$versionNum = null;
if (is_object($vNum)) {
$seconds = strtotime($vNum->timestamp);
$version = $vNum->value;
if (is_object($version)) {
$versionNum = $version->version;
} else {
$versionNum = $version;
}
$diff = time() - $seconds;
if ($diff > $config->get('concrete.updates.check_threshold')) {
// we grab a new value from the service
$queryWS = true;
}
} else {
$queryWS = true;
}
if ($queryWS) {
$mi = Marketplace::getInstance();
if ($mi->isConnected()) {
Marketplace::checkPackageUpdates();
}
$update = static::getLatestAvailableUpdate();
$versionNum = null;
if (is_object($update)) {
$versionNum = $update->getVersion();
}
if ($versionNum) {
$config->save('concrete.misc.latest_version', $versionNum);
} else {
// we don't know so we're going to assume we're it
$config->save('concrete.misc.latest_version', APP_VERSION);
}
}
return $versionNum;
}
/**
* Retrieves the info about the latest available update.
* The effective request to the remote server is done just once per request.
*
* @return RemoteApplicationUpdate|null
*/
public static function getApplicationUpdateInformation()
{
$app = Application::getFacadeApplication();
$cache = $app->make('cache');
$r = $cache->getItem('APP_UPDATE_INFO');
if ($r->isMiss()) {
$r->lock();
$result = static::getLatestAvailableUpdate();
$r->set($result)->save();
} else {
$result = $r->get();
}
return $result;
}
/**
* Looks in the designated updates location for all directories, ascertains what
* version they represent, and finds all versions greater than the currently installed version of
* concrete5.
*
* @return ApplicationUpdate[]
*/
public function getLocalAvailableUpdates()
{
$app = Application::getFacadeApplication();
$fh = $app->make('helper/file');
$updates = [];
$contents = @$fh->getDirectoryContents(DIR_CORE_UPDATES);
foreach ($contents as $con) {
if (is_dir(DIR_CORE_UPDATES . '/' . $con)) {
$obj = ApplicationUpdate::get($con);
if (is_object($obj)) {
if (version_compare($obj->getUpdateVersion(), APP_VERSION, '>')) {
$updates[] = $obj;
}
}
}
}
usort(
$updates,
function ($a, $b) {
return version_compare($a->getUpdateVersion(), $b->getUpdateVersion());
}
);
return $updates;
}
/**
* Checks migrations to see if the current code DB version is greater than that registered in the database.
*/
public static function isCurrentVersionNewerThanDatabaseVersion()
{
$app = Application::getFacadeApplication();
$db = $app->make(Connection::class);
$config = $app->make('config');
$database = $db->fetchColumn('select max(version) from SystemDatabaseMigrations');
$code = $config->get('concrete.version_db');
return $database < $code;
}
/**
* Upgrade the current core version to the latest locally available by running the applicable migrations.
*
* @param null|Configuration $configuration
*
* @throws \Concrete\Core\Updater\Migrations\MigrationIncompleteException throws a MigrationIncompleteException exception if there's still some migration pending
*/
public static function updateToCurrentVersion(Configuration $configuration = null)
{
$app = Application::getFacadeApplication();
$functionInspector = $app->make(FunctionInspector::class);
if (!$app->isRunThroughCommandLineInterface()) {
if ($functionInspector->functionAvailable('ignore_user_abort')) {
// Don't abort the script execution when the web client disconnects,
// otherwise we risk halting the execution in the middle of a migration.
@ignore_user_abort(true);
}
}
$canSetTimeLimit = $functionInspector->functionAvailable('set_time_limit');
$defaultTimeLimit = null;
if ($functionInspector->functionAvailable('ini_get')) {
$defaultTimeLimit = @ini_get('max_execution_time');
if ($app->make(Numbers::class)->integer($defaultTimeLimit)) {
$defaultTimeLimit = (int) $defaultTimeLimit;
} else {
$defaultTimeLimit = null;
}
}
$config = $app->make('config');
$clearer = $app->make(CacheClearer::class);
$clearer->setClearGlobalAreas(false);
$clearer->flush();
$em = $app->make(EntityManagerInterface::class);
$dbm = new DatabaseStructureManager($em);
$dbm->destroyProxyClasses('ConcreteCore');
$dbm->generateProxyClasses();
if (!$configuration) {
$configuration = new Configuration();
}
$configuration->registerPreviousMigratedVersions();
$isRerunning = $configuration->getForcedInitialMigration() !== null;
$migrations = $configuration->getMigrationsToExecute('up', $configuration->getLatestVersion());
$totalMigrations = count($migrations);
$performedMigrations = 0;
foreach ($migrations as $migration) {
if ($defaultTimeLimit !== 0) {
// The current execution time is not unlimited
$timeLimitSet = $canSetTimeLimit ? @set_time_limit(max((int) $defaultTimeLimit, 300)) : false;
if (!$timeLimitSet && $performedMigrations > 0) {
// We are not able to reset the execution time limit
if ($performedMigrations > 10 || $migration instanceof Migrations\LongRunningMigrationInterface) {
// If we have already performed 10 migrations, or if the next migration is expected to take a long time
throw new Migrations\MigrationIncompleteException($performedMigrations, $totalMigrations - $performedMigrations);
}
}
}
if ($isRerunning && $migration->isMigrated()) {
$migration->markNotMigrated();
$migrated = false;
try {
$migration->execute('up');
$migrated = true;
} finally {
if (!$migrated) {
try {
$migration->markMigrated();
} catch (Exception $x) {
}
}
}
} else {
$migration->execute('up');
}
++$performedMigrations;
if ($defaultTimeLimit !== 0 && !$timeLimitSet && $migration instanceof Migrations\LongRunningMigrationInterface) {
// The current execution time is not unlimited, we are unable to reset the time limit, and the migration just performed may have taken a long time
if ($performedMigrations < $totalMigrations) {
throw new Migrations\MigrationIncompleteException($performedMigrations, $totalMigrations - $performedMigrations);
}
}
}
try {
$app->make('helper/file')->makeExecutable(DIR_BASE_CORE . '/bin/concrete5', 'all');
} catch (Exception $x) {
}
$config->save('concrete.version_installed', $config->get('concrete.version'));
$config->save('concrete.version_db_installed', $config->get('concrete.version_db'));
$textIndexes = $app->make('config')->get('database.text_indexes');
$app->make(Connection::class)->createTextIndexes($textIndexes);
}
/**
* Retrieves the info about the latest available update.
*
* @return RemoteApplicationUpdate|null
*/
protected static function getLatestAvailableUpdate()
{
$app = Application::getFacadeApplication();
$config = $app->make('config');
$client = $app->make('http/client')->setUri($config->get('concrete.updates.services.get_available_updates'));
$client->getRequest()
->setMethod('POST')
->getPost()
->set('LOCALE', Localization::activeLocale())
->set('BASE_URL_FU', Application::getApplicationURL())
->set('APP_VERSION', APP_VERSION);
try {
$response = $client->send();
$update = RemoteApplicationUpdateFactory::getFromJSON($response->getBody());
} catch (Exception $x) {
$update = null;
}
return $update;
}
}
package MVC_Observer_Observable;

public final class StartUp {
    private StartUp() {}

    public static void main(String[] args) {
        ApplicationFrame frm = new ApplicationFrame();
        frm.setLocation(100, 100);
        frm.setSize(300, 300); // note: pack() below recomputes the size from the
        frm.pack();            // components' preferred sizes, overriding setSize()
        frm.setVisible(true);
    }
}
\section{Introduction}
In this paper we will study the \emph{conjugacy search problem} in the $N$-strand braid group~$B_N$ (with $N\geqslant 3$): we are looking at algorithms which take as their input two words (whose letters are elements of some finite generating set of~$B_N$ and their inverses), and whose output is the information whether these words represent conjugate elements in~$B_N$. Moreover, if they do, then the algorithm should find an explicit conjugating element.
It is currently an open problem to find a polynomial-time algorithm, i.e.\ to find such an algorithm for any number of strands~$N$ such that the running time can be bounded by some polynomial $P_N(L)$ as a function of the length~$L$ of the longer one of the two input words.
Even more ambitious is the quest for a \emph{uniform} polynomial-time solution to the conjugacy problem, which means the following:
we start with some generating set of~$B_{\infty}$ whose intersection with~$B_N$, for any integer~$N$, is a finite generating set of~$B_N$. Now we are looking for
a polynomial~$P$ and for an algorithm which takes as its input two words in this generating set belonging to~$B_N$, for any~$N$, and decides whether these words represent conjugate elements of~$B_N$. The computation time should be bounded by $P(I)$, where $I$ denotes the bit-size of the input. (Note that $I$ is more than just the sum of the lengths of the input words, since the description of the generators adds to their bit-size.)
There are two families of approaches to these problems.
\begin{itemize} \item
Geometric approaches, using the curve complex, subsurface projections and the notion of hierarchically hyperbolic spaces, train track splitting sequences or flip sequences of triangulations, the space of projective measured foliations $\mathcal{PMF}$ etc. These approaches typically talk not only about braid groups, but more generally about mapping class groups of surfaces.
This family of techniques has led to several spectacular successes in elucidating the structure of mapping class groups. Particularly interesting for the purposes of this paper, Mosher has constructed an automatic (though not bi-automatic) structure on all mapping class groups~\cite{MosherAutomatic}, and he has given a solution to the conjugacy problem in mapping class groups~\cite{MosherConjugacy} (in the pseudo-Anosov case). Masur and Minsky~\cite{MasurMinsky2} together with Tao~\cite{Tao} have proved a linear bound on the conjugator length in mapping class groups, which implies an exponential time solution to the conjugacy problem (see also \cite{ATW}). More recently, Bell and Webb~\cite{BellWebbPolyn} have constructed (and implemented~\cite{curver}) an algorithm for solving in polynomial time the closely related problem of determining the Nielsen--Thurston type and (in the reducible case) the canonical reduction system of any given mapping class.
Also, Margalit, Strenner and Yurtta\c{s} have announced a quadratic time algorithm, using very different techniques, for determining the Nielsen--Thurston type, the canonical reduction system, and (in the pseudo-Anosov case) the stable and unstable foliations and their stretch factors.
Most spectacularly, Bell and Webb have announced a polynomial time solution to the conjugacy problem in mapping class groups (where the constants of the polynomial depend exponentially on the complexity of the surface) -- again, details have not yet appeared.
\item
Garside-theoretic approaches, which typically talk not only about the braid groups but more generally about Garside groups~\cite{DehornoyGroupesDeGarside}, and in particular about Artin groups of spherical type~\cite{CharneyBiautomatic}. This approach has yielded a bi-automatic structure (which we still don't know to exist on other mapping class groups), and it gives rise to simple algorithms for the conjugacy problem which work very fast in practice. Also, in any braid group~$B_N$, there is a cubic-time algorithm for determining the Nielsen--Thurston type of any given element~\cite{CalvezPolynTime}.
\end{itemize}
Making these two approaches work together, creating a synergy between them, appears to be a difficult task. Here is a rather embarrassing illustration of this difficulty: let us consider the set of all Garside normal form words in~$B_N$, and let us also look at the curve complex of the $N$-times punctured disk, equipped with a base point. The traces of the base point under the action of the Garside words form a family of paths in the curve complex. It is currently an open problem whether these paths are a uniform family of reparametrised quasi-geodesics!
In the present paper we propose another interaction, namely between the Garside-theoretic solution to the conjugacy problem and subsurface projections.
Specifically, we study examples of braids where the Garside-theoretic conjugacy algorithm works relatively slowly, and we try to explain this lack of efficiency in terms of multiple subsurfaces having very large projections. Our analysis suggests that even these ``bad'' cases, which we conjecture to be the worst possible ones, are still quite satisfying, in that they would guarantee a polynomial bound on the computational complexity of the conjugacy problem.
In order to explain the results of this paper in more detail, we recall very briefly some ideas of Garside theory -- for more details, see Section~\ref{S:GarsideAndOuroboroi}.
All the classical Garside-theoretic solutions to the conjugacy problem in braid groups are based on the same principle: to every braid~$x$ one associates a certain finite subset of the conjugacy class of~$x$, and this subset is characteristic in the sense that if $x$ and $x'$ are conjugate, then the subsets in question coincide. Now in order to decide whether two given braids are conjugate, it suffices to determine algorithmically the full characteristic subset of one, and at least one element of the characteristic subset of the other, and check whether the latter belongs to the former.
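In symbols, writing $C(x)$ for such a characteristic subset (a notation used only in this remark), the strategy reads
\[
x\sim x' \quad\Longleftrightarrow\quad C(x)=C(x') \quad\Longleftrightarrow\quad \tilde x'\in C(x)\ \text{ for any single element } \tilde x'\in C(x'),
\]
since each characteristic subset is contained in a single conjugacy class.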
Over time, various characteristic subsets have been proposed (the summit set, the super summit set, the ultra summit set...) \cite[Section 3.2]{Gebhardt-GM}, but we will use the \emph{sliding circuit set}~$SC(x)$ defined in~\cite{Gebhardt-GM}.
We will not give the definition for general braids~$x$, but only in the case where~$x$ is \emph{rigid}. Roughly speaking, this means that the Garside normal form of~$x^2$ is equal to two copies of the Garside normal form word of~$x$, concatenated -- in other words, the last letter of the normal form, followed by the first letter, is again in normal form. It is a theorem of Gebhardt and Gonz\'alez-Meneses~\cite{Gebhardt-GM} that if $x$ is conjugate to a rigid braid, then $SC(x)$ consists exactly of all rigid conjugates of~$x$.
The main difficulty proving a polynomial bound on the computational complexity of the conjugacy problem (using the classical Garside-theoretic strategy, and in the pseudo-Anosov case) is establishing a polynomial bound on the size of the sliding circuit set of rigid braids, as a function of braid length, for a fixed number of strands. (In fact, for reducible, non-rigid braids it is known that the size of $SC$ \emph{can} grow exponentially with the braid length~\cite{GonzMenRed}.)
In this paper we present some examples of braids with remarkably large sliding circuit sets:
\begin{theorem}\label{T:Main}
There exists a family of positive braids $\gamma(N,L)$, with $N$ strands and of Garside-length~$L$ (with $N$~even, and $N\geqslant 4$, and $L$~odd, $L\geqslant N-1$) such that $$|SC(\gamma(N,L))| = 2\cdot (L-1)\cdot L^{N-3}$$
\end{theorem}
For $N=5$ we have examples of braids whose sliding circuit sets are even slightly larger, namely of size $2L^3=2L^{N-2}$. Also, we have some sporadic examples of very short braids (with $L\leqslant 3$) whose sliding circuit set has strictly more than $2L^{N-2}$ elements. However, we propose
\begin{conjecture}[Polynomial bounds on the sliding circuit set]\label{C:BoundOnSC}
There exists a constant~$C$ such that for any rigid braid~$x$ with $N$ strands and Garside-length~$L$,
$$
|SC(x)| \leqslant C\cdot L^{N-2}
$$
We even conjecture that the value $C=2$ is valid for sufficiently large~$L$. (Note that we are not supposing $x$ to be pseudo-Anosov.)
\end{conjecture}
More interesting than the precise size of the sliding circuit set in Theorem~\ref{T:Main} are the \emph{geometric} properties of the example braids~$\gamma(N,L)$.
In all the examples which we found (in part through computer searches), of long braids with very unusually large sliding circuit sets,
the size of~$SC$ is readily explained by the presence of multiple \emph{ouroboroi}. We will give a rigorous definition of an ouroboros later, but roughly speaking, an ouroboros is a subsurface of the $N$-times punctured disk which is almost invariant under the action of the braid. Thus the ``bad'' braids in question are very close to being reducible -- for instance, they act on the curve complex of~$D_N$ with very small translation distance. Moreover, in our examples the largest sliding circuit sets occur when different ouroboroi move relative to each other -- we say they ``slither''. This situation is strongly reminiscent of disjoint subsurfaces with large projections.
We conjecture that the presence of ouroboroi is essentially the \emph{only} reason sliding circuit sets can become big. The following very vague conjecture will be made more precise later (Conjecture~\ref{C:BigSCisGeometric}).
\begin{conjecture}[Commutativity conjecture]\label{C:CommutativityConjecture}
At least for sufficiently long braids, large sliding circuit sets come from some kind of internal commutativity of the braid, and this internal commutativity is a geometric fact: if a braid has a big sliding circuit set, then it has several slithering ouroboroi.
\end{conjecture}
If some reasonable interpretation of the Commutativity Conjecture was proven, then we would probably obtain a polynomial bound on the size of the sliding circuit set, and hence on the complexity of the conjugacy problem in the braid group~$B_N$, for any fixed~$N$, at least in the pseudo-Anosov case.
The rest of the paper is organised as follows. In Section~\ref{S:GarsideAndOuroboroi} we briefly review some of the background in Garside theory; we also define ouroboroi, our main geometric tool. Section~\ref{S:Examples} contains our main results, namely the examples of braids with unusually large sliding circuit sets. The main tool for constructing such examples are ouroboroi. Finally, in Section~\ref{S:Conjectures}, which is more speculative, we present some ideas on the structure of sliding circuit sets. This will put into context our Commutativity Conjecture.
Also, the conjectured structure could be helpful in attempts to find a \emph{uniform} polynomial time solution to the conjugacy problem.
\section{Garside theory and ouroboroi}\label{S:GarsideAndOuroboroi}
In this section we recall some important known results about the Garside-theoretic approach to the conjugacy problem in braid groups. Since in this paper we are interested in the case of pseudo-Anosov braids, this will also be the focus of this section. We will also give the definition of an \emph{ouroboros}, the new geometric tool which we will be using throughout the paper.
We recall that every element of the braid group $B_N$ has a unique normal form $x=\Delta^k.x_1.x_2.\ldots. x_\ell$, where
\begin{enumerate}
\item $\Delta$ is the half-twist braid (whose square generates the center of~$B_N$).
\item Every $x_i$ is a \emph{simple} braid (also known as a positive permutation braid or Garside braid), i.e.\ a positive braid in which any two strands cross at most once.
\item Every pair $x_i.x_{i+1}$ is left-weighted, meaning that $x_i$ contains as many crossings as possible, and $x_{i+1}$ as few crossings as possible, among all writings of the braid $x_i x_{i+1}$ as a product of two simple braids.
\end{enumerate}
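The left-weightedness condition can be tested combinatorially: for permutation braids, the starting set $S(y)$ (the indices $i$ with $\sigma_i$ a prefix of~$y$) is the descent set of the corresponding permutation, the finishing set $F(x)$ is the descent set of the inverse permutation, and $x.y$ is left-weighted if and only if $S(y)\subseteq F(x)$ (this characterisation is standard, cf.~\cite{ElrifaiMorton}). The following Python sketch of this test is only an illustration, not the program~\cite{JuanProgram}.

```python
# Simple (positive permutation) braids in B_N, in one-line notation:
# perm[i] is the endpoint of the strand starting at position i+1.
def descents(perm):
    """Descent set {i : perm(i) > perm(i+1)} (positions are 1-indexed)."""
    return {i + 1 for i in range(len(perm) - 1) if perm[i] > perm[i + 1]}

def inverse(perm):
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p - 1] = i + 1
    return inv

def starting_set(perm):
    """S(x) = {i : sigma_i is a prefix of x} = descents of the permutation."""
    return descents(perm)

def finishing_set(perm):
    """F(x) = {i : sigma_i is a suffix of x} = descents of the inverse."""
    return descents(inverse(perm))

def left_weighted(x, y):
    """The writing x.y of two simple braids is left-weighted iff S(y) <= F(x)."""
    return starting_set(y) <= finishing_set(x)

# In B_3: sigma_1 = [2,1,3], sigma_2 = [1,3,2], sigma_1*sigma_2 = [3,1,2].
print(left_weighted([2, 1, 3], [2, 1, 3]))  # True:  sigma_1.sigma_1 is normal form
print(left_weighted([2, 1, 3], [1, 3, 2]))  # False: sigma_1 sigma_2 is one simple braid
print(left_weighted([3, 1, 2], [1, 3, 2]))  # True:  (sigma_1 sigma_2).sigma_2
```

With this test, rigidity of a positive braid (in the case $k$ even) amounts to checking left-weightedness of all $\ell$ cyclically consecutive pairs of factors.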
After multiplying $x$ by an element of the center $\langle \Delta^2\rangle$, we can assume that $k=0$ or $k=1$, and in particular that $x$ is positive. For the rest of the paper we will only talk about positive braids.
The \emph{supremum} of such a braid is $\sup(x)=k+\ell$. We will often denote it~$L$, because in our context it is just the length of the braid word; and by the ``length'' of a braid we will always mean this Garside-length.
Next we recall the definition of a \emph{rigid} braid.
\begin{itemize}
\item If $k$ is even, we say $x$ is rigid if the writing $x_\ell.x_1$ is left-weighted as well (i.e., if the \emph{cyclic} word $x_1.x_2.\ldots.x_\ell$ is left-weighted everywhere).
\item If $k$ is odd, we say $x$ is rigid if the writing $x_\ell.\Delta^{-1}x_1\Delta$ is left-weighted. (Note that $\Delta^{-1}x_1\Delta$ is again a simple braid.)
\end{itemize}
Among the set of all positive braids on $N$ strands, there is a large class of braids~$y$, including all pseudo-Anosov braids, with the following property: they have a power~$y^k$ (with $k\leqslant \left(\frac{N(N-1)}{2}\right)^3$) which is conjugate to a rigid braid \cite{BirGebGM1}.
Moreover, there is an algorithm which, for any given~$y$, finds the appropriate power~$k$ and a rigid conjugate of $y^k$, if it exists. This algorithm works in polynomial time in the length of~$y$ (for fixed~$N$)~\cite{CalvezPolynTime}.
For a positive braid~$x$ which has a rigid conjugate (e.g.\ for $x=y^k$), the \emph{sliding circuit set} $SC(x)$ is the set of all rigid conjugates of~$x$. (This is actually a theorem of Gebhardt and Gonz\'alez-Meneses~\cite{Gebhardt-GM}, but for the purposes of the present paper, we can use this as the definition of the sliding circuit set.) The sliding circuit set is always finite, and in fact it is a subset of the well-known \emph{super summit set} of \cite{ElrifaiMorton}.
If we want to solve the conjugacy problem, then we need a computable, complete invariant of conjugacy classes. If $x$ is a positive braid with a rigid conjugate, then $SC(x)$ is such an invariant.
Unfortunately, calculating the full sliding circuit set (not just one of its elements) may a priori be difficult.
For solving the conjugacy problem in braid groups in polynomial time (at least in the pseudo-Anosov case), it would be sufficient to place a polynomial bound on the number of elements in the sliding circuit set, as a function of the length of the input braid:
\begin{question}\label{Q:SCpolynBound?}
Is it true that for every integer $N$ (with $N\geqslant 5$) there exists a polynomial~$P_N$ such that every positive rigid braid $x\in B_N$ with $\sup(x)=L$ satisfies
$$ |SC(x)| \leqslant P_N(L) \ ?$$
\end{question}
In the next section, we will present some examples of braids where the sliding circuit set is relatively large -- still of polynomially bounded size, but remarkably large nevertheless. We will show that in each case, the braid has a very particular structural feature, which we call an \emph{ouroboros}. In order to explain this choice of words we recall that an ``ouroboros'' usually means a dragon or a snake biting its own tail -- these creatures appeared in mythologies of several cultures.
Before defining an ouroboros, we recall a result of Bernardete, Nitecki and Guti\'errez~\cite{BNG}. We denote $D^2$ the unit disk in~$\mathbb C$, and $D_N$ the same disk, but with $N$ punctures lined up on the real line. We say a simple closed curve in~$D_N$ (or its isotopy class) is \emph{round} if it is isotopic to a circle in~$D_N$ that contains at least two, but not all the punctures. If the action of a braid~$x$ sends a round curve~$c$ to a round curve~$c'$, and if the Garside normal form of~$x$ is $\Delta^k x_1.\ldots.x_\ell$, then, according to~\cite{BNG}, every prefix $\Delta^k x_1.\ldots.x_{\tilde \ell}$ (with $\tilde \ell < \ell$) also sends $c$ to some round curve.
In the following definition we take $x$ to be a braid with normal form $\Delta^k x_1\ldots x_\ell$, where $k\in\{0,1\}$. We denote $L$ the total number of factors: $L=k+\ell=\sup(x)$.
We take $x$ to be realised as a braid in the solid cylinder $D^2\times [0,L]$, with the factor $x_i$ living in $D^2\times [i,i+1]$. The closure $\hat x$ is realised in the solid torus $(D^2\times [0,L])/\sim$, where $\sim$ is the equivalence relation $(p,0)\sim (p,L)$ for all $p\in D^2$.
We are going to define two types of ouroboroi, \emph{round} and \emph{eccentric} ones. Before we can do so, we have to define their \emph{base curves}.
\begin{definition}
The \emph{base curve of a round ouroboros} with $m$ strands and \emph{head-tail} in the $i$th factor is a round curve~$c$ containing $m$~punctures in its interior, and which is sent to a round curve by the action of $x_{i+1} x_{i+2} \ldots x_\ell \Delta^k x_1 x_2 \ldots x_{i-1}$.
Moreover, we require that the intersection of the disks bounded by $c$ and by $c.x_{i+1} \ldots x_\ell \Delta^k x_1 \ldots x_i$ consists of a single disk which contains at most~$m-2$ punctures.
The \emph{base curve of an eccentric ouroboros} with $m$ strands and \emph{head-tail} in factors $x_i$ and $x_{i+1}$ is a round curve~$c$ containing $m$ punctures in its interior, such that the curve $c.x_{i+2}\ldots x_\ell \Delta^k x_1\ldots x_{i-1}$ is again round. Moreover, we require that the disks bounded by the curves $c.x_{i+1}^{-1}$ and $c.x_{i+2}\ldots x_\ell \Delta^k x_1\ldots x_i$ contain the same punctures.
\end{definition}
By~\cite{BNG}, the image of~$c$ after each intermediate factor of the normal form is again a round curve. Thus the curve~$c$ induces a round tube going almost completely around the braid. Only in the one or two head-tail factors does the shape of the tube get slightly more complicated.
This tube is what we will call an ouroboros.
Notice that the tube does not close up into a torus -- if it did, the braid would be reducible. So having an ouroboros is very close to being reducible. Here is the formal definition:
\begin{definition}
A \emph{round ouroboros} of~$x$, with $m$ strands and \emph{head-tail} in the $i$th factor is a cylinder which is properly embedded in $(D^2\times ([i+1,L]\cup [0,i]))/\sim$, disjointly from the braid, and whose intersection with the disk $D^2\times \{i+1\}$ is a base curve~$c$ of a round ouroboros as defined above.
(Its intersection with $D^2\times \{i\}$ is thus the round curve $c.x_{i+1} x_{i+2} \ldots x_\ell \Delta^k x_1 x_2 \ldots x_{i-1}$.)
An \emph{eccentric ouroboros} of~$x$ is defined analogously, as a cylinder properly embedded in $(D^2\times ([i+2,L]\cup [0,i]))/\sim$.
\end{definition}
\begin{figure}[htb]
\centering
\def12cm{12cm}
\input{OurobPic.pdf_tex}
\caption{The head-tail factors of a round (left) and an eccentric (right) ouroboros. The base curves are indicated in red and labelled~$c$. Also, as a visual aid, on the right hand side the factor~$x_{i+1}$ and the corresponding part of the ouroboros are drawn in blue.}
\end{figure}
\begin{remark}
Let us say a curve is \emph{almost round} if it is not round, but it is the image of a round curve under the action of a simple braid. If a braid~$x$ has a round ouroboros, then the image of the round curve~$c$ under the action of $x_{i+1} \ldots x_\ell \Delta^k x_1 \ldots x_i$ is an almost round curve. If $x$ possesses an eccentric ouroboros, then the image of the almost round curve $c.x_{i+1}^{-1}$ under the action of $x_{i+1}\ldots x_\ell \Delta^k x_1\ldots x_i$ is the almost round curve $c.x_{i+2}\ldots x_\ell \Delta^k x_1\ldots x_{i-1}$.
In both cases, a conjugate of $x$~sends some round or almost round curve to an almost round curve. Now, any two almost round curves in~$D_N$ are at distance at most~3 in the curve complex (this is an exercise -- see \cite[Section 4.1]{FMPrimer} for an introduction to the curve complex). Thus, for any braid~$x$ with an ouroboros, the action of~$x$ on the curve complex has translation distance at most~$3$.
\end{remark}
\section{Large sliding circuit sets and ouroboroi: examples}\label{S:Examples}
In this section we present some families of examples of braids with unusually large sliding circuit sets. We show how the presence of multiple ouroboroi allows the braid to have such a large sliding circuit set. On the other hand, we also see that this way of creating large sliding circuit sets can only yield sliding circuit sets whose size grows polynomially with the length of the braid, not exponentially.
The braids presented in this section are the worst we found during large computer searches for braids with big sliding circuit sets. This may be interpreted as evidence that Conjectures~\ref{C:BoundOnSC} and \ref{C:CommutativityConjecture} are true, and thus that the conjugacy problem for pseudo-Anosov braids can be solved in polynomial time.
The families of braids studied in Subsections~\ref{SS:SimpleEx} and~\ref{SS:NestedOuroboroi} depend on two parameters, $N$ (the number of strands) and $L$ (the Garside length), and the family in Subsection~\ref{SS:EccentricExample} on the single parameter~$L$. Our main tool for finding and studying these braids in detail was the computer program~\cite{JuanProgram}. In each of these families of examples, the size of the sliding circuit set can, in principle, be determined by hand, with a formal proof. However, the number of case checks involved is prohibitive.
We take an experimental approach: we calculate the size of the sliding circuit sets for a substantial number of pairs $(N,L)$, trusting the program~\cite{JuanProgram} to give the correct values. Also, from the calculated values we extrapolate, and give general formulae (for all possible values of $N$ and~$L$) for the size of the sliding circuit set. While this is, in principle, an unreliable methodology, we believe that most readers will be convinced by our extrapolations in these three particular cases.
\subsection{A simple example introducing ouroboroi}\label{SS:SimpleEx}
For any odd number~$N$ ($N\geqslant 5$) and any integer $L$ ($L\geqslant 2$) consider the braid with $N$~strands
$$\beta(N,L) = (\sigma_1 \sigma_3 \sigma_5 \ldots \sigma_{N-2}.)^{L+1} \sigma_1 \sigma_2 \sigma_3 \ldots \sigma_{N-1}$$
This braid of length~$L+2$ is not rigid, but it is conjugate to a rigid braid of length~$L$. Hence $SC(\beta(N,L))$ consists exactly of the rigid conjugates of~$\beta(N,L)$, which are all of length~$L$. Since these rigid braids are not powers of any other elements, we have that
$$S(N,L):=\frac{|SC(\beta(N,L))|}{L}$$
is equal to the number of orbits under cycling (cyclic permutation of the factors) of $SC(\beta(N,L))$.
Using the program~\cite{JuanProgram} we calculated the value $S(N,L)$ for many pairs $(N,L)$:
\begin{small}
\begin{center}
\begin{tabular}{c||c|c|c|c|c|c|c|c|c|c|c}
$N \backslash L$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ \hline\hline
5 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 \\
7 & 2 & 6 & 12 & 20 & 30 & 42 & 56 & 72 & 90 & 110 & 132 \\
9 & 4 & 16 & 40 & 80 & 140 & 224 & 336 & 480 & 660 & 880 & 1144 \\
11 & 8 & 40 & 120 & 280 & 560 & 1008 & 1680 & 2640 & 3960 & 5720 & 8008\\
13 & 16 & 96 & 336 & 896 & 2016 & 4032 & 7392 & \small{12672} & & & \\
15 & 32 & 224 & 896 & 2688 & 6720 & 14784 & & & & & \\
\end{tabular}
\end{center}
\end{small}
The numbers occurring in this table are exactly those appearing in the table in A080928 of the Online Encyclopedia of Integer Sequences~\cite{OEIS}. This leads us to conjecture that with $n=\frac{N-3}{2}$, with $\lambda=L-2$, and with $T$ the function defined in A080928 of the OEIS, we have
$$
S(N,L) = T(n+\lambda,\lambda) = {n+\lambda \choose n}\cdot 2^{n-1}={\frac{N-3}{2}+L-2\choose \frac{N-3}{2}}\cdot 2^{\frac{N-5}{2}}
$$
(Reference \cite{OEIS} also gives an amusing recurrence relation: for $N\geqslant 7$ and $L\geqslant 3$,
$
S(N,L)=S(N,L-1)+2\cdot S(N-2,L)
$
where $S(5,L)=L-1$ and $S(N,2)=2^{\frac{N-5}{2}}$.) However, the important conclusion for us is that
for fixed~$N$, the function $S(N,L)$ in the variable $L$ is a polynomial of degree $\frac{N-3}{2}$. Since $|SC(\beta(N,L))|=L\cdot S(N,L)$, this implies:
\begin{observation}\label{O:SchlWie}
Let $N$ be an odd integer with $N\geqslant 5$. Then the size of the sliding circuit set $|SC(\beta(N,L))|$, seen as a function of the variable $L$ (with $L\in\mathbb N$, $L\geqslant 2$), is a polynomial of degree~$\frac{N-1}{2}$.
\end{observation}
For instance, for $N=5$ we obtain that $|SC(\beta(5,L))|=L(L-1)$, similarly $|SC(\beta(7,L))|=L^2(L-1)$, and $|SC(\beta(9,L))|=\frac23 L^2(L+1)(L-1)$.
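The conjectured closed form, the recurrence, and the table can be checked against one another numerically. The following Python sketch is an illustration only (the table itself was computed with~\cite{JuanProgram}):

```python
from math import comb

def S(N, L):
    """Conjectured value S(N,L) = C(n + lam, n) * 2^(n-1),
    where n = (N-3)/2 and lam = L-2 (N odd, N >= 5, L >= 2)."""
    n, lam = (N - 3) // 2, L - 2
    return comb(n + lam, n) * 2 ** (n - 1)

# Spot-checks against the table of values computed with [JuanProgram]:
assert S(5, 10) == 9 and S(7, 4) == 12 and S(9, 5) == 80 and S(11, 7) == 1008

# The recurrence S(N,L) = S(N,L-1) + 2*S(N-2,L) from A080928:
for N in range(7, 17, 2):
    for L in range(3, 13):
        assert S(N, L) == S(N, L - 1) + 2 * S(N - 2, L)

# |SC(beta(N,L))| = L * S(N,L); e.g. for N = 9 this is (2/3) L^2 (L+1) (L-1):
for L in range(2, 13):
    assert 3 * L * S(9, L) == 2 * L ** 2 * (L + 1) * (L - 1)
print("all checks pass")
```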
As mentioned in the introduction to this section, we are not going to give a formal proof of the equality $S(N,L)=T(n+\lambda,\lambda)$ and of Observation~\ref{O:SchlWie}.
However, in order to understand where the polynomial growth comes from, we look at the example $N=7$ and actually prove:
\begin{proposition}\label{P:SWlowerbound}
If $L\geqslant 3$, the sliding circuit set $SC(\beta(7,L))$ has at least $\frac{L(L-1)(L-2)}{2}$ elements.
\end{proposition}
\begin{proof}
If $L\geqslant 3$, the sliding circuit set $SC(\beta(7,L))$ contains the elements of length~$L$
$$
\sigma_2 \sigma_1 \sigma_4 \sigma_6. (\sigma_1 \sigma_4 \sigma_6 .)^a \sigma_1 \sigma_4 \sigma_3 \sigma_5 \sigma_6 \sigma_5. ( \sigma_1 \sigma_3 \sigma_5.)^b \sigma_5 \sigma_4 \sigma_3 \sigma_2 \sigma_1 \sigma_6 (. \sigma_2 \sigma_4 \sigma_6)^c
$$
and their cyclic conjugates. Here $(a,b,c)$ are all triples of integers between $0$ and $L-3$ with $a+b+c=L-3$.
(The left hand side of Figure~\ref{F:SchlWieEx} shows the example with $a=2, b=3, c=3$.) There are ${L-1 \choose 2}=\frac{(L-1)(L-2)}{2}$ such triples. If we also count the cyclic conjugates of these braids, which are all different, we find a total of $L\cdot \frac{(L-1)(L-2)}{2}$ elements.
\end{proof}
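The counting step of the proof is easy to reproduce by machine. The following Python sketch (an illustration only) generates the $L$-factor words displayed above, one for each triple $(a,b,c)$, and confirms the count $\binom{L-1}{2}$; the words are pairwise distinct as strings, and since distinct rigid normal forms represent distinct braids, this gives the stated count of conjugates up to cycling.

```python
from math import comb

def words(L):
    """Normal-form words from the proposition, one per triple (a,b,c) with
    a+b+c = L-3; 's3' stands for sigma_3, and '.' separates Garside factors."""
    out = set()
    for a in range(L - 2):
        for b in range(L - 2 - a):
            c = L - 3 - a - b
            out.add("s2 s1 s4 s6 . " + "s1 s4 s6 . " * a
                    + "s1 s4 s3 s5 s6 s5 . " + "s1 s3 s5 . " * b
                    + "s5 s4 s3 s2 s1 s6" + " . s2 s4 s6" * c)
    return out

for L in range(3, 15):
    assert len(words(L)) == comb(L - 1, 2)
print(len(words(11)))  # 45 distinct words for L = 11
```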
\begin{figure}[htb]
\begin{center}
\includegraphics[width=100mm]{SchlWie2Ex.pdf}
\end{center}
\caption{Slithering ouroboroi in $SC(\beta(7,11))$. The arrow represents conjugation by $(\sigma_4\sigma_6)^3$}\label{F:SchlWieEx}
\end{figure}
Let us study Figure~\ref{F:SchlWieEx}, i.e.\ the example of $\beta(7,11)$, in more detail. This example is instructive, because the three ouroboroi, shown in blue, red, and green in the figure, are easy to see. Let us denote $\beta$ the braid on the left hand side of the figure, where $a=2, b=3$, and $c=3$. Conjugating $\beta$ by positive or negative powers of $\sigma_2$, $\sigma_4$ and $\sigma_6$ has the effect of varying the coefficients $a$, $b$, and $c$, which, visually, corresponds to ``slithering the ouroboroi''.
For instance, conjugating $\beta$ by $(\sigma_4 \sigma_6)^k$ for $k=1$ or~$2$ slithers the red and green ouroboroi upwards simultaneously. We can even conjugate by $(\sigma_4 \sigma_6)^3$, which creates the braid on the right hand side of the figure, where the blue and red head-tails occur in the same factor. The red ouroboros cannot be slithered upwards any further relative to the blue one, because it is blocked above. The green one can slither up three more factors, but a detailed calculation shows that the three head-tails can never occur all in the same factor. Similarly, starting with~$\beta$ and conjugating by $\sigma_4^{-1}$ three times slithers the red ouroboros down relative to the blue and green ones -- at which point the red ouroboros gets stuck against the green one. Intuitively, the two degrees of freedom for the relative positions of the ouroboroi yield the quadratic growth of $S(7,L)$.
\begin{remark}\begin{enumerate}
\item The three ouroboroi have exactly the same length. If this requirement is violated, the growth of the sliding circuit set (as a function of the length) goes down. For instance, we can modify the definition of $\beta(7,L)$, and declare $\widetilde\beta(7,L)=(\sigma_1 \sigma_3 \sigma_5.)^L \ \sigma_1 \sigma_3. \ \sigma_1\sigma_2\sigma_3\sigma_4\sigma_5\sigma_6$ -- i.e., remove one crossing from the green ouroboros. Then $|SC(\widetilde\beta(7,L))|$ grows only quadratically with~$L$.
\item For $L=0$, the braids $\beta(N,L)=(\sigma_1 \sigma_3 \sigma_5 \ldots \sigma_{N-2}.)^{L+1} \sigma_1 \sigma_2 \sigma_3 \ldots \sigma_{N-1}$ are conjugate to rigid but \emph{reducible} braids, with remarkably large sliding circuit sets: for $N=5, 7, 9, \ldots, 17$ we find sliding circuit sets of size 4, 2, 30, 294, 2520, 20460, 162162.
For $L=1$ the braids $\beta(N,L)$ are pseudo-Anosov, but not conjugate to rigid braids, and their sliding circuit sets have $(N+3)\cdot 2^{\frac{N-7}2}$ elements.
\end{enumerate}
\end{remark}
\subsection{An example with nested ouroboroi}\label{SS:NestedOuroboroi}
In this section we will exhibit an example of a family of rigid pseudo-Anosov braids whose sliding circuit sets grow remarkably fast as a function of both the number of strands and of the length of the braid. The basic trick will be to pack the ouroboroi extremely tightly by nesting them inside each other.
Our family of braids $\gamma(N,L)$ can have any even number $N\geqslant 4$ of strands and any odd number $L=N-1+2 K$ (with $K\geqslant 0$) of Garside factors. (More precisely, the first Garside factor is $\Delta$, so our braids have $\inf(\gamma(N,L))=1$ and $\sup(\gamma(N,L))=N-1+2 K$.) The main result of this section is Theorem~\ref{T:Main}, which we now recall:
\begin{unnumberedtheorem} The braids $\gamma(N,L)$ are rigid pseudo-Anosov braids, and their sliding circuit sets have size
$$
|SC(\gamma(N,L))|=2(L-1)\cdot L^{N-3}
$$
More precisely, there are $L^{N-3}$ orbits under the cycling operation, and each of these orbits has $2(L-1)$ elements. There is one exception: for $N=4, L=3$ a particular symmetry occurs, resulting in $|SC(\gamma(4,3))|=6$, rather than 12.
\end{unnumberedtheorem}
As mentioned in the introduction to this section, the formal proof of this result is tedious, and is omitted.
We will give the results of our calculations using the program~\cite{JuanProgram}, and we assert that the formulae observed in the calculated examples continue to hold for general values of $N$ and~$L$.
We now explain the construction of the braid $\gamma(N,L)$. It has $L=N-1+2K$ factors, which come in three blocks:
\begin{enumerate}
\item first one factor $\Delta$
\item then $N-2$ factors which we call head-tail factors, because they correspond to the head-tails of $N-2$ ouroboroi, and finally
\item two Garside factors, repeated $K$ times, which we call the body-factors because they correspond to the bodies of all the ouroboroi.
\end{enumerate}
The first block requires no explanation. In order to present the other blocks, we introduce some notation.
For any fixed integer~$N$, we say a \emph{down-up sequence} is a writing of the numbers $1,2,\ldots,N$ permuted in such a way that it consists of a (possibly empty) decreasing sequence of numbers, followed by a (possibly empty) increasing sequence of numbers. For instance, for $N=10$, the sequences $(8\ 7\ 4\ 3\ 1\ 2\ 5\ 6\ 9\ 10)$ and $(9\ 8\ 5\ 3\ 2\ 1\ 4\ 6\ 7\ 10)$ are down-up sequences, whereas $(10\ 8\ 5\ 7\ 2\ 1\ 3\ 4\ 6\ 9)$ is not. We call the numbers occurring in such a sequence \emph{labels}, because we will use them to label strands of a braid.
For any subset $A\subseteq \{1,2,\ldots N\}$ there is an involution $\phi_A$ on the set of down-up sequences, defined as follows: if $S$ is a down-up sequence, then in the down-up sequence $\phi_A(S)$, every label belonging to~$A$ switches its position relative to all lower labels, whereas all labels in the complement of~$A$ retain their position relative to all lower labels. We say ``All labels in~$A$ switch sides''. For instance,
if $N=8$ and $A=\{2, 4, 6, 8\}$, then
$$
(8\ 7\ 4\ 3\ 2\ 1\ 5\ 6) \ \stackrel{\phi_A}{\longmapsto} \ (7\ 6\ 3\ 1\ 2\ 4\ 5\ 8) \ \stackrel{\phi_A}{\longmapsto} \ (8\ 7\ 4\ 3\ 2\ 1\ 5\ 6)
$$
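A down-up sequence is determined by recording, for each label, whether it lies in the decreasing or the increasing part, and $\phi_A$ simply flips this side for the labels in~$A$. A small Python sketch (an illustration only) implementing $\phi_A$ this way reproduces the example above and confirms that $\phi_A$ is an involution:

```python
def left_part(seq):
    """Labels in the decreasing prefix of a down-up sequence (incl. the minimum)."""
    i = 0
    while i + 1 < len(seq) and seq[i] > seq[i + 1]:
        i += 1
    return set(seq[: i + 1])

def phi(seq, A):
    """All labels in A switch sides relative to all lower labels."""
    left = left_part(seq) ^ set(A)   # symmetric difference flips the sides
    right = set(seq) - left
    return sorted(left, reverse=True) + sorted(right)

seq = [8, 7, 4, 3, 2, 1, 5, 6]
A = {2, 4, 6, 8}
print(phi(seq, A))                  # [7, 6, 3, 1, 2, 4, 5, 8], as in the text
assert phi(phi(seq, A), A) == seq   # phi_A is an involution
```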
Now in order to define our braid $\gamma(N,L)$ with $N$ strands and $L=N-1+2K$ factors, we first define a sequence of $L+1=N+2K$ down-up sequences. For the example $N=8$, see Figure~\ref{F:BadSymbolic} (ignoring the bold line segments for the moment).
The first sequence consists of the integers between 1 and $N$ congruent to 0 or 3 modulo~4 in descending order, followed by the remaining integers in ascending order. The second sequence (at the junction between the first block and the second block) is just the reverse of the first sequence (all labels have switched sides).
Concerning the second block: here each sequence is obtained from the preceding one by making all even labels switch sides -- except that in each step, one label behaves in an unexpected way. Specifically, starting from the second sequence, we obtain the third one by making all even labels \emph{except $N$} switch sides. We go from the third to the fourth sequence by making all even labels \emph{and label $N-1$} switch sides, and so on through labels $N, N-1, \ldots, 4, 3$. Thus the $N$th sequence (at the junction between the second and third block) is identical to the first one, except that labels 1 and 2 have exchanged their positions. (We recall that $N$ is even.)
The third block is simple again: we go from each down-up sequence to the next by making all even labels switch sides -- thus we simply go back and forth between two sequences $K$~times. This completes our description of a sequence of down-up sequences.
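The description above is easy to implement, and the claim about the junction sequence can then be verified by machine. In the following Python sketch (an illustration only, not the program~\cite{JuanProgram}), we generate the $N+2K$ down-up sequences and check, for several even~$N$, that the $N$th sequence is the first one with labels 1 and 2 exchanged:

```python
def first_seq(N):
    """Integers congruent to 0 or 3 mod 4 in descending order,
    then the remaining integers in ascending order."""
    down = sorted((k for k in range(1, N + 1) if k % 4 in (0, 3)), reverse=True)
    up = sorted(k for k in range(1, N + 1) if k % 4 in (1, 2))
    return down + up

def switch(seq, A):
    """All labels in A switch sides in the down-up sequence seq."""
    i = 0
    while i + 1 < len(seq) and seq[i] > seq[i + 1]:
        i += 1
    left = set(seq[: i + 1]) ^ set(A)
    return sorted(left, reverse=True) + sorted(set(seq) - left)

def duseqs(N, K):
    """The N + 2K down-up sequences underlying gamma(N, N-1+2K), N even."""
    evens = set(range(2, N + 1, 2))
    seqs = [first_seq(N)]
    seqs.append(switch(seqs[-1], set(range(1, N + 1))))  # Delta: full reversal
    for e in range(N, 2, -1):   # head-tail block, exceptional label e = N, ..., 3
        seqs.append(switch(seqs[-1], evens ^ {e}))       # evens minus/plus e
    for _ in range(2 * K):      # body block, repeated K times
        seqs.append(switch(seqs[-1], evens))
    return seqs

for N in (4, 6, 8, 10):
    seqs = duseqs(N, 2)
    assert len(seqs) == N + 4
    # the N-th sequence is the first one with labels 1 and 2 exchanged:
    assert seqs[N - 1] == [{1: 2, 2: 1}.get(x, x) for x in seqs[0]]
```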
\begin{figure}[htb]
\input{BadSymbolic}
\caption{A symbolic picture of the braid $\gamma(8,7+2K)$. A box \fbox{\it i} means that the label $i$ is behaving unexpectedly: an odd label~$i$ switching sides, or an even label~$i$ \emph{not} switching sides. Here is how to obtain a braid from this picture: the strands of the braid connect equal labels, except where indicated by bold lines.}\label{F:BadSymbolic}
\end{figure}
How do we obtain a braid from this sequence of down-up sequences? We recall that positive permutation braids (our generating set of the braid group) are in bijection with the permutations of $N$ symbols. The basic rule for constructing our braid is the obvious one: in each step from one down-up sequence to the next we use the positive braid which induces the given permutation. In other words, we connect equal labels by a string -- for an example with $N=6$, see Figure~\ref{F:BadBraid} with the values $k_1=K$ and $k_2=k_3=k_4=0$.
In the first and third block, this is indeed precisely how we construct the braid. However, in the second block there is one exception in every factor. Namely, in the factor in which label $i$ is behaving unexpectedly, there is no strand connecting label $i$ to label $i$ and no strand connecting label $i-1$ to label $i-1$; instead, the two are exchanged: there are two strands connecting labels $i$ and $i-1$. This exceptional behaviour is symbolically indicated in Figure~\ref{F:BadSymbolic}, and it is shown with black strands in Figure~\ref{F:BadBraid} (which should again be taken with $k_1=K$ and $k_2=k_3=k_4=0$). This completes our description of the braid~$\gamma(N,L)$.
\begin{figure}
\input{BadBraid}
\caption{The braid $\gamma(8,7+2K)$ is obtained with $k_1=K$ and $k_2=k_3=k_4=0$.}
\label{F:BadBraid}
\end{figure}
\begin{example} Let us write down the braids $\gamma(N,L)$ explicitly for $N=4$ and $N=6$ using Artin generators. Keeping in mind that $L=N-1+2K$, we obtain
\begin{itemize}
\item $\gamma(4,L)=\Delta_4$ . $\sigma_1 \sigma_3$ . $\sigma_1 \sigma_2 \sigma_3 \sigma_2 \sigma_1$ . $(\sigma_1 \sigma_2 \sigma_3 \sigma_2$ . $\sigma_2 \sigma_3 \sigma_2 \sigma_1 .)^K$
\item $\gamma(6,L)=\Delta_6$ . $\sigma_1 \sigma_3 \sigma_5 \sigma_4 \sigma_3$ . $\sigma_1 \sigma_2 \sigma_3 \sigma_2 \sigma_1 \sigma_4 \sigma_3 \sigma_2 \sigma_1 \sigma_5 \sigma_4 \sigma_3$ . $\sigma_1 \sigma_3 \sigma_5 \sigma_4 \sigma_3 \sigma_2 \sigma_1$~. $\sigma_1 \sigma_2 \sigma_1 \sigma_3 \sigma_2 \sigma_4 \sigma_3 \sigma_2 \sigma_1 \sigma_5$ . $(\sigma_1 \sigma_2 \sigma_3 \sigma_2 \sigma_5 \sigma_4 \sigma_3 \sigma_2 \sigma_1$ . $\sigma_1 \sigma_2 \sigma_3 \sigma_2 \sigma_4 \sigma_3 \sigma_2 \sigma_1 \sigma_5 .)^K$
\item For the sake of brevity, we are not discussing the case of an odd number of strands in this paper. There is, of course, an analogous construction. In the five strand case, for instance, we set $\gamma(5,L)=\Delta_5$ . $\sigma_1 \sigma_2 \sigma_3 \sigma_2 \sigma_1$ . $(\sigma_1 \sigma_3 \sigma_2 \sigma_1$~. $\sigma_1 \sigma_2 \sigma_1 \sigma_3 .)^\kappa$ $\sigma_1 \sigma_3 \sigma_2 \sigma_1 \sigma_4$ (with $L=3+2\kappa$). It can be shown that the sliding circuit set of $\gamma(5,L)$ has again exactly $2(L-1)L^{N-3}=2(L-1)L^2$ elements.
\end{itemize}
\end{example}
Next we have to investigate the properties of our braids $\gamma(N,L)$.
Here is a table, containing for each value of $N$ and $L$ which we have checked, the size of the sliding circuit set $|SC(\gamma(N,L))|$, as determined by the program~\cite{JuanProgram}. We did not check for $N=8$ with larger values of~$L$, or for any larger values of~$N$, because the calculations become too long. Apart from the previously mentioned exceptional pair $(N=4, L=3)$, our calculations confirm that the sliding circuit set has $2(L-1)L^{N-3}$ elements, consisting of $L^{N-3}$ cycling-orbits.
\begin{center}
\begin{tabular}{c||c|c|c|c|c|c|c|c|c|c|c|}
$N \backslash L$ & \ 3 \ & 5 & 7 & 9 & 11 & 13 & 15 & 17\\ \hline\hline
4 & (6) & 40 & 84 & 144 & 220 & 312 & 420 & 544 \\
6 & -- & 1000 & 4116 & 11664 & 26620 & 52728 & 94500 & 157216 \\
8 & -- & -- & 201684 & 944784 & & & \\
\end{tabular}
\end{center}
While the precise formulae for these sizes may be difficult to prove formally, there is a nice argument, involving subsurfaces and ouroboroi, for a lower bound:
\begin{lemma}\label{L:SClowerbound}
$$|SC(\gamma(N,L))|\ \geqslant \ 2(L-1)\cdot {K+N-3 \choose N-3} \ = \ 2(L-1)\cdot {\frac{L-5+N}{2} \choose N-3}$$
(recalling that $L=N-1+2K$). Thus for any fixed~$N$ and as a function of~$L$, the sliding circuit set grows at least like a polynomial of degree~$N-2$.
\end{lemma}
\begin{proof}
The key observation is that many other elements of the sliding circuit set of $\gamma(N,L)$ can be constructed by hand, in the following way. Let us look at the sequence of down-up sequences in Figure~\ref{F:BadSymbolic}, which illustrates the case $N=8$. We recall that the $N$th row is a down-up sequence, and that the following two down-up sequences are obtained from this one by making all even labels switch sides, twice. The resulting two Garside-factors are repeated $K$ times.
We can modify this symbolic picture:
in a similar way, we can replace rows number 3, 4, ... , $N-1$ each with three rows, where each row is obtained from the previous by making all even labels switch sides. Also, the resulting two Garside-factors can be repeated any number of times -- we will denote the number of repeats $k_{N-2}, k_{N-3},\ldots, k_2$.
For instance, the fourth row of labels in Figure~\ref{F:BadSymbolic} can be replaced ($\rightsquigarrow$) as follows:
\begin{tikzpicture}[scale=0.4]
\draw[line width = \Bdiag] (2,2+.5) -- (2+0.7,2+.5+0.3);
\draw[line width = \Bdiag] (3,2+.5) -- (3+0.7,2+.5+0.3);
\draw (1,2) node{8};
\draw (2,2) node{7}; \draw (2,2)+(-\bx,-.5) rectangle +(\bx,.5);
\draw (3,2) node{6}; \draw (3,2)+(-\bx,-.5) rectangle +(\bx,.5);
\draw (4,2) node{5};
\draw (5,2) node{2};
\draw (6,2) node{1};
\draw (7,2) node{3};
\draw (8,2) node{4};
\draw[line width = \Bdiag] (3,2-.5) -- (3,2-.5-0.3);
\draw[line width = \Bdiag] (4,2-.5) -- (4-0.6,2-.5-0.3);
\draw (10,2) node{$\rightsquigarrow$};
\draw[line width = \Bdiag] (13,4+.5) -- (13+0.7,4+.5+0.3);
\draw[line width = \Bdiag] (14,4+.5) -- (14+0.7,4+.5+0.3);
\draw (12,4) node{8};
\draw (13,4) node{7}; \draw (13,4)+(-\bx,-.5) rectangle +(\bx,.5);
\draw (14,4) node{6};
\draw (15,4) node{5};
\draw (16,4) node{2};
\draw (17,4) node{1};
\draw (18,4) node{3};
\draw (19,4) node{4};
\draw (20,3) node[right]{Even labels switch sides};
\draw (12,2) node{7};
\draw (13,2) node{5};
\draw (14,2) node{4};
\draw (15,2) node{1};
\draw (16,2) node{2};
\draw (17,2) node{3};
\draw (18,2) node{6};
\draw (19,2) node{8};
\draw (20,1) node[right]{Even labels switch sides};
\draw (12,0) node{8};
\draw (13,0) node{7};
\draw (14,0) node{6}; \draw (14,0)+(-\bx,-.5) rectangle +(\bx,.5);
\draw (15,0) node{5};
\draw (16,0) node{2};
\draw (17,0) node{1};
\draw (18,0) node{3};
\draw (19,0) node{4};
\draw[line width = \Bdiag] (14,0-.5) -- (14,0-.5-0.3);
\draw[line width = \Bdiag] (15,0-.5) -- (15-0.6,0-.5-0.3);
\draw[decorate, decoration=coil] (30.5,-0.2) -- node[right=0.1cm]{$k_5\times$} (30.5,4);
\end{tikzpicture}
As a further modification, the variable $K$ counting the repeats of the last two Garside factors will be replaced by $k_1$. The result of this modification, in the case $N=6$, is shown in Figure~\ref{F:BadBraid}. We will denote $\gamma(N,k_1,\ldots,k_{N-2})$ the resulting braid.
\begin{claim}
If $k_1,\ldots,k_{N-2}$ and $k'_1, \ldots, k'_{N-2}$ are non-negative integers with $k_1 + \ldots + k_{N-2} = k'_1 + \ldots + k'_{N-2}$, then the braids $\gamma(N,k_1,\ldots,k_{N-2})$ and $\gamma(N,k'_1,\ldots,k'_{N-2})$ are conjugate.
\end{claim}
Before proving the claim, we observe an immediate consequence: $SC(\gamma(N,N-1+2K))$ has at least as many elements as there are ways of writing $K$ as a sum of $N-2$ non-negative integers, i.e.
${K+N-3 \choose N-3}$ elements. Moreover, applying the cycling operation to these elements repeatedly, we cycle through the $L-1$ non-$\Delta$ factors twice before returning to the starting point. Therefore we obtain $2(L-1){K+N-3 \choose N-3}$ elements of the sliding circuit set. This completes the proof of Lemma~\ref{L:SClowerbound}, modulo the claim.
In order to prove the claim, we observe that $\gamma(N,k_1,\ldots,k_{N-2})$ has $N-2$ ouroboroi, which are nested inside each other. This is best seen in Figure~\ref{F:BadBraid}. The innermost ouroboros contains the arcs of the braid connecting labels 1 or 2 (the red arcs in the figure), and its head-tail is the last factor of the ``body-block'' (also indicated in the figure). The next ouroboros out contains arcs connecting labels 1, 2 and 3 (red and yellow arcs in the figure), and its head-tail is indicated in the figure. Note that the head-tail of the ``red'' ouroboros is entirely contained within the ``red-yellow'' ouroboros. And so on, through to the ``red-yellow-green-blue'' ouroboros.
Now ``slithering ouroboroi'' corresponds to modifying the coefficients $k_1,\ldots,k_{N-2}$ while keeping their sum constant. How can we modify these coefficients using conjugations? Let's look at Figure~\ref{F:BadBraid} again.
Conjugating $\gamma(6,k_1,k_2,k_3,k_4)$ by $\sigma_3^2$ yields $\gamma(6, k_1+1, k_2-1, k_3, k_4)$. That is, conjugating by a positive full twist involving the strands inside the innermost ouroboros increases $k_1$ by~1 and decreases $k_2$ by~1. Similarly, we observe that conjugating $\gamma(6,k_1,k_2,k_3,k_4)$ by $(\sigma_2\sigma_3\sigma_2)^{-2}$ yields $\gamma(6,k_1, k_2+1,k_3-1,k_4)$. That is, conjugating by the \emph{negative} full twist involving the strands inside the second-slimmest ouroboros increases $k_2$ by~$1$ and decreases $k_3$ by~1. More generally, conjugating $\gamma(N,k_1,\ldots,k_{N-2})$ by the full twists involving the strands inside the $i$th-innermost ouroboros either has the effect of increasing $k_i$ by~1 and decreasing $k_{i+1}$ by~1, or the opposite effect, depending on the parity of~$i$.
This completes the proof of the claim, and hence of Lemma~\ref{L:SClowerbound}.\end{proof}
\subsection{An example with eccentric ouroboroi}\label{SS:EccentricExample}
Consider the rigid pseudo-Anosov braid on five strands, with infimum~0 and supremum~$L$ (where $L$ is an odd number with $L\geqslant 3$)
$$\delta(5,L)=\sigma_1 \sigma_3 . \sigma_1 \sigma_2 \sigma_3 \sigma_2 \sigma_1 \sigma_4 . \sigma_1 \sigma_2 \sigma_4 \sigma_3 \sigma_2 \sigma_1 (. \sigma_1 \sigma_2 \sigma_3 \sigma_2 . \sigma_2 \sigma_3 \sigma_2 \sigma_1)^{\frac{L-3}{2}}$$
Computer calculations show that
$$
|SC(\delta(5,L))| \ = \ 2L^3
$$
We won't prove this fact, but we will prove that the sliding circuit set has at least cubic growth as a function of $L$.
Indeed, it contains in particular the braids
\begin{align*}
\delta_{a,b,c}:= \ & \sigma_1 \sigma_2 \sigma_3 \sigma_2 \sigma_4 . \sigma_2 \sigma_4 \sigma_3 \sigma_2 \sigma_1 . (\sigma_1 \sigma_2 \sigma_3 \sigma_2 . \sigma_2 \sigma_3 \sigma_2 \sigma_1 .)^a \sigma_1 \sigma_3 .\\
& (\sigma_1 \sigma_2 \sigma_3 \sigma_2 . \sigma_2 \sigma_3 \sigma_2 \sigma_1 .)^b \sigma_1 \sigma_2 \sigma_3 \sigma_2 \sigma_1 . \sigma_1 \sigma_2 \sigma_3 \sigma_2 \sigma_1 (. \sigma_1 \sigma_2 \sigma_3 \sigma_2 . \sigma_2 \sigma_3 \sigma_2 \sigma_1)^c
\end{align*}
for all triples of integers $(a,b,c)$ with $a,b,c\geqslant 0$ and $2a+2b+2c=L-5$, and all their cyclic conjugates. This yields a lower bound of
$$
|SC(\delta(5,L))|\ \geqslant \ L\cdot{\frac{L-5}{2}+2 \choose 2} \ = \ \frac{L^3-4L^2+3L}{8}
$$
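As a sanity check (ours, not part of the original argument), the binomial-coefficient bound above can be evaluated numerically: counting the triples $(a,b,c)$ with $2a+2b+2c=L-5$ directly, multiplying by the $L$ cyclic conjugates, and comparing with the closed cubic form.

```python
# Sanity check of the lower bound L * C((L-5)/2 + 2, 2) for odd L >= 5:
# count triples (a, b, c) with a, b, c >= 0 and 2a + 2b + 2c = L - 5,
# then multiply by the L cyclic conjugates.
from math import comb

def lower_bound(L):
    assert L % 2 == 1 and L >= 5
    triples = sum(1 for a in range(L)
                    for b in range(L)
                    for c in range(L) if 2 * (a + b + c) == L - 5)
    return L * triples

for L in range(5, 40, 2):
    assert lower_bound(L) == L * comb((L - 5) // 2 + 2, 2)
    assert lower_bound(L) == (L**3 - 4 * L**2 + 3 * L) // 8
```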
In order to simplify notation in the following discussion, let us rewrite
$$\delta_{a,b,c} \ = \ X \ . \ S^a \ . \ Y \ . \ S^b \ . \ Z \ . \ S^c$$
Now, one can inspect $\delta_{a,b,c}$ for the presence of ouroboroi. One finds that there are three ouroboroi which are again nested.
There is an innermost, eccentric, one, involving the third and fourth strands, with its head-tail in the factor~$Z$. The next one out involves strands number 2, 3, and 4, and it is round with its head-tail in the factor~$Y$. The outermost one is eccentric again, involves strands 1, 2, 3, and 4, and has its head-tail in the factor~$X$.
We did not manage to generalise this construction to obtain a family of braids with $N$~strands and sliding circuit sets of size $2L^{N-2}$.
\subsection{Absence of ouroboroi: counterexamples for short braids}\label{SS:NoOuroboroi}
We recall the optimistic intuition underlying our Commutativity Conjecture: ``Having a large sliding circuit set is a \emph{geometric} property, it comes from slithering ouroboroi''.
In this subsection we give some examples to show that the most simple-minded interpretations of this intuition cannot be true.
\begin{example}
There is one fairly obvious way of showing the limits of the intuition: if $x$ is a rigid braid, then for any positive integer~$p$ we have $|SC(x^p)|\geqslant |SC(x)|$, because the $p$th power of any element of $SC(x)$ belongs to~$SC(x^p)$. In fact, for all the examples discussed in previous sections, $|SC(x^p)|= |SC(x)|$. Now, for $p\geqslant 2$, the braid~$x^p$ has no ouroboroi. Indeed, one naturally obtains not a snake biting its own tail, but a snake biting the tail of a second snake, and so on until the $p$th snake bites the first snake's tail again.
\end{example}
\begin{example}
Here are some examples of rigid pseudo-Anosov braids with unexpected, and more worrying, behaviour.
In each case, we have a fairly large sliding circuit set, but no, or not enough, ouroboroi. Now, in each case, there \emph{are} some features reminiscent of ouroboroi -- different parts of the braid commuting past each other, and low translation length in the curve complex. Also, all these examples occur for fairly short braids, and in each case there is no obvious way to construct from them infinite families of arbitrarily long braids with huge sliding circuit sets. Still, these examples pose definite challenges to our Commutativity Conjecture. (For better readability, we are suppressing the letter~$\sigma$ from our notation.)
\begin{itemize}
\item for $N=5, L=1$: $3 2 1 4 3$.
Here $|SC|=2$. The definition of an ouroboros doesn't really make sense for such a short braid.
\item for $N=5, L=2$: $1 4 . 1 2 3 4 3 2 1$. Here $|SC|=10$ (with 5 orbits under cycling). It is also hard to see anything like an ouroboros.
\item for $N=5, L=3$: $\Delta . 2 4 . 2 1 3 2 4 3 2 1$. Here $|SC|=36$ (with 9 orbits under cycling). We see one ouroboros containing four strands, but not enough in order to explain the large sliding circuit set. Moreover, in many elements of the sliding circuit set, e.g. $\Delta . 1 2 3 2 1 4 3 . 3 2 4$, we do not see any ouroboroi.
\item for $N=5, L=4$: $1 2 1 3 2 4 3 . 3 4 3 . 3 2 1 4 3 2 1 . 1 2 1$. Here $|SC|=100$ (with 25 orbits under cycling). There are no ouroboroi. (There are several subsurfaces with roughly ouroboros-like behaviour, but nothing that would really make us expect such a large sliding circuit set.)
\item for $N=6, L=3$: $3 2 1 4 3 5 . 1 2 3 2 1 4 3 2 5 . 2 1 3 2 4 3 5 4 3$. Here $|SC|=234$ (with 78 orbits under cycling).
This is one of the very few examples of rigid pseudo-Anosov braids which we know of where the bound $|SC|\leqslant 2L^{N-2}$ does not hold -- indeed, $234>162=2L^{N-2}$; worse, there are no ouroboroi. This example is very strange.
\item similarly, for $N=7, L=2$: $2 1 3 2 1 6 5 . 1 2 1 3 2 4 5 4 3 2 6$. Here $|SC|=92$ (with 46 orbits under cycling), and there are no ouroboroi. Notice that $92>2L^{N-2}=64$.
\end{itemize}
\end{example}
\section{Conjectures concerning the structure of sliding circuit sets}\label{S:Conjectures}
\subsection{Polynomial bounds on the sliding circuit set}
We recall that the remaining difficulty of the conjugacy problem stems essentially from the fact that we do not know a polynomial bound for the size of the sliding circuit set.
For many rigid braids, the sliding circuit set contains only the braid itself, plus the obvious conjugates (those obtained by cyclic permutation of the factors and by conjugation by Garside's~$\Delta$). However, sometimes the sliding circuit set is much larger, as seen in the examples in the previous section. As an answer to Question~\ref{Q:SCpolynBound?} we propose:
\begin{conjecture}\label{C:deltaWorst}
The family of braids~$\gamma(N,L)$ exhibited in Section~\ref{SS:NestedOuroboroi} is the worst possible:
there exists a constant $C$ such that for any rigid braid with $N$ strands and Garside-length~$L$, the sliding circuit set has at most $C\cdot L^{N-2}$ elements. Moreover, this bound holds with $C=2$ for
sufficiently large values of~$L$.
\end{conjecture}
Note that, for braids with \emph{four} strands, we already know from work of Sang Jin Lee~\cite{SJLeeThesis} that even the \emph{super summit set} of four-strand braids (which contains the sliding circuit set) has quadratically bounded size, even for non-pseudo-Anosov braids. (The statement that the sliding circuit set of four-strand braids, in the \emph{dual} Garside structure, is quadratically bounded was also proved independently by Calvez and Wiest~\cite{CalvezWiestPA}.)
\subsection{Commutativity conjecture}
This is the main conjecture of this paper: large sliding circuit sets should come from some kind of internal commutativity of the braid, and this internal commutativity is a geometric fact. Our definition of an ouroboros is our attempt to capture this idea in a precise mathematical framework.
Now, while the examples in Section~\ref{SS:NoOuroboroi} are worrying, we only found such bad behaviour for very short braids, despite checking many millions of randomly generated examples (using random walks in the Cayley graph with Artin generators). Obviously, this may just mean that the really bad examples become exceedingly rare as the length increases. Still, it is tempting to believe that for sufficiently long braids, the property of having very big sliding circuit sets is geometric:
\begin{conjecture}\label{C:BigSCisGeometric}
For every~$N$ (with $N\geqslant 5$), there exists an integer $L_N$ with the following property:
if~$x\in B_N$ is a braid which maximises $|SC(x)|$ among all rigid pseudo-Anosov braids in~$B_N$
of length~$L$, with $L\geqslant L_N$, then $x$ has $N-2$ ouroboroi, and $SC(x)$ is obtained from $x$ by sliding ouroboroi.
\end{conjecture}
\begin{remark}
\begin{enumerate}
\item Conjecture~\ref{C:BigSCisGeometric} is still a little bit vague -- we haven't properly defined the property of $SC$ being obtained by sliding ouroboroi. Nevertheless, a proof of any reasonable interpretation of it would imply Conjecture~\ref{C:deltaWorst}.
\item The Commutativity Conjecture is an incarnation of the general philosophy that Garside theory is a quasi-geometric theory. Another incarnation of this principle is the conjecture of Calvez-Wiest~\cite{CalvezWiestCAL} that the additional length complex of~$B_N$, equipped with its standard Garside structure, is quasi-isometric to the curve complex of the $N$-punctured disk.
\end{enumerate}
\end{remark}
\subsection{Cubing conjecture}
We finish with some comments on the \emph{uniform} conjugacy problem.
One step in this direction might be to find an algorithm for the conjugacy problem whose computational complexity depends polynomially not only on the length~$L$ but also on the number of strands~$N$.
The classical Garside-theoretic approach is hopeless for this purpose, as we know that the size of the sliding circuit set \emph{can} depend exponentially on~$N$.
However, calculating the full sliding circuit set may not be necessary for solving the conjugacy problem.
In~\cite{BirGebGM2}, the Ultra Summit Set of a braid~$x$ (which, for rigid braids with at least two non-$\Delta$ factors, coincides with the sliding circuit set~\cite{Gebhardt-GM}) is considered as a graph, with vertices corresponding to elements of $USS(x)$ and edges corresponding to minimal conjugations.
We wish to understand the overall shape of this space -- let us call it the sliding circuit set graph.
Unfortunately, the deep results from the paper~\cite{BirGebGM2} are not sufficient for this purpose. We point out one partial result:
\begin{proposition}[Linearly bounded diameter]\label{L:LinBoundedDiam}
For any fixed number of strands~$N$, there is a constant $C_N$ such that any two elements of~$SC(x)$ (for $x\in B_N$) are related by a sequence of at most $C_N\cdot \operatorname{length}(x)$ elements of $SC(x)$, with each element obtained from its predecessor by conjugation by a simple element.
\end{proposition}
\begin{proof}
This is an immediate consequence of the linearly bounded conjugator length in mapping class groups \cite{MasurMinsky2,Tao} and the convexity of the sliding circuit set~\cite[Proposition 7]{Gebhardt-GM}.
\end{proof}
Also, the procedure of slithering multiple ouroboroi so as to change their relative position corresponds to conjugations by mutually commuting braids. In other words, slithering ouroboroi translate to (1-skeleta of) cubes in the sliding circuit set graph. In the examples presented in Section~\ref{S:Examples}, we have many cubes gluing together to form large cubes -- the side-length of these cubes grows with~$L$, and their dimension grows with~$N$.
Our previous observations and conjectures suggest that the overall structure of the sliding circuit set is essentially that of a big cube, or a limited number of big cubes. This idea is certainly simplistic, as we know from examples presented in~\cite{Gebhardt-GM}, or from Proposition~5.3 of \cite{CalvezWiestPA} that there may be \emph{some} branching in the sliding circuit set. Nevertheless, the following vague conjecture should have some truth to it:
\begin{conjecture}[Cubing conjecture]
If the sliding circuit set is large, then large regions of the sliding circuit set graph should be cubical.
\end{conjecture}
A very fast algorithm for solving the conjugacy problem should pick out some small preferred subset of the sliding circuit set. It could do so by holding one ouroboros in place, and slithering the other ouroboroi upwards until they are completely jammed against the fixed one, in a corner of a large cube of the sliding circuit set graph. Alternatively, one could end up in a different cubical region, or in a smaller, more interesting, region of the sliding circuit set.
## anonymous one year ago

The midpoint of line UV is (5, -11). The coordinates of one endpoint are U(3, 5). Find the coordinates of endpoint V. (((PLEASE HELP IVE BEEN STUCK ON THIS FOR 20 MINS)))

1. anonymous: @AlexandervonHumboldt2 @I_Luv_Skittles @Defiance @freckles
3. anonymous: will medal and fan.
4. freckles: So for the midpoint we need to think averages. The average of two numbers is the sum of those numbers divided by 2. The x-coordinate of the midpoint is equal to the sum of the x-coordinates of the endpoints divided by 2, and the y-coordinate of the midpoint is equal to the sum of the y-coordinates of the endpoints divided by 2.
5. freckles: $5=\frac{3+x}{2}$ and $-11=\frac{5+y}{2}$
6. anonymous: Would help but seems freckles has it ;).
7. freckles: Solve the equations.
8. freckles: I forgot to make the word endpoint plural, oh well.
9. anonymous: What is x??? How do I do 3+x if I don't know what x is? Where did you get x from??
10. freckles: Did you read the descriptions above?
11. freckles: The "x-coordinate of the midpoint" is equal to the "sum of the x-coordinates of the endpoints" "divided by 2".
12. freckles: V is a point in the form (x,y).
13. freckles: The x-coordinate of V is x and the y-coordinate of V is y, since I chose V to be the point (x,y).
14. anonymous: I get that now but I don't understand the division part??? Can you just write everything out for me to read please.
15. freckles: I'm sorry, I thought I did... Can you tell me what you don't understand? Like the division? To find the average of two numbers, you take the sum of those numbers and then you divide by 2.
16. anonymous: They're numbers on a graph tho so I don't get it.
17. freckles: So if I told you the average of x and 3 is 5? This doesn't make sense, how I got this: $\frac{x+3}{2}=5$?
18. anonymous: You're saying the answer is 5?
19. anonymous: I'm really bad at this kinda stuff.
20. freckles: No, you are looking for V, and V is a point on the cartesian plane.
21. freckles: You are looking for the point (x,y).
22. anonymous: So the x cord is 5?
23. freckles: You are given by your question the following to find x and y: the average of x and 3 is 5; the average of y and 5 is -11.
24. freckles: How did you get x is 5?
25. anonymous: So it's (5, -11)?
26. freckles: The midpoint is not what we are looking for.
27. freckles: I already said we are looking for V, one of the endpoints, which I called (x,y).
28. anonymous: This is taking so long.. can you just explain everything and help me with the answer.
29. freckles: I'm trying to... so what do you not understand about the following equations: $5=\frac{3+x}{2}$ and $-11=\frac{5+y}{2}$?
30. freckles: All I did there was apply the average thing I mentioned.
31. anonymous: How do I add 3 to x and 5 to y..
32. freckles: Because that is all the midpoint is: averaging.
33. freckles: You solve for x and y.
34. freckles: Multiply both sides by 2.
35. anonymous: So it's 3 and 12.5?
36. freckles: Don't understand how you got that. Did you try multiplying both sides of the equation(s) by 2?
37. anonymous: I did then divided them.
38. freckles: $5=\frac{3+x}{2}$; multiply both sides by 2: $10=3+x$; now isolate x.
39. anonymous: Can you tell me the answer, I don't get it, my teacher will help me tomorrow.
40. freckles: So you don't want to try to subtract 3 on both sides to isolate x?
41. anonymous: It would just be x over 2 ....
42. freckles: You do understand we had 5=(3+x)/2, then I asked you to multiply 2 on both sides giving us 10=3+x. The last step to find the x-coordinate of V is to subtract 3 on both sides: 10-3=x. Why are you doing something different from what I suggest?
43. freckles: Now try solving for the y-coordinate of V: -11=(5+y)/2.
44. freckles: It is the same kind of steps we went through to solve for x.
45. anonymous: My computer stopped working but can you just tell me it.
46. anonymous: I've spent over an hour on one problem...
47. anonymous: @freckles
48. anonymous: ANYONE PLEASE :( I'VE SPENT AN HOUR+ ON THIS QUESTION.... @*louisa* @perii224 @AlexandervonHumboldt2 @mrqueest
49. anonymous: @Vocaloid
52. anonymous: Can you just give me the answer :/ I really don't get it, my teacher will help me tomorrow. I've spent over an hour on this...
53. Vocaloid: Start by multiplying both sides by 2: (x+3)/2 * 2 = ?
54. anonymous: 4
55. Vocaloid: Nope, not quite... hint: A/2*2 = A, so (x+3)/2*2 = ?
56. anonymous: 2*2 is 4.
57. Vocaloid: Does this make it more clear? [drawing]
58. anonymous: I DON'T KNOW WHAT THIS MEANS AHHHHHH
60. anonymous: Um, I don't know what you're talking about. Can you just tell me it or I'll get it wrong? I don't have the time to spend any more time on this one question...
61. anonymous: @Vocaloid
Antor_1809032's blog

By Antor_1809032, history, 7 weeks ago

Codeforces Round #788 (Div. 2), Problem B.

I used O(n) complexity code, but I got TLE on test 35, while my friend's submission during the contest had a total of 34 test cases. This is my submission, which got TLE on test case 35, and this is the last case for my friend's submission. What is the problem??

- 7 weeks ago, 0: In my submission, the 35th test is: t = 10^5 and, I guess, every test case looks the same: 2ab26 a b c d e f g h i j k l m n o p q r s t u v w x y z
- 7 weeks ago, +3: Hello. I tried a lot with your code, but nothing worked. I finally removed the "scanf" and "printf" and added "cin" and "cout" with the fast I/O method, and it worked! So either use (cin, cout) with fast I/O, or (scanf, printf) without fast I/O. Fast I/O: ios::sync_with_stdio(NULL); cin.tie(NULL); cout.tie(NULL);
- 7 weeks ago, +8: more likely skill issue
Tumor necrosis factor ligand superfamily member 9 also known as 4-1BB ligand or 4-1BBL or CD137L is a protein that in humans is encoded by the TNFSF9 gene.
4-1BBL is a type 2 transmembrane glycoprotein that is found on APCs (antigen-presenting cells) and binds to 4-1BB (also known as CD137), which is expressed on activated T lymphocytes. The 4-1BB/4-1BBL complex belongs to the TNFR:TNF superfamily.
Structure of 4-1BB/4-1BBL complex
The 4-1BB/4-1BBL complex consists of three monomeric 4-1BBs bound to a trimeric 4-1BBL. Each 4-1BB monomer binds to two 4-1BBLs via cysteine-rich domains (CRDs), and the interaction between 4-1BB and the second 4-1BBL is required to stabilize the complex. According to a detailed study of the 4-1BB/4-1BBL binding interface, the contact with 4-1BBL is largely made up of amino acids from the dynamic loops of CRD2 and the β sheet of CRD3 of 4-1BB. CRD2 amino acids (T61, Q67, and K69) interact with the AA′ loop (Y110 and G114) and the intra-H-strand loop (Q227 and Q230) of 4-1BBL to form various hydrogen bond interactions.
Application to cancer immunotherapy
Studies on the poorly immunogenic Ag104A sarcoma and the extremely tumorigenic P815 mastocytoma provided the first systematic evidence that anti-4-1BB antibodies have potent anti-tumor effects. Administering anti-4-1BB to mice with these tumors was shown to substantially inhibit tumor growth by increasing CTL activity. In the years that followed, further studies confirmed the ability of 4-1BB signaling to inhibit tumor growth.
The interaction between 4-1BB and 4-1BBL provides costimulatory signals to a variety of T cells, which can be exploited in cancer immunotherapy. Together with a signal provided by a T-cell receptor, the 4-1BB/4-1BBL complex can costimulate CD4+ and CD8+ T cells in mice, leading to their activation. The activation of CD8+ T cells is essential in antitumor immunity. With the help of T-cell receptor signals, the 4-1BB/4-1BBL complex can also costimulate human CD28− T cells and trigger their proliferation. Unlike the activation of CD8+ T cells, the proliferation of CD28− T cells can have negative effects in cancer and other diseases. This pathway can therefore be targeted for immunotherapy.
See also
CD137
References
External links
We endorse selected projects and organisations which take a..
We carry out conferences and high-level science-policy dialogues on topics related to urban health and wellbeing. Depending on the thematic focus, various stake..
Your research project, programme or institute can partner with us and our network. To be associated with or endorsed by our programme contact us.
The Science-Policy workshop, Modeling urban health and wellbeing: communicating results for policy and action, organized by the ICSU-UNU-IAMP Programme ..
Dysschema mariamne is a tiger moth in the family Erebidae. The scientific name of this species was first validly published in 1838 by Geyer.
Dysschema
The Dennis R series is a rear-engined coach chassis built by Dennis (and later TransBus/Alexander Dennis); it was unveiled in 1999.
External links
Product description in Alexander Dennis Website
R
Bus chassis
'He's going for the ball!': AFL pair divided over brutal bump and how to exile it
by David Zita
David King and Leigh Montagna have debated how to penalise the head-high bump in footy. (Source: FOX SPORTS)
Two-time premiership player David King and two-time All Australian Leigh Montagna have clashed over the intentions of St Kilda's Ben Long before his brutal bump on Fremantle's Sean Darcy, as well as what an adequate deterrent is for the action.
Long will face the AFL Tribunal after the head-high bump was deemed rough conduct, careless, severe impact and high contact.
With several instances of the head-high bump on the weekend, King implored the AFL to crack down on the action once the Round 6 reports had been resolved.
"Something's got to give with the AFL table regarding the bump if we are serious about changing something," he said on First Crack.
"The table about working out if you've got one week or a fine or four weeks, because it is out of kilter with what the actual demands of the game are and what we should demand of the MRO.
"If you have a look at the hits on the weekend and there was some severe head trauma on the weekend and I'm massive on this.
"If you look at any sport around the world, the investment into concussion now would be the No. 1 spend. In the NFL they allocated half a billion dollars for half a decade's worth of compensation for past players who are struggling with things like memory loss, confusion, personality changes, suicidal tendencies.
"I just think we wait to see what the outcome is all the time, but the player is still getting head trauma... you've got to start penalising the action.
Following King's comments, Montagna took issue with the logic behind the idea, which led to a lengthy debate.
READ THE FULL TRANSCRIPT BELOW:
Leigh Montagna: Ben Long and Dylan Shiel aren't going well I'm going to try hit someone in the head because I'm only going to get two or three weeks... isn't it a deterrent that they're already missing two or three weeks?
David King: Two weeks is not changing behaviour, we had five actions this weekend, this is when we're saying we're clamping down, the head's sacrosanct, we've adjusted and amended the table. To what?
LM: So you think it's gotta be more the action than the actual outcome? Which I agree with.
DK: If you're not serious about it, don't worry about it. But this will cost us hundreds of millions of dollars down the track. Mark my words this will be the most expensive thing in AFL football in ten years time if we do nothing about it now. Either make a choice that we make change, and I'm not saying before these guys go up, we don't have a scapegoat lets wait until they're past, but something has to give. If you don't want to change it then wear what comes.
LM: Ben Long was going for that ball right until the last second... it's still an instinctive thing.
DK: They make a choice to go past the football and make contact, at that point is where if they make that choice then you're out of the game for three weeks. If you look at the Ben Long one again, he sets his feet and almost lifts ...
LM: He's going for the ball there.
DK: Not a chance, his hands aren't even out.
LM: Kingy, that's the perfect technique for a ground ball, we're teaching players ...
DK: Watch him lift!
LM: I agree with you the action should be suspended, I just don't think they should be getting four, five or six weeks. You've still gotta give the benefit of the doubt that it's a split second action. I just don't think four weeks versus two is going to change behaviour... you can make it 10 weeks and you're still going to have blokes in the spur of the moment.
DK: So you're in the camp that I'm comfortable with concussion being part of the game.
LM: (scoffs) No, I'm not saying that.
DK: The only way to run it out of the game is to stamp it out with a huge penalty so players stop the action.
LM: They're getting suspended anyway, why is six weeks going to change behaviour?
DK: Because coaches are going to say guys, you cant go past the football, we can't have you on the sidelines, we're trying to win a premiership here.
LM: I just don't think two weeks or six weeks is going to change behaviour.
DK: I'm not saying six weeks by the way.
LM: You're disincentivized to do it anyway but you've still got to give that slight benefit of the doubt, that's why it's not intentional, it's careless.
\section{Introduction}
The development of computer science
has transformed the practice of mathematics.
The practical algorithms designed by computer scientists have
profoundly changed how many mathematical conjectures are proposed,
studied, and resolved.
For example, the fields of
\emph{satisfiability checking} and \emph{symbolic computation}
have each been paradigm-shifting in this way. They have allowed
mathematicians the ability to solve problems much larger than
ever dreamt of in the past, the ability to pose and solve
entirely new kinds of mathematical conjectures, and the ability to
verify their solutions to unprecedented levels.
Despite a common background and over a hundred years of combined successful progress,
these two fields have developed mostly
independently of each other and have little common overlap~\cite{abraham2015building}.
It is in the interest of the working mathematician or computer scientist
to have familiarity with the techniques of these fields, as they
have broad (and often surprising) applicability.
This article provides an overview of these fields with an emphasis on how the
techniques of each field have been applied to resolve
mathematical conjectures---and how combining the techniques of each field
has resolved conjectures and solved problems that were out of reach of both fields.
\paragraph{Satisfiability checking}
The Boolean satisfiability (SAT) problem asks if it is possible to
assign the variables in a Boolean logic expression in such a way
that the expression becomes true. In the 1970s, the Cook--Levin
theorem demonstrated that the SAT problem is NP-complete, resulting
in pessimism that SAT problems are infeasible to solve in practice.
Despite this, research in the engineering of SAT solvers has
discovered algorithms and heuristics capable of solving enormous SAT instances
that cannot currently be solved by any other method. This
``SAT revolution'' has had dramatic consequences for hardware and
software designers who now use SAT solvers on a daily basis~\cite{vardi2014boolean}.
In fact, SAT solvers have become so successful
that Heule, Kullmann, and Marek~\cite{heule2017solving}
call them the ``best solution in most cases''
for performing large combinatorial searches.
Recently SAT solvers have been spectacularly applied to a number of
long-standing mathematical problems including
the Erd\H os discrepancy conjecture (open for 80 years)~\cite{konev2015computer},
the Boolean Pythagorean triples conjecture (open for 30 years)~\cite{heule2017solving},
and the determination of the fifth Schur number (open for 100 years)~\cite{heule2018schur}.
We briefly outline how SAT solvers were successful on these problems
in Section~\ref{sec:sat}.
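To give a flavour of how such a problem reaches a SAT solver, consider the Boolean Pythagorean triples problem: variable $i$ records the colour of the integer $i$, and each triple $a^2+b^2=c^2$ contributes two clauses forbidding the two monochromatic colourings. The following sketch is ours, not taken from the cited papers (the function names are illustrative); it generates the CNF in the standard DIMACS format and brute-forces tiny instances, whereas the real instances are handed to a SAT solver.

```python
# Sketch: encoding the Boolean Pythagorean triples problem as CNF.
# Variable i is true iff integer i is coloured red (false = blue).

def pythagorean_triples(n):
    """All triples (a, b, c) with a <= b <= c <= n and a^2 + b^2 = c^2."""
    return [(a, b, c) for a in range(1, n + 1)
            for b in range(a, n + 1)
            for c in range(b, n + 1) if a * a + b * b == c * c]

def to_dimacs(n):
    """DIMACS CNF: for each triple, clauses (a v b v c) and (-a v -b v -c)."""
    triples = pythagorean_triples(n)
    lines = [f"p cnf {n} {2 * len(triples)}"]
    for a, b, c in triples:
        lines.append(f"{a} {b} {c} 0")        # not all blue
        lines.append(f"{-a} {-b} {-c} 0")     # not all red
    return "\n".join(lines)

def brute_force_satisfiable(n):
    """Exhaustive check, feasible only for small n."""
    triples = pythagorean_triples(n)
    for colouring in range(1 << n):
        if all(0 < sum((colouring >> (x - 1)) & 1 for x in t) < 3
               for t in triples):
            return True
    return False
```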
Despite these successes, SAT solvers are known not to perform well
on all kinds of combinatorial searches, such as those
that require advanced mathematics. For example,
Arunachalam and Kotsireas~\cite{arunachalam2016hard}
have shown that searching for mathematical
objects defined by autocorrelation
relationships is hard for current SAT solvers.
Similarly, Van Gelder and Spence~\cite{van2010zero} have shown
that proving the nonexistence of certain combinatorial designs
(even some that have intuitively very easy nonexistence proofs)
produces small but very difficult instances for SAT solvers.
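To make the problem concrete: a SAT instance in conjunctive normal form is a conjunction of clauses, each a disjunction of literals, and a solver searches for a satisfying assignment. The following minimal brute-force checker is purely illustrative (real solvers use conflict-driven clause learning rather than enumeration); it uses the common DIMACS-style convention of signed integers for literals:

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force SAT check. Clauses are lists of nonzero ints:
    k means variable k is true, -k means variable k is false."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x2) and (not x2 or x3) is satisfiable
assert satisfiable([[1, 2], [-1, 2], [-2, 3]], 3)
# x1 and (not x1) is not
assert not satisfiable([[1], [-1]], 1)
```

This exponential enumeration is exactly what modern solvers avoid through learning and propagation, which is why they scale to instances with millions of variables.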
\paragraph{Symbolic computation}
Symbolic computation or computer algebra is the branch of computer science concerned
with manipulating algebraic expressions and other mathematical objects.
It has been studied for over sixty years and its successes have led
to the development of computer algebra systems (CASs) that can
now automatically solve many theoretical and practical mathematical
problems of interest.
For example, a modern computer algebra system
has functionality for things such as
Gr\"obner bases, cylindrical algebraic decomposition,
lattice basis reduction, linear system solving,
arbitrary and high precision arithmetic,
interval arithmetic, linear and nonlinear optimization,
Fourier transforms, Diophantine solving,
computing automorphism groups, graph algorithms like
determining if a graph has a Hamiltonian cycle, and many other basic
operations like computing the derivative of a function.
Computer algebra is widely used in engineering and science. For example,
the 1999 Nobel prize in physics was awarded to Gerardus 't Hooft
and Martinus J.~G.~Veltman for using computer algebra to place
particle physics on ``a firmer mathematical foundation''.
Computer algebra has also been used to resolve a number of
long-standing mathematical conjectures. Three well-known
examples of this are the alternating sign matrix conjecture
(open for 15 years)~\cite{zeilberger1996proof},
the Mertens conjecture (open for 100 years)~\cite{odlyzko1985disproof},
and the Kepler conjecture (open for nearly 400 years)~\cite{lagarias2011kepler}.
We briefly discuss how computer algebra
was used to solve them in Section~\ref{sec:cas}.
Despite these successes, computer algebra systems are not optimized
for all types of problems. In particular, they are typically
not optimized to perform the kind of general-purpose search with
learning that SAT solvers excel at. In other words, problems
that require searching through a large combinatorial space
will probably not be solved most effectively
by a computer algebra system.
\paragraph{The best of both worlds}
In this paper we overview the new ``SAT+CAS''
paradigm that harnesses the search power of SAT solvers and the
mathematical abilities of CASs.
This approach provides the best aspects of both the SAT and
CAS approaches while minimizing the weaknesses of each respective
tool. For example, one of the primary drawbacks of SAT solvers
is that they lack mathematical expressiveness---many mathematical concepts
are difficult or even impossible
to efficiently encode in Boolean logic.
On the other hand, a huge variety of mathematical concepts
can easily be expressed in a CAS. Thus, the SAT+CAS paradigm
combines the search power of a SAT solver with the expressive
power of a CAS.
Recently the SAT+CAS paradigm has been used to make progress
on a number of conjectures from combinatorics, graph theory,
and number theory. In particular, it has
verified a conjecture of Craigen, Holzmann, and Kharaghani,
found three new counterexamples to the good matrix conjecture,
verified the smallest counterexample of the Williamson conjecture,
and is responsible for the current best known results
in the even Williamson, Ruskey--Savage, Norine,
and best matrix conjectures.
We give an overview of these conjectures and how our SAT+CAS
system MathCheck (available at \href{https://uwaterloo.ca/mathcheck}{\nolinkurl{uwaterloo.ca/mathcheck}})
was used to produce these results in Section~\ref{sec:sat+cas}.
A high-level diagram of how MathCheck combines SAT solvers with CASs is shown in Figure~\ref{fig:sat+cas}.
We also briefly discuss how Heule, Kauers, and Seidl have recently used the
SAT+CAS paradigm to find numerous new ways of multiplying $3\times3$
matrices~\cite{heule2019new}.
Finally, we summarize
the kinds of problems for which individually the SAT and CAS paradigms
are insufficient but for which
the SAT+CAS paradigm has been successful in Section~\ref{sec:conclusion}.
\begin{figure}
\begin{center}
\begin{tikzpicture}[align=center,node distance=2.5em]
\node(input){\clap{SAT encoding that Williamson}\\{matrices of order $n$ exist}};
\node[below=of input,text width=5.5em,minimum height=3em,rectangle,draw](sat){Split into subproblems};
\node[node distance=8em,right=of sat,text width=5.5em,minimum height=3em,rectangle,draw](cas){CAS};
\node[below=of sat,text width=5.5em,minimum height=3em,rectangle,draw](sat2){SAT solver};
\node[node distance=8em,right=of sat2,text width=5.5em,minimum height=3em,rectangle,draw](cas2){CAS};
\node[below=of sat2,text width=10em](output){{Williamson matrices}\\{or counterexample}};
\draw[->](input)--(sat);
\draw[->](sat)--(sat2);
\draw[->](sat2)--(output);
\draw[->,transform canvas={yshift=0.6em}](sat)--node[above,text width=6em]{\footnotesize\clap{SAT instances}}(cas);
\draw[<-,transform canvas={yshift=-0.6em}](sat)--node[below,text width=6em]{\footnotesize\clap{inequivalent instances}}(cas);
\draw[->,transform canvas={yshift=0.6em}](sat2)--node[above,text width=6em]{\footnotesize\clap{partial satisfying}\\assignment\\}(cas2);
\draw[<-,transform canvas={yshift=-0.6em}](sat2)--node[below,text width=6em]{\footnotesize\clap{conflict clause}}(cas2);
\end{tikzpicture}
\end{center}
\caption{A diagram outlining how the SAT+CAS paradigm
is applied to the Williamson conjecture.}\label{fig:sat+cas}
\end{figure}
\section{Prior Work}\label{sec:prevwork}
In this section we overview the fields of satisfiability checking, symbolic computation,
and the kinds of conjectures resolved using the tools of these fields.
As we will see, these fields have been applied to resolve an impressive variety of conjectures.
Satisfiability checking is particularly good at solving conjectures that can be expressed
only using simple constraints but require an enormous search, while symbolic computation
is particularly good at solving conjectures that require a lot of
complicated mathematical calculations but not a lot of search.
\subsection{SAT solving}\label{sec:sat}
The techniques developed by the field of satisfiability checking
have recently allowed SAT solvers to resolve mathematical conjectures
requiring enormous searches. In this section we discuss
three of these conjectures.
\paragraph{Erd\H os discrepancy conjecture}
In the 1930s, the prolific mathematician Paul Erd\H os
conjectured that for any infinite $\{\pm1\}$-sequence $X=(x_1,x_2,\dotsc)$
the quantity $D_X(n,k)\coloneqq\abs[\big]{\sum_{i=1}^{n}x_{ki}}$
can be made arbitrarily large by choosing appropriate~$n$ and~$k$.
In 2010, the Polymath project studied the conjecture
and discovered many sequences~$X$ of length 1124 with $D_X(n,k)$ at most~$2$
for all choices of~$n$ and~$k$ for which this quantity is defined.
The sequences were found using a custom computer program and despite
expending a lot of computing effort no longer sequences with this property
were found. Fields medalist Timothy Gowers would later say
``That was enough to convince me that 1124 was the correct bound
[for the length of sequences~$X$ with $D_X(n,k)$ at most~$2$].''
In 2014, Konev and Lisitsa~\cite{konev2015computer} showed that 1124 was not the
correct bound by using a SAT solver to find a sequence of
length 1160 with $D_X(n,k)$ at most~$2$ for all~$n$ and~$k$. Furthermore,
they showed that such a sequence of length 1161 could not exist,
thereby resolving the smallest open case of the
Erd\H os discrepancy conjecture. The full conjecture was resolved
the next year by Terence Tao~\cite{tao2016erdos}, building on results of the Polymath project.
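The discrepancy quantity $D_X(n,k)$ is straightforward to evaluate for a finite sequence; the following sketch (illustrative only, not the Polymath or Konev--Lisitsa code) computes the maximum of $D_X(n,k)$ over all valid $n$ and $k$:

```python
def max_discrepancy(x):
    """Maximum of |x_k + x_{2k} + ... + x_{nk}| over all n, k.
    The list x holds x_1, x_2, ... (so x[0] is x_1)."""
    N, best = len(x), 0
    for k in range(1, N + 1):
        s = 0
        for i in range(k, N + 1, k):  # running sum of x_k, x_{2k}, ...
            s += x[i - 1]
            best = max(best, abs(s))
    return best

# The alternating sequence -1,+1,-1,... keeps D_X(n,1) bounded by 1,
# but along k = 2 every term is +1, so the partial sums grow linearly:
x = [(-1) ** i for i in range(1, 13)]
assert max_discrepancy(x) == 6
```

This illustrates why the conjecture is subtle: a sequence must control the partial sums along *every* arithmetic progression through the origin simultaneously.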
\paragraph{Boolean Pythagorean triples conjecture}
In the 1980s, mathematician Ronald Graham offered a \$100 prize for an answer
to the Boolean Pythagorean triples problem:
Is it possible to split the natural numbers $\{1,2,\dotsc\}$ into two parts
so that all triples $(a,b,c)$ with $a^2+b^2=c^2$ are separated?
In 2008, Cooper and Poirel~\cite{cooper2008pythagorean} found a partition of the natural numbers up to 1344
into two parts with no Pythagorean triple in the same part---this required a custom
computer program and hundreds of hours of computing time.
In 2016, Heule, Kullmann, and Marek~\cite{heule2017solving} used a SAT solver to find a partition of
the natural numbers up to 7824 into two parts that separated all Pythagorean triples.
Furthermore, they showed that it was not possible to improve this bound---%
there is no 2-partition of the natural numbers up to 7825 that separates
all Pythagorean triples. The proof found by the SAT solver
was over 200 terabytes and was verified in about 4 CPU years.
Ronald Graham accepted this as a resolution of the Boolean
Pythagorean triples conjecture and awarded his \$100 prize.
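At small scales the Boolean Pythagorean triples problem yields to simple backtracking; the sketch below is a toy stand-in for the SAT encoding (nowhere near the scale of the Heule--Kullmann--Marek computation). It colours $1,\dotsc,m$ in increasing order, pruning whenever a triple would become monochromatic:

```python
def separable(m):
    """Backtracking: can {1,...,m} be 2-coloured with no
    monochromatic Pythagorean triple?"""
    # index each triple (a, b, c) by its largest element c
    by_c = {c: [] for c in range(1, m + 1)}
    for a in range(1, m + 1):
        for b in range(a, m + 1):
            c2 = a * a + b * b
            c = int(c2 ** 0.5)
            if c <= m and c * c == c2:
                by_c[c].append((a, b))
    colour = [0] * (m + 1)  # 0 = unassigned, else 1 or 2
    def extend(n):
        if n > m:
            return True
        for col in (1, 2):
            # a and b are < n, so they are already coloured
            if all(colour[a] != col or colour[b] != col
                   for (a, b) in by_c[n]):
                colour[n] = col
                if extend(n + 1):
                    return True
        colour[n] = 0
        return False
    return extend(1)

assert separable(100)  # small instances are easy
```

Proving that no such colouring exists at 7825 requires exhausting the entire search space, which is exactly where SAT solvers' pruning and clause learning become indispensable.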
\paragraph{Schur number five}
In the 1910s, Issai Schur~\cite{schur1917kongruenz} proved that
for any $k\geq1$ there
exists a largest~$m$ such that $\{1,\dotsc,m\}$
can be partitioned into~$k$ parts
with all triples $(a,b,c)$ satisfying $a+b=c$ separated.
This largest value of~$m$ is known as the \emph{Schur number} $S(k)$.
It is possible to check that $S(1)=1$, $S(2)=4$, $S(3)=13$ by hand,
and Baumert and Golomb~\cite{golomb1965backtrack} showed that $S(4)=44$ by a computer search in 1965.
Furthermore, Exoo~\cite{exoo1994lower} showed that $S(5)\geq160$ in 1994
using a combinatorial optimization algorithm.
In 2017, Heule~\cite{heule2018schur} used a SAT solver to show that any partition of
$\{1,\dotsc,161\}$ into 5 parts will not separate all triples
$(a,b,c)$ with $a+b=c$ and therefore showed that $S(5)=160$.
The proof produced by the SAT solver was two petabytes in size
and was verified by a formally-verified proof checker using
about 36 CPU years.
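The small Schur numbers can be recomputed in well under a second by backtracking; the sketch below is illustrative only (certainly not the approach that settled $S(5)$). It places $1,\dotsc,m$ into $k$ sum-free parts in increasing order:

```python
def can_partition(m, k):
    """Can {1..m} split into k parts, none containing a, b, a+b
    (with a = b allowed)?"""
    parts = [set() for _ in range(k)]
    def place(n):
        if n > m:
            return True
        for p in parts:
            # adding n to p is safe iff no a in p has n - a in p
            if all(n - a not in p for a in p):
                p.add(n)
                if place(n + 1):
                    return True
                p.discard(n)
        return False
    return place(1)

def schur_number(k):
    """Largest m such that {1..m} splits into k sum-free parts."""
    m = 0
    while can_partition(m + 1, k):
        m += 1
    return m

assert schur_number(1) == 1
assert schur_number(2) == 4
assert schur_number(3) == 13
```

The jump from $S(4)=44$ (a 1965 computer search) to $S(5)=160$ (a two-petabyte SAT proof) gives a sense of how quickly this search space explodes.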
\subsection{Computer algebra}\label{sec:cas}
The techniques developed in the field of computer algebra have been applied
to a huge number of engineering, scientific, and mathematical problems.
In this section we discuss three conjectures where techniques from computer algebra
were essential in the resolution of the conjecture.
\paragraph{Mertens conjecture}
In 1885, Thomas Stieltjes (and later, independently, F. Mertens)
conjectured what is
now known as the Mertens conjecture.
The Mertens function is defined by $M(x)\coloneqq\sum_{n\leq x}\mu(n)$
where $\mu(n)\coloneqq(-1)^k$ if the prime factorization of~$n$
consists of~$k$ distinct prime factors
and $\mu(n)\coloneqq0$ if a prime factor appears more than once in the
prime factorization of~$n$.
The Mertens conjecture is that $\abs{M(x)}<\sqrt{x}$ for all~$x>1$.
In the 1970s, the Mertens conjecture was shown to hold for all $x\leq7.8\cdot10^9$.
In 1985, Odlyzko and te Riele~\cite{odlyzko1985disproof} showed that the Mertens conjecture was false.
Their method used lattice basis reduction and arbitrary-precision arithmetic
from the Brent MP package.
The smallest counterexample is still unknown but it is known to be larger than $10^{14}$ and
smaller than $\exp(1.59\cdot10^{40})$.
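The definitions above are easy to check numerically. The sketch below sieves the M\"obius function and confirms that the conjectured bound $\abs{M(x)}<\sqrt{x}$ holds in a small range (the counterexamples, of course, lie far beyond any direct computation):

```python
def mobius_sieve(n):
    """mu(1), ..., mu(n) via a sieve of Eratosthenes."""
    mu = [1] * (n + 1)
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            for m in range(p, n + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] *= -1          # one factor of p
            for m in range(p * p, n + 1, p * p):
                mu[m] = 0            # p^2 divides m
    return mu

def mertens(n):
    """M(1), ..., M(n) as a list (index 0 unused)."""
    mu = mobius_sieve(n)
    out, M = [0] * (n + 1), 0
    for x in range(1, n + 1):
        M += mu[x]
        out[x] = M
    return out

M = mertens(10_000)
assert M[1] == 1 and M[2] == 0 and M[10] == -1
# the conjectured bound holds comfortably in this range
assert all(abs(M[x]) < x ** 0.5 for x in range(2, 10_001))
```

The disproof by Odlyzko and te Riele was indirect: no explicit counterexample was exhibited, only a proof that $M(x)/\sqrt{x}$ eventually exceeds~$1$.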
\paragraph{Alternating sign matrix conjecture}
In the 1980s, Mills, Robbins, and Rumsey~\cite{mills1983alternating} studied alternating sign matrices---%
square $\{0,\pm1\}$-matrices whose rows and columns sum to $1$
and whose nonzero entries alternate sign in each row and column.
They noticed that the number of alternating sign matrices of order~$n\leq10$
was $\prod_{k=0}^{n-1}(3k+1)!/(n+k)!$ and conjectured that this
relationship held for all~$n$.
The conjecture was proven by Doron Zeilberger~\cite{zeilberger1996proof}
in the 1990s, crucially relying on the combinatorial functions of the computer
algebra system Maple. In fact, a Maple package was distributed with
the paper that empirically (and in some cases rigorously) verified
every nontrivial fact in the paper.
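The conjectured product formula is easy to test directly against a brute-force count of alternating sign matrices (feasible only for tiny~$n$). This sketch is purely illustrative and unrelated to Zeilberger's proof:

```python
from math import factorial
from itertools import product

def asm_formula(n):
    """prod_{k=0}^{n-1} (3k+1)! / (n+k)!"""
    num = den = 1
    for k in range(n):
        num *= factorial(3 * k + 1)
        den *= factorial(n + k)
    assert num % den == 0
    return num // den

def is_asm(rows):
    """Rows and columns sum to 1 and nonzero entries alternate sign."""
    n = len(rows)
    lines = list(rows) + [tuple(r[j] for r in rows) for j in range(n)]
    for line in lines:
        if sum(line) != 1:
            return False
        nz = [v for v in line if v]
        if any(nz[i] == nz[i + 1] for i in range(len(nz) - 1)):
            return False
    return True

def count_asm(n):
    """Brute-force count over all {0, +-1} matrices of order n."""
    return sum(1 for m in product(product((-1, 0, 1), repeat=n), repeat=n)
               if is_asm(m))

assert [asm_formula(n) for n in range(1, 5)] == [1, 2, 7, 42]
assert all(count_asm(n) == asm_formula(n) for n in (1, 2, 3))
```

The sequence $1, 2, 7, 42, 429, \dotsc$ grows so quickly that brute force gives out almost immediately, which is why the conjecture stood for fifteen years.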
\paragraph{Kepler conjecture}
In 1611, the astronomer and mathematician Johannes Kepler conjectured
that the most efficient way of packing spheres in three dimensions is
to stack them in a pyramid shape. It was still unsolved in 1900
and David Hilbert included it in his famous list of unsolved problems.
In 1998, the mathematician Thomas Hales and his student
Samuel Ferguson~\cite{lagarias2011kepler} proved the Kepler conjecture
using a variety of tools such as global optimization, linear programming,
and interval arithmetic. Many of the computations
in the proof were performed using Mathematica's arbitrary-precision
arithmetic and double-checked using Maple. Because of the complexity
of the calculations, a team of at least thirteen referees could not be certain
of the proof's correctness after four years. This led Hales to
start a project to complete a formal verification of the proof; the project
was completed in 2014 after a decade of work~\cite{hales2017formal}.
\section{SAT+CAS Paradigm}\label{sec:sat+cas}
As we saw in Section~\ref{sec:prevwork}, the satisfiability checking
and symbolic computation approaches have been applied to resolve a variety of mathematical
conjectures---but each approach has its own advantages and disadvantages.
On the one hand, satisfiability checking is good at solving problems with
enormous search spaces and simple constraints.
On the other hand, symbolic computation is good at solving problems with
sophisticated mathematical calculations.
When a search space becomes too large, the overhead associated with
a computer algebra system becomes more pronounced, necessitating the use
of a more efficient solver.
SAT solvers are probably the best tools currently available for general-purpose
search; they are very difficult to beat because of the decades of engineering
effort that has gone into making them efficient.
Given this, in 2015 Zulkoski, Ganesh, and Czarnecki~\cite{zulkoski2015mathcheck} (and independently
\'Abrah\'am~\cite{abraham2015building}) proposed the \emph{SAT+CAS paradigm} of combining SAT solvers and
CASs to solve conjectures that require both
efficient search and advanced mathematics. In this section we overview
and explain the major successes of the SAT+CAS paradigm over the last four years.
\subsection{Williamson conjecture}\label{sec:Williamson}
In 1944, the mathematician J. Williamson studied the
Hadamard conjecture from combinatorial design theory. This conjecture says that
square $\{\pm1\}$-matrices
with pairwise orthogonal rows exist in all orders $4n$. He defined a new
class of matrices now known as Williamson matrices that he used
to construct Hadamard matrices of order~$4n$ for certain small values
of~$n$. Symmetric $\{\pm1\}$-matrices $A$, $B$, $C$, $D$
form a \emph{set of Williamson matrices} (each individual matrix itself
being \emph{Williamson}) if they
are circulant (each row is a cyclic shift of the previous row)
and if $A^2+B^2+C^2+D^2$ is the scalar matrix $4nI$.
It was once considered likely that Williamson matrices exist for all~$n$
and therefore Williamson matrices could provide a route to proving the
Hadamard conjecture~\cite{golomb1963search}. The conjecture that
Williamson matrices exist in all orders~$n$ has since become known
as the Williamson conjecture.
The hopes that Williamson matrices exist in all orders
were dashed in 1993, when D. \v Z. \DJ okovi\'c~\cite{dokovic1993williamson}
showed that Williamson matrices of order~35 do not exist
by an exhaustive computer search. \DJ okovi\'c noted that this was
the smallest \emph{odd} counterexample of the Williamson conjecture
but did not specify whether it was truly the smallest counterexample.
In 2006, Kotsireas and Koukouvinos~\cite{kotsireas2006constructions}
found no counterexamples in the even orders $n\leq 22$
using the CodeGeneration package
of the computer algebra system Maple. In 2016,
using an off-the-shelf SAT solver, Bright et~al.~\cite{bright2016mathcheck} found
no counterexamples in the even orders $n\leq30$.
Despite these successes, both the SAT-only and CAS-only
approaches failed to find the smallest counterexample of the Williamson conjecture.
Not only did the SAT+CAS approach successfully find the smallest counterexample, it blew
the other approaches out of the water by exhaustively solving all even orders up to
seventy~\cite{bright2018sat,bright2019applying}.
The search space up to order $70$ is an astronomical \emph{twenty-five orders of magnitude}
larger than the search space up to order $30$ because
the search space for Williamson matrices grows exponentially in~$n$.
Williamson matrices were found to exist in all even orders $n\leq70$, leading
to the \emph{even Williamson} conjecture that Williamson matrices exist
in all even orders.
The SAT+CAS approach is able to search such large spaces
by exploiting mathematical properties of Williamson matrices that dramatically shrink the search space.
In particular, the most important known filtering property is the \emph{power spectral density (PSD) criterion} that
says that if $A$ is a Williamson matrix of order $n$
with first row $[a_0,\dotsc,a_{n-1}]$ then
\[ \PSD_A(k) \coloneqq \abs[\Big]{\sum_{j=0}^{n-1}a_j e^{2\pi ijk/n}}^2 \leq 4n \]
for all integers~$k$.
This is an extremely strong filtering condition;
a random circulant and symmetric $\{\pm1\}$-matrix $A$ will almost certainly
fail it. Thus, a solver that is able to effectively exploit the PSD criterion will easily
outperform a solver that does not know about this property.
However, to effectively use it we need
\begin{enumerate}
\item an efficient method of computing the PSD values; and
\item an efficient method of searching while avoiding matrices that fail the filtering criteria.
\end{enumerate}
The fundamental reason for the success of the SAT+CAS paradigm
in regard to the Williamson and even Williamson conjectures is that
CASs excel at~(1) and SAT solvers excel at~(2).
The manner in which the SAT and CAS are combined is demonstrated in Figure~\ref{fig:sat+cas}.
As the SAT solver completes its search it sends to a CAS the matrices $A$, $B$, $C$,
$D$ from partial solutions of the SAT instance. The CAS then ensures that the matrices
pass the PSD criterion. If a matrix fails the PSD criterion then a \emph{conflict clause}
is generated encoding that fact. The SAT solver adds the conflict clause into its
learned clause database, thereby blocking the matrix from being considered in the future.
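The PSD values that the CAS computes are just the squared magnitudes of a discrete Fourier transform of the first row. The sketch below uses a hand-checked order-3 Williamson quadruple (constructed here for illustration, not data from the MathCheck search) and also verifies a known identity: for Williamson matrices the four PSD values at each index sum to exactly $4n$.

```python
from cmath import exp, pi

def psd(row):
    """PSD_A(k) = |sum_j a_j e^{2 pi i j k / n}|^2 for k = 0..n-1."""
    n = len(row)
    return [abs(sum(a * exp(2j * pi * j * k / n)
                    for j, a in enumerate(row))) ** 2
            for k in range(n)]

# First rows of an order-3 Williamson quadruple: A = [1,1,1] and
# B = C = D = [1,-1,-1] give symmetric circulants with
# A^2 + B^2 + C^2 + D^2 = 12 I.
rows = [[1, 1, 1], [1, -1, -1], [1, -1, -1], [1, -1, -1]]
n = 3

# each row individually passes the PSD criterion PSD(k) <= 4n
assert all(p <= 4 * n + 1e-6 for row in rows for p in psd(row))
# and the four PSD values at each k sum to exactly 4n
for k in range(n):
    assert abs(sum(psd(row)[k] for row in rows) - 4 * n) < 1e-6
```

The identity explains why the criterion is so strong: each of the four nonnegative PSD values must leave room for the other three under a fixed budget of $4n$.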
The search was also parallelized by splitting the search space into many independent subspaces. Each subspace
had a separate SAT instance generated for it and the SAT instances were solved in parallel. The CAS was also
useful in the splitting phase by removing instances that were found to be equivalent to other instances
under the known equivalence operations of Williamson matrices.
In the end, our SAT+CAS system MathCheck found over 100,000 new sets of Williamson matrices among
all even orders $n\leq70$, a new set of Williamson matrices in the odd order~$63$,
and verified that $n=35$ is the smallest counterexample of the Williamson conjecture.
\subsection{Good and best matrix conjectures}
Many variants of Williamson matrices exist; two variants are known as \emph{good matrices}
(introduced by J. Seberry Wallis~\cite{wallis1970combinatorial})
and \emph{best matrices} (introduced by Georgiou, Koukouvinos, and Seberry~\cite{georgiou2001circulant}).
There are several slightly different definitions for such matrices,
but for our purposes we define them
to be circulant matrices $A$, $B$, $C$, $D\in\{\pm1\}^{n\times n}$ that satisfy
$AA^T+BB^T+CC^T+DD^T=4nI$ where $A$ is skew ($A+A^T=2I$) and $D$ is symmetric ($D=D^T$).
Additionally, $B$ and $C$ are skew (for best matrices) or symmetric (for good matrices).
It is known that if good matrices of order~$n$ exist then $n$ must be of the form $2r+1$ (i.e., odd)
and if best matrices of order~$n$ exist then $n$ must be of the form $r^2+r+1$. The good and best matrix
conjectures state that good and best matrices exist in \emph{all} orders of these forms.
In 2002, the good matrix conjecture was shown to hold for all $n\leq39$~\cite{georgiou2002good}
and in 2001 the best matrix conjecture was shown to hold for all $n\leq31$~\cite{georgiou2001circulant}.
In 2018, the best matrix conjecture was shown to hold for all $n\leq43$ and the counterexamples
$n=41$, $47$, and $49$ were found to the good matrix conjecture~\cite{djokovic2018goethals}.
MathCheck has also been applied to the good and best matrix conjectures~\cite{bright2019good,bright2019best}
using a similar method as described in Section~\ref{sec:Williamson} with some encoding adjustments
that are specific to good or best matrices.
For example, if $[d_0,\dotsc,d_{n-1}]$ is the first row of a symmetric best matrix
then it is known that $d_{n/3}=d_0$ when $n$ is a multiple of~$3$.
MathCheck found two new sets of good matrices (for $n=27$ and~$57$) and three new counterexamples
of the good matrix conjecture ($n=51$, $63$, and~$69$). MathCheck also found
three new sets of best matrices in order $57$ and showed that the best matrix conjecture holds
for all $n\leq57$ (the best currently known result).
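The defining conditions are mechanical to verify. The sketch below checks a small candidate set of good matrices of order~$3$ that was derived by hand for illustration (it is not taken from the cited searches):

```python
def circulant(row):
    """Circulant matrix whose first row is `row`."""
    n = len(row)
    return [[row[(j - i) % n] for j in range(n)] for i in range(n)]

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    return [list(r) for r in zip(*X)]

# Candidate good matrices of order n = 3: A skew, B, C, D symmetric.
A = circulant([1, 1, -1])
B = circulant([1, 1, 1])
C = circulant([-1, 1, 1])
D = circulant([-1, 1, 1])
n = 3

prods = [mat_mul(X, transpose(X)) for X in (A, B, C, D)]
total = [[sum(P[i][j] for P in prods) for j in range(n)] for i in range(n)]

# AA^T + BB^T + CC^T + DD^T = 4nI
assert total == [[4 * n if i == j else 0 for j in range(n)]
                 for i in range(n)]
# A is skew (A + A^T = 2I) and D is symmetric (D = D^T)
assert [[A[i][j] + A[j][i] for j in range(n)] for i in range(n)] == \
       [[2 if i == j else 0 for j in range(n)] for i in range(n)]
assert D == transpose(D)
```

Verifying a candidate is trivial; the difficulty, as with Williamson matrices, lies entirely in searching the exponentially large space of first rows.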
\subsection{Craigen--Holzmann--Kharaghani conjecture}
In 2002, Craigen, Holzmann, and Kharaghani~\cite{craigen2002complex} studied
\emph{complex Golay pairs} which are polynomials $f$, $g$ with $\{\pm1,\pm i\}$
coefficients such that $\abs{f(z)}^2+\abs{g(z)}^2$ is constant on the unit circle.
This implies that~$f$ and~$g$ have
the same number of terms and this quantity is known as the
\emph{length} of the polynomial.
Craigen, Holzmann, and Kharaghani performed an exhaustive search for all
complex Golay pairs up to length~$19$ and a partial search up to length~$23$.
They found no complex Golay pairs of length~$23$ and conjectured that they
did not exist. An exhaustive search was performed by F. Fiedler in
2013~\cite{fiedler2013small} that did not find any complex Golay pairs
of length~$23$, though no implementation was provided,
making it difficult to verify his search.
MathCheck can be used to independently verify the results of Fiedler's
searches~\cite{bright2018enumeration,bright2018complex}.
The first step is to find all single polynomials $f$ that could
appear as a member of a complex Golay pair. A number of known properties
of complex Golay pairs are used to cut down the search space,
the most important one being that $\abs{f(z)}^2\leq 2n$ where
$n$ is the length of $f$ and $z$ is on the unit circle.
Given a potential $f$ we solve the nonlinear optimization problem
of maximizing $\abs{f(z)}^2$ subject to $\abs{z}=1$ (using
Maple's \textsc{NLPSolve} command) and discard any $f$ whose
maximum is greater than $2n$. Secondly, we use the known fact that
if $(f,g)$ is a complex Golay pair then
$N_g(s)=-N_f(s)$ for $s=1$, $\dotsc$, $n-1$ where
$N_g$ is the nonperiodic autocorrelation function of $g$.
Once $f$ is known and enough of $g$ is known so that
$N_g(s)\neq-N_f(s)$ can be determined
then a conflict clause is learned blocking the partial
solution from ever being tried again. This filtering theorem
is very powerful because it often works when only a few coefficients
of $g$ are known. For example, the SAT solver is able to learn
to never assign both the first and last entries of $g$ to be $1$
at the same time.
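The nonperiodic autocorrelation filter is simple to state in code. The sketch below (illustrative, not MathCheck's implementation) checks the defining condition $N_g(s)=-N_f(s)$ for two small pairs, including a genuinely complex length-3 pair $f=(1,i,1)$, $g=(1,1,-1)$, and confirms that $\abs{f(z)}^2+\abs{g(z)}^2$ is constant on the unit circle:

```python
from cmath import exp, pi

def npaf(f):
    """Nonperiodic autocorrelation N_f(s) = sum_j f_j * conj(f_{j+s})."""
    n = len(f)
    return [sum(f[j] * f[j + s].conjugate() for j in range(n - s))
            for s in range(n)]

def is_complex_golay(f, g):
    Nf, Ng = npaf(f), npaf(g)
    return all(abs(Nf[s] + Ng[s]) < 1e-9 for s in range(1, len(f)))

# a classical length-2 pair (real coefficients are allowed values)
assert is_complex_golay([1, 1], [1, -1])
# a genuinely complex length-3 pair
f, g = [1, 1j, 1], [1, 1, -1]
assert is_complex_golay(f, g)

# |f(z)|^2 + |g(z)|^2 is the constant 2n = 6 on the unit circle
def val(p, z):
    return sum(c * z ** k for k, c in enumerate(p))
for t in range(7):
    z = exp(2j * pi * t / 7)
    assert abs(abs(val(f, z)) ** 2 + abs(val(g, z)) ** 2 - 6) < 1e-6
```

As the text notes, the strength of this filter is that $N_g(s)\neq-N_f(s)$ can often be detected from just a few coefficients of $g$, yielding short and reusable conflict clauses.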
\subsection{Ruskey--Savage conjecture}
In 1993, Ruskey and Savage~\cite{ruskey1993hamilton}
asked if every matching (a set of edges without common vertices)
of the hypercube graph with $2^n$ vertices can be extended into
a Hamiltonian cycle of the graph.
In 2007, Fink~\cite{fink2007perfect} noted that this property holds
in the hypercube graphs for $n=2$, $3$, and~$4$ and he proved a weaker form of the
conjecture that he attributes to Kreweras~\cite{kreweras1996matchings}.
In 2015, MathCheck was used to show for the first time that the
Ruskey--Savage conjecture held for the hypercube graph with
$2^5=32$ vertices~\cite{zulkoski2015mathcheck}. This was accomplished
by using a SAT solver to exhaustively enumerate the matchings of the
hypercube graph and then verifying with a CAS that each matching
extends to a Hamiltonian cycle. Certain kinds of matchings could
be ignored; for example, the SAT solver
only enumerates maximal matchings (those which cannot be increased
in size while remaining matchings) because if a maximal matching
extends to a Hamiltonian cycle then so do all subsets
of the matching.
Once the CAS verifies that a given matching extends to a Hamiltonian
cycle, a conflict clause is learned that blocks that matching (and every other
matching contained in the Hamiltonian cycle that was found)
from being considered in the search again. Furthermore,
it is also effective to have the CAS apply automorphisms of
the hypercube graph to the Hamiltonian cycle it finds
to generate additional Hamiltonian cycles to be blocked~\cite{zulkoski2017combining}.
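For the smallest cases the conjecture can be verified directly. The backtracking sketch below is a toy check for $n=3$ (in no way comparable to the $2^5$-vertex computation): it enumerates every matching of $Q_3$ and searches for a Hamiltonian cycle containing it.

```python
from itertools import combinations

def extends_to_hamiltonian(n, matching):
    """Backtracking: does Q_n have a Hamiltonian cycle containing
    every edge of `matching`?"""
    N = 2 ** n
    mate = {}
    for u, v in matching:
        mate[u], mate[v] = v, u
    nbrs = lambda u: [u ^ (1 << b) for b in range(n)]

    def search(path, visited):
        u = path[-1]
        if len(path) == N:
            v0 = path[0]
            if v0 not in nbrs(u):
                return False
            # the two cycle edges at u and v0 must cover their matching edges
            return mate.get(u, path[-2]) in (path[-2], v0) and \
                   mate.get(v0, path[1]) in (path[1], u)
        for v in nbrs(u):
            if v in visited:
                continue
            # u must leave along its matching edge unless it arrived by it
            if len(path) > 1 and mate.get(u, v) not in (path[-2], v):
                continue
            path.append(v); visited.add(v)
            if search(path, visited):
                return True
            path.pop(); visited.remove(v)
        return False

    return search([0], {0})

edges = [(u, u ^ (1 << b)) for u in range(8) for b in range(3)
         if u < u ^ (1 << b)]

def matchings(es):
    """All edge subsets with pairwise disjoint endpoints."""
    for r in range(len(es) + 1):
        for sub in combinations(es, r):
            vs = [x for e in sub for x in e]
            if len(vs) == len(set(vs)):
                yield sub

# the conjecture holds for Q_3
assert all(extends_to_hamiltonian(3, m) for m in matchings(edges))
```

Even for $Q_3$ there are over a hundred matchings to check; the number of matchings grows doubly exponentially in~$n$, which is why the $n=5$ case required the SAT+CAS machinery.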
\subsection{Norine conjecture}
Consider a 2-colouring of the edges of a hypercube graph such
that edges directly opposite each other have opposite colours.
Serguei Norine conjectured that in such a colouring
it is always possible to find two directly opposite vertices
that are joined by a path of edges of a single colour~\cite{norine2008edge}.
In 2013, Feder and Subi reported that the conjecture had been verified for hypercube
graphs with $n=2$, $3$, $4$, and $5$, and proved the conjecture
for a special class of edge colourings~\cite{feder2013hypercube}.
In 2015, MathCheck was used to show for the first time that the Norine conjecture
held for the hypercube graph with $2^6=64$ vertices~\cite{zulkoski2015mathcheck}.
This was accomplished by using a SAT solver to exhaustively enumerate the edge
colourings for which the conjecture was not already known to hold.
Once an edge colouring was found by the SAT solver it was passed to a CAS to
verify that the colouring contains at least two directly opposite vertices
that are connected by a path of a single colour. If such vertices do
not exist then this colouring forms a counterexample to the conjecture;
otherwise, a conflict clause is generated that blocks this colouring
from appearing in the search again.
In fact, any colouring that includes the monochromatic path that was found
by the CAS can be blocked, since all such colourings cannot be counterexamples
to the Norine conjecture.
Similar to in our work on the Ruskey--Savage conjecture,
it is also effective to have the CAS apply automorphisms of
the hypercube graph to the path that it finds
to generate additional colourings to be blocked~\cite{zulkoski2017combining}.
\subsection{3 by 3 matrix multiplication}
The classical way of multiplying two $2\times2$ matrices uses eight
scalar multiplications; in 1969, Strassen discovered a way to do it
using just seven scalar multiplications~\cite{strassen1969gaussian}.
Two years later, Winograd showed that
it is not possible to do it with six multiplications~\cite{winograd1971multiplication}
and de Groot~\cite{de1978varieties} showed there is essentially one
optimal algorithm.
The optimal algorithm for multiplying $3\times3$ matrices is still unknown
and the best known algorithm uses 23 multiplications~\cite{laderman1976noncommutative}.
Previously, four inequivalent algorithms were known with this complexity.
Recently, Heule, Kauers, and Seidl~\cite{heule2019new} found
over 13,000 additional inequivalent algorithms that use 23 multiplications.
This was achieved using the SAT+CAS paradigm in a multistage process.
In the first stage, they reduce the problem of finding a matrix multiplication
algorithm using 23 scalar multiplications to solving $3^6=729$ cubic equations
in $23\cdot 3^3=621$ variables.
A SAT instance is generated from these equations by reducing them modulo~$2$.
A solution of the SAT instance
then provides a way to multiply $3\times3$ matrices over the
finite field $F_2=\{0,1\}$.
By using various simplifications they found over 270,000 solutions of the
SAT instance. They then used the computer algebra system Mathematica to determine
that over 13,000 of those solutions are inequivalent. Finally, they use a
Gr\"obner basis calculation in the computer algebra system Singular to
lift the solutions found for the field $F_2$ to an arbitrary ring.
They report that a small number of solutions over $F_2$ cannot be lifted
in such a way but in most cases each solution provides a new $3\times3$
matrix multiplication algorithm that works in any ring.
None of the algorithms they found could be simplified to use only $22$
multiplications, making it tempting to conjecture that such an algorithm
does not exist.
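For the $2\times2$ case mentioned above, Strassen's seven-multiplication scheme is short enough to state and test in full (this is the classical algorithm, not one of the new $3\times3$ schemes):

```python
import random

def strassen_2x2(A, B):
    """Strassen's seven-multiplication scheme for 2x2 matrices."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]

def naive_2x2(A, B):
    """Standard eight-multiplication product, for comparison."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# the two schemes agree on random integer matrices
for _ in range(100):
    A = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    B = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    assert strassen_2x2(A, B) == naive_2x2(A, B)
```

Note that the scheme never multiplies two entries of the same matrix, so it remains valid over any (even noncommutative) ring; the $3\times3$ schemes found via SAT+CAS satisfy the analogous Brent equations, of which there are $3^6=729$ in $621$ variables.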
\section{Conclusion}\label{sec:conclusion}
In this article we have surveyed the SAT+CAS paradigm of combining
SAT solvers and computer algebra systems aimed at resolving
mathematical conjectures. It is illuminating to contrast the kind of problems that have been solved
by the SAT and CAS paradigms individually, with those that have been
solved by the combined SAT+CAS paradigm.
We discussed three long-standing mathematical problems in Section~\ref{sec:sat}
for which SAT solvers have been used.
For each problem, attempts to use custom-purpose search code or optimization methods
ultimately proved to not be as successful as using a SAT solver. This is due to the many efficient search heuristics that have been incorporated in modern solvers, as well as the years of refinements that have gone into these solvers. These heuristics have broad applicability for problems from diverse domains.
Additionally, we saw three long-standing conjectures in Section~\ref{sec:cas}
that CAS methods were used to resolve. In each case, very efficient mathematical
calculations were necessary but efficient search routines were not
the bottleneck in the solutions. These conjectures would not be a
good fit for SAT solvers because these problems do not admit
natural encodings into Boolean logic.
Note that the eight conjectures from Section~\ref{sec:sat+cas} would be
difficult to resolve using \emph{either} SAT solvers or CASs alone.
In each case, the problems have both a significant search component
(an exponentially growing search space) and a significant mathematical
component (e.g., requiring knowledge of the power spectral density of a circulant
matrix or the automorphism group of a graph). As we've seen, the SAT+CAS paradigm
is effective at pushing the state-of-the-art in such conjectures.
Simply put, the SAT+CAS paradigm allows the mathematician to solve problems
that have search spaces too large for CASs and
require mathematical calculations too sophisticated for SAT solvers.
\bibliographystyle{abbrv}
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 5,351
|
Onocolus biocellatus är en spindelart som beskrevs av Cândido Firmino de Mello-Leitão 1948.
Onocolus biocellatus ingår i släktet Onocolus och familjen krabbspindlar. Artens utbredningsområde är Guyana. Inga underarter finns listade i Catalogue of Life.
Källor
Krabbspindlar
biocellatus
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 7,993
|
est une compilation de Led Zeppelin sortie le , dont les titres ont été sélectionnés par les trois membres survivants du groupe : Jimmy Page, Robert Plant et John Paul Jones.
Titres
CD 1
Good Times Bad Times (Led Zeppelin) (Page/Jones/Bonham)
Communication Breakdown (Led Zeppelin) (Page/Jones/Bonham)
Dazed and Confused (Led Zeppelin) (Page)
Babe I'm Gonna Leave You (Led Zeppelin) (Anne Bredon/Page/Plant)
Whole Lotta Love (Led Zeppelin II) (Page/Plant/Jones/Bonham/Willie Dixon)
Ramble On (Led Zeppelin II) (Page/Plant)
Heartbreaker (Led Zeppelin II) (Page/Plant/Jones/Bonham)
Immigrant Song (Led Zeppelin III) (Page/Plant)
Since I've Been Loving You (Led Zeppelin III) (Page/Plant/Jones)
Rock and Roll (Led Zeppelin IV) (Page/Plant/Jones/Bonham)
Black Dog (Led Zeppelin IV) (Page/Plant/Jones)
When the Levee Breaks (Led Zeppelin IV) (Page/Plant/Jones/Bonham/Memphis Minnie)
Stairway to Heaven (Led Zeppelin IV) (Page/Plant)
CD 2
The Song Remains the Same (Houses of the Holy) (Page/Plant)
Over the Hills and Far Away (Houses of the Holy) (Page/Plant)
D'yer Mak'er (Houses of the Holy) (Page/Plant/Jones/Bonham)
No Quarter (Houses of the Holy) (Page/Plant/Jones)
Trampled Under Foot (Physical Graffiti) (Page/Plant/Jones)
Houses of the Holy (Physical Graffiti) (Page/Plant)
Kashmir (Physical Graffiti) (Page/Plant/Bonham)
Nobody's Fault but Mine (Presence) (Page/Plant)
Achilles Last Stand (Presence) (Page/Plant)
In the Evening (In Through the Out Door) (Page/Plant/Jones)
All My Love (In Through the Out Door) (Plant/Jones)
DVD
Limited edition DVD: excerpts from Led Zeppelin DVD.
We're Gonna Groove (King/Bethea) (Royal Albert Hall - )
I Can't Quit You Baby (Dixon) (Royal Albert Hall - )
Dazed and Confused (Page) (Royal Albert Hall - )
White Summer (traditional, arr. Page) (Royal Albert Hall - )
What Is and What Should Never Be (Page/Plant) (Royal Albert Hall - )
How Many More Times (Page/Jones/Bonham) (Royal Albert Hall - )
Moby Dick (Bonham/Jones/Page) (Royal Albert Hall - )
Whole Lotta Love (Page/Bonham/Plant/Jones) (Royal Albert Hall - )
Communication Breakdown (Page/Jones/Bonham) (Royal Albert Hall - )
Bring It on Home (Page/Plant) (Royal Albert Hall - )
Immigrant Song (Page/Plant) (Sydney Showground - )
Black Dog (Page/Plant/Jones) (Madison Square Garden - 27, 28, et )
Misty Mountain Hop (Page/Plant/Jones) (Madison Square Garden - 27, 28, et )
Going to California (Page/Plant) (Earls Court - )
In My Time of Dying (Bonham/Jones/Page/Plant) (Earls Court - )
Stairway to Heaven (Page/Plant) (Earls Court - )
Rock and Roll (Page/Plant/Jones/Bonham) (Knebworth - )
Nobody's Fault but Mine (Page/Plant) (Knebworth - )
Kashmir (Bonham/Page/Plant) (Knebworth - )
Whole Lotta Love (Page/Bonham/Plant/Jones) (Knebworth - )
Certifications
Notes and references
Album certified platinum in Australia
Album certified platinum in Italy
Album certified platinum in Poland
Album certified gold in Denmark
Album certified gold in Japan
Album certified gold in Austria
Album certified gold in Belgium
Album certified gold in Finland
Album certified gold in France
Album certified gold in Switzerland
Album certified double platinum in the United States
Album certified triple platinum in the United Kingdom
Album certified triple platinum in Ireland
Album certified triple platinum in New Zealand
Album certified triple gold in Germany
Led Zeppelin album
Number-one album on the UK Rock and Metal Chart
Number-one album in Norway
Number-one album in New Zealand
Album produced by Jimmy Page
Album released by Atlantic Records
Compilation album released in 2007
The University of Southampton Institutional Repository

# Distributionally robust shortfall risk optimization model and its approximation

Guo, Shaoyan and Xu, Huifu (2018) Distributionally robust shortfall risk optimization model and its approximation. Mathematical Programming, 1-26. ISSN: 0025-5610.

Record type: Article

## Abstract

Utility-based shortfall risk measures (SR) have received increasing attention over the past few years for their potential to quantify the risk of large tail losses more effectively than conditional value at risk. In this paper, we consider a distributionally robust version of the shortfall risk measure (DRSR) where the true probability distribution is unknown and the worst distribution from an ambiguity set of distributions is used to calculate the SR. We start by showing that the DRSR is a convex risk measure and, under some special circumstances, a coherent risk measure. We then study an optimization problem with the objective of minimizing the DRSR of a random function and investigate numerical tractability of the optimization problem with the ambiguity set constructed through a $\phi$-divergence ball and a Kantorovich ball. In the case when the nominal distribution in the balls is an empirical distribution constructed through iid samples, we quantify convergence of the ambiguity sets to the true probability distribution as the sample size increases under the Kantorovich metric, and consequently of the optimal values of the corresponding DRSR problems. Specifically, we show that the error of the optimal value is linearly bounded by the error of each of the approximate ambiguity sets and subsequently derive a confidence interval of the optimal value under each of the approximation schemes. Some preliminary numerical test results are reported for the proposed modeling and computational schemes.

Accepted/In Press date: 29 May 2018
e-pub ahead of print date: 7 June 2018
\section{Introduction}
In recent years, a new form of (special) relativity has been
introduced under the names deformed special relativity or doubly (or
triply) special relativity (DSR, TSR) \cite{dsr,dsr2,tsr}. It is
motivated from a desire to incorporate additional invariant
(dimensional) parameters into the theory {beyond the speed of
light}, particularly an invariant quantum scale such as the Planck
scale. (This idea {actually} dates back to a paper by Snyder
\cite{S} which is also a precursor to the idea of non-commutative
geometry.) As a result, the new relativity can really be thought of
as the quantum relativity. Relativity, of course, involves the
behavior of frames of reference in physics and the relativity
algebra is the algebra of transformations of the reference frames
which also reflects the algebra of ``space-time symmetry".
Therefore, one can view the algebra of quantum relativity as the
algebra of transformations of quantum reference frames or the
symmetry algebra of quantum space-time. We note that some discussion
about quantum frames of reference already exists in literature and
we want to refer the reader particularly to Refs.\cite{AK,R}. The
first of the two references discusses such issues within the context
of non-relativistic quantum mechanics while the second concerns
theories under the influence of gravity. The shared conclusion of
the two papers is that a quantum frame of reference has to be
characterized by its mass. In the case of gravity, it is illustrated
that the gravitational properties of the reference frame itself need
to be taken into account in order to define local gauge invariant
observable. On the other hand, there is also the very intriguing
notion of a quantum space-time structure. Space-time structure
beyond a certain microscopic scale certainly cannot be physically or
operationally defined as the commutative geometry of (Einstein)
special relativity. One hopes that understanding the (special)
quantum relativity would pave the way to understanding the quantum
structure of space-time, and eventually even a quantum theory of
gravity may be constructed as the general theory of quantum
relativity. The present article is an attempt in this direction.
Even when one knows the mathematical description of the
transformations, the physical interpretation may be much harder to
come by. As we know, while Lorentz had already studied the (Lorentz)
transformations, it was Einstein who {brought out} the correct
physical meaning {of} these transformations\cite{Mil}. In trying to
obtain the correct physical picture from a mathematical description,
one has to have an open mind for unconventional perspectives (that
may arise) on some of the most basic notions about physics, in
general, and space-time structure, in particular. Most of the
discussions of the (new) relativity so far have focused on
nonlinear realizations of the symmetry algebra that arises. There is
still no agreement among different authors on what the ultimate
algebra of quantum relativity {is}. Here we will focus on {\small
\boldmath $SO(1,5)$} as the ultimate Lie algebra of the symmetry of
quantum relativity which has essentially been advocated within the
context of triply special relativity (TSR) \cite{tsr}, although
most of our discussion on doubly special relativity (DSR) \cite{dsr}
as an immediate structure will still be valid if that corresponds to
quantum relativity, namely, if the intermediate structure in our
discussions coincides with the final one. From the point of view of Lie
algebra deformation, {\small \boldmath $SO(1,5)$} has actually been
identified essentially {with} the natural {stabilizer} of the
``Poincar\'e + Heisenberg" symmetry \cite{spha}. Physical
observations with limited precision can never truly confirm an
unstable algebra as the symmetry. Correspondingly, this perspective
is strongly suggestive of taking {\small \boldmath $SO(1,5)$} as the
natural candidate for the symmetry algebra of quantum relativity. It
is also very reassuring that this symmetry {algebra} arises
naturally from the perspective of quantum relativity as a
deformation {of} special relativity \cite{tsr,CO}.
We take the structure of the Lie algebra seriously as {denoting} the
symmetry of ``space-time" and focus on a linear realization with a
classical or commutative geometry as the background of quantum
relativity. Such a linear realization has been discussed to some
extent in the literature, but neither systematically nor in detail.
Our approach here is to try to understand the true physical
implications of quantum relativity directly from a study of the
transformations of the quantum reference frames and try to deduce
the {underlying} geometric structure as an extension of the
conventional space-time. We consider such an analysis as {being}
complementary to the earlier studies involving nonlinear
realizations. We note that our approach is largely inspired by
Ref.\cite{GL} which, in our opinion, has brought up {some}
interesting perspectives related to linear realization without
putting all of them on a more solid foundation. In trying to do this,
we arrive at some very interesting and unexpected results. The most
interesting among them is the extension of Einstein space-time
structure into a higher dimensional geometry which is {\rm not} to
be interpreted as an extended space-time in the usual sense. This
result is unconventional, but is of central importance to our
discussions. We obtain the symmetry of quantum relativity through
the approach of deformations and look for direct implications. We
ask for sensible interpretations of mathematical results, and make
suggestions along the {way}. Our analysis should be thought of as an
initial attempt, rather than a final understanding. We believe that
there are still a lot more questions to be understood than the ones
we discuss with suggested possible answers. We have put forward
ideas here which seem to fit the physical problem at hand. Some of
these are unconventional, but we think that they are quite reasonable
and plausible and point in the right direction.
The article is organized as follows. In the next section, we write
down explicitly the two-step deformation procedure to arrive at the
quantum relativity, more or less following Ref.\cite{GL}. In
Sec.III, we focus only on the first deformation introducing the
invariant ultraviolet scale (the Planck scale), which gives a DSR
structure. Here, the linear realization necessitates the
introduction of a new geometric dimension along with the 4D
space-time. The corresponding new coordinate has {the canonical}
dimension of time over mass while having a spacelike geometric
signature. It also suggests a new definition of energy-momentum as
{a} coordinate derivative in the ``nonrelativistic limit". This
section has the most dramatic or radical results {and all of this}
fits in well with the notion of quantum frames of reference. In
Sec.IV, we discuss relations to noncommutative or quantum (operator)
realization of 4D space-time. In Sec.V, we focus on the geometric
structure of the last deformation introducing an invariant infrared
scale (the cosmological constant). {Subsequently} we address some
important issues about the quantum relativistic momentum in Sec. VI
before we conclude the article in the last section.
\section{Quantum Relativity through Deformations}
Let us start by writing down the Lie algebra of {\small \boldmath
$SO(m,n)$} (with signature convention for the metric starting with a
+)
\begin{equation} \label{so}
[J_{\scriptscriptstyle A\!B}, J_{\scriptscriptstyle L\!N}] = i\, ( \eta_{\scriptscriptstyle
B\!L} J_{\scriptscriptstyle A\!N} - \eta_{\scriptscriptstyle A\!L} J_{\scriptscriptstyle B\!N} + \eta_{\scriptscriptstyle
A\!N} J_{\scriptscriptstyle B\!L} -\eta_{\scriptscriptstyle B\!N} J_{\scriptscriptstyle A\!L}) \;,
\end{equation} where
indices $A,B,L,N$ take values from $0$ to $d-1$. For $d=4$, we have
the familiar algebra {\small \boldmath $SO(1,3)$} describing
special relativity. However, we would like to start our discussion
with {\small \boldmath$SO(0,3)$} (which coincides with {\small
\boldmath $SO(3)$}) as a relativity algebra. As we know, Newtonian
physics is described on {a} three-dimensional space. {The} symmetry
algebra for rotational invariance is {\small
\boldmath $SO(3)$}, {with the corresponding generators given by}
\begin{equation}
M_{ij} = i \, (x_i \, \partial_j - x_j \, \partial_i ), \quad
i,j=1,2,3 \;.
\end{equation}
In this case, there is no index 0 and the metric
has the (special) signature $\eta_{ij}= (-1,-1,-1)$. We have the
coordinate representation of the 3-momentum given by
$p_i= i\hbar \, \partial_i = {i\hbar \, \frac{\partial}{\partial x^i}}$
(we take $\hbar=1$ in the following). The rotations can be augmented by {the}
three-dimensional translations to {define} the complete symmetry
group of 3D space. An arbitrary symmetry transformation can be taken
as a transformation between two (inertial) frames of reference. In
this case there is an alternative special way of getting a
translation, namely,
\begin{equation}
x^i \to x^i + \Delta x^i\;, \quad \Delta
x^i(t) = v^i t\;,
\label{translation} \end{equation}
where $t$ denotes a
parameter outside of the three-dimensional manifold with $v^{i}$
given by $\frac{dx^{i}}{dt}$, the velocity. The parameter $t$ is
identified with the (absolute) {\em time} and such translations are
known as Galilean boosts. To the extent that {\em time} is just an
external parameter, the Galilean boosts do not distinguish
themselves from translations. The relevant symmetry group describing
the admissible transformations between reference frames is the group
{\small\boldmath $ISO(3) \equiv SO(3) \otimes_{s} R^3 $} where
{\small \boldmath $\otimes_{s}$} represents the semi-direct product.
The generators of Galilean boosts (or special translations) can be
denoted by $N_i$ and satisfy \begin{equation} \label{bst} [M_{ij}, N_k] = i\,
(\eta_{jk} N_i - \eta_{ik} N_j) \;. \end{equation} Note that much like the
momentum, we can have the coordinate representation $N_i = i \,
\partial_i$ which satisfies Eq.(\ref{bst}). Of course, $N_i$'s
commute among themselves and we have Galilean relativity.
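As a small consistency check, added here for the interested reader and not part of the original derivation, the coordinate representations $M_{ij} = i(x_i\partial_j - x_j\partial_i)$ and $N_k = i\,\partial_k$ can be verified symbolically. The sketch below uses SymPy and assumes the all-minus spatial metric $\eta_{ij}=(-1,-1,-1)$ stated in the text:

```python
import sympy as sp

# Coordinates and a generic test function on 3D space.
x = sp.symbols('x1 x2 x3')
f = sp.Function('f')(*x)
eta = [-1, -1, -1]  # spatial metric signature (-1,-1,-1) used in the text

def lower(i):
    # x_i = eta_ii * x^i (diagonal metric, no sum)
    return eta[i] * x[i]

def M(i, j, g):
    # Rotation generators M_ij = i (x_i d_j - x_j d_i)
    return sp.I * (lower(i) * sp.diff(g, x[j]) - lower(j) * sp.diff(g, x[i]))

def N(k, g):
    # Galilean boost (special translation) generators N_k = i d_k
    return sp.I * sp.diff(g, x[k])

def comm(A, B, g):
    # Commutator [A, B] acting on the test function g
    return sp.expand(A(B(g)) - B(A(g)))

# Check [M_ij, N_k] = i (eta_jk N_i - eta_ik N_j), Eq. (bst)
for i in range(3):
    for j in range(3):
        for k in range(3):
            lhs = comm(lambda g: M(i, j, g), lambda g: N(k, g), f)
            rhs = sp.I * ((eta[j] if j == k else 0) * N(i, f)
                          - (eta[i] if i == k else 0) * N(j, f))
            assert sp.simplify(lhs - rhs) == 0

# The N_k commute among themselves: Galilean relativity is undeformed.
assert comm(lambda g: N(0, g), lambda g: N(1, g), f) == 0
```

The vanishing commutator of the $N_k$ is exactly what the deformation in the next paragraphs changes.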
Within the framework of Galilean relativity, the speed of a particle
(as well as the speed of inertial reference frames) can take any
value. Einstein realized that one has to go beyond Galilean
relativity in order to accommodate an invariant speed of light, $c$.
From the present perspective, one can extend the three dimensional
manifold to a four dimensional one by including (the external
parameter) $t$ so that $x^{\mu} = (x^{0}, x^{i}) = (c \, t, x^{i}),
\mu = 0,1,2,3$. Furthermore, if one introduces the velocity
four-vector on this manifold as $u^{\mu} = (u^{0}, u^{i})$ with $u^0
= c/\sqrt{c^2-v^2} =\gamma, u^i= v^i/\sqrt{c^2-v^2} = \gamma
\beta^i$ and $\beta^i = v^i/c$ ($c$ represents the speed of light)
{so} that
\begin{equation}
\eta_{\mu\nu} u^{\mu}u^{\nu} = (u^0)^{2} - (u^i)^{2} =
1 \; ,\label{4}
\end{equation}
then Eq.(\ref{4}) equivalently leads to
\begin{equation}
-
\eta_{ij} v^i v^j = v^2 = c^2 \left(1-\frac{1}{\gamma^2}\right) \leq
c^2,\label{5}
\end{equation}
with equality attained only in the limit $\gamma
\to \infty$. With {this} constraint the velocity (speed) takes
values on the coset space $v^i \in${\small \boldmath
$SO(1,3)/SO(3)$} in contrast to the case of Galilean relativity
where $v^i \in${\small\boldmath $R^3 \equiv ISO(3)/SO(3)$}.
Furthermore, extending the manifold to a four dimensional one, we
obtain a linear realization of the transformation group of reference
frames which is deformed to {\small \boldmath $SO(1,3)$}, the
Lorentz group. The deformation of the algebra is given by
\begin{equation} [N_i,
N_j] \longrightarrow -i\, M_{ij} \; ,
\end{equation}
and the $N_i$'s can now
be identified with {the} $M_{{\scriptscriptstyle 0}i}$'s in this extended
``space", {the} space-time {manifold}, so that the full set of six
$M_{\mu\nu} (= J_{\mu\nu}$) satisfying Eq.(\ref{so}) can be written
as
\begin{equation} \label{lor}
M_{\mu\nu} = i (x_\mu \partial_\nu -x_\nu \,
\partial_\mu) \; .
\end{equation}
Furthermore, adding the four-dimensional translations with
$p_{\scriptscriptstyle 0} = E/c = i\, \partial_{\scriptscriptstyle 0} = i\,
\frac{\partial}{\partial x^{\scriptscriptstyle 0}}$ ($\hbar=1$), we obtain the
full symmetry for Einstein special relativity described by the
Poincar\'e group, {\small\boldmath $ISO(1,3) \equiv SO(1,3)
\otimes_{s} R^4 $} which represents the complete transformation
group of inertial reference frames.
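As a numerical illustration (our addition, with units chosen so that $c=1$), the parametrization $u^0=\gamma$, $u^i=\gamma\beta^i$ can be checked directly against Eqs. (4) and (5):

```python
import numpy as np

c = 1.0                                    # work in units with c = 1
rng = np.random.default_rng(0)
for _ in range(100):
    v = rng.uniform(-0.5, 0.5, size=3)     # an arbitrary subluminal 3-velocity
    beta = v / c
    gamma = 1.0 / np.sqrt(1.0 - beta @ beta)
    u0, ui = gamma, gamma * beta           # velocity 4-vector components
    # Eq. (4): the 4-velocity lies on the unit hyperboloid
    assert np.isclose(u0**2 - ui @ ui, 1.0)
    # Eq. (5): v^2 = c^2 (1 - 1/gamma^2) <= c^2
    assert np.isclose(v @ v, c**2 * (1.0 - 1.0 / gamma**2))
    assert v @ v <= c**2
```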
The idea of quantum relativity is to introduce further invariant(s)
into the physical system. In special relativity, for example, we
note that the momentum four vector can take any value. An invariant
bound for the four-momentum (energy-momentum four-vector) that
serves as a bound for elementary quantum states is one such example
and such a generalization is commonly {referred to as} ``doubly
special relativity" (DSR) \cite{dsr}. One can follow the example of
Einstein relativity and derive this generalization as follows. Let
us consider a parameter $\sigma$ outside of the four-dimensional
space-time manifold and consider special translations in this
manifold of the form (similar to Galilean boosts, see
Eq.(\ref{translation}))
\begin{equation}
x^{\mu}\to x^{\mu} + \Delta x^{\mu}\;,\quad \Delta x^{\mu} (\sigma)
= V^{\mu} \sigma\;,\label{vmu}
\end{equation}
where we have identified $V^{\mu} = \frac{dx^{\mu}}{d\sigma}$. The
generators $O_{\!\mu}$'s of these translations have the same
commutation relations as the conventional four-dimensional
translations. Furthermore, let us extend the 4D space-time manifold
by adding the coordinate $\sigma$ as $x^{\scriptscriptstyle A} = (x^{\mu}, x^{4})
= (x^{\mu}, \kappa c\,\sigma), A=0,1,2,3,4$, where $\kappa$ denotes
the Planck mass and $c$ the speed of light. With this particular
choice of the extra coordinate, we recognize that $\sigma$ has the
dimensions of time/mass. As a result $V^{\mu}$ in Eq.(\ref{vmu}) has
the dimension of a momentum and we identify $V^{\mu} = p^{\mu}$. (We
will discuss more on this identification in the next section.) It is
clear that like the velocity in the case of Galilean relativity,
here $p^{\mu}$ can take any value. However, following the discussion
in the case of Einstein relativity, we see that on this
five-dimensional manifold, if we define a momentum 5-vector
$\pi^{\scriptscriptstyle A}$ as
\footnote{We scale by a factor $\kappa$ relative
to the common notation as first introduced by Snyder \cite{S}.}
\begin{equation}
\label{pi}
\pi^{\scriptscriptstyle A} = \left(\pi^{\mu}, \pi^{\scriptscriptstyle 4}\right) =
\left(\frac{p^{\mu}}{\sqrt{\kappa^{2}c^{2} - p_{\mu}p^{\mu}}},
\frac{\kappa c}{\sqrt{\kappa^{2}c^{2} - p_{\mu}p^{\mu}}}\right) =
\left(\Gamma \alpha^\mu ,\Gamma \right) \; ,
\end{equation}
with
$\alpha^{\mu} = p^{\mu}/\kappa c$, this will satisfy
\begin{equation} \label{pi2}
\eta_{\scriptscriptstyle A\!B} \pi^{\scriptscriptstyle A} \pi^{\scriptscriptstyle B} = \eta_{\mu\nu}
\pi^{\mu}\pi^{\nu} - \left(\pi^{\scriptscriptstyle 4}\right)^{2} = -1 \; .
\end{equation}
This, in turn, would imply {that} (see Eq.(\ref{5}))
\begin{equation}
\label{Gamma}
p_{\mu}p^{\mu} = \eta_{\mu\nu} p^\mu p^\nu = \kappa^2
c^2 \left(1-\frac{1}{\Gamma^2}\right) \leq \kappa^2 c^2 \;,
\end{equation}
where as we have said earlier $\kappa$ stands for the Planck mass.
Similar to the earlier discussion on Einstein relativity, we see
{that in this case} the four momentum lives on the coset space
$p^\mu \in ${\small \boldmath $SO(1,4)/SO(1,3)$} instead of $p^\mu
\in ${\small \boldmath $R^4$} and the de Sitter group {\small
\boldmath $SO(1,4)$} corresponds to the symmetry of the deformed
relativity here.
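The same kind of check works one level up. As a numerical sketch (our addition, in units with $\kappa c = 1$): for any 4-momentum with $p_\mu p^\mu < \kappa^2c^2$, the 5-vector of Eq. (\ref{pi}) lands on the momentum hyperboloid of Eq. (\ref{pi2}), and the bound of Eq. (\ref{Gamma}) follows:

```python
import numpy as np

kc = 1.0                                   # kappa*c in natural units
eta4 = np.diag([1.0, -1.0, -1.0, -1.0])    # 4D Minkowski metric
rng = np.random.default_rng(1)
for _ in range(100):
    p = rng.uniform(-0.5, 0.5, size=4)     # any 4-momentum with p.p < (kc)^2
    p2 = p @ eta4 @ p
    root = np.sqrt(kc**2 - p2)
    pi_mu, pi4 = p / root, kc / root       # Eq. (pi); Gamma = pi4
    # Eq. (pi2): eta_AB pi^A pi^B = -1 on the 5D momentum hyperboloid
    assert np.isclose(pi_mu @ eta4 @ pi_mu - pi4**2, -1.0)
    # Eq. (Gamma): p.p = (kc)^2 (1 - 1/Gamma^2) <= (kc)^2
    assert np.isclose(p2, kc**2 * (1.0 - 1.0 / pi4**2))
    assert p2 <= kc**2
```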
In this extended (five-dimensional) manifold, the extra generators
needed to complete {\small \boldmath $SO(1,4)$} and {to} lead to a
linear realization can now be taken as
\begin{equation}
O_{\!\mu} \equiv
J_{\mu\scriptscriptstyle 4} = i\, (x_\mu \partial_{\scriptscriptstyle 4} -x_{\scriptscriptstyle 4}\,
\partial_\mu) \;.\label{O}
\end{equation}
Like the {conventional 4D} translation generators, the
generators $O_{\!\mu}$'s also satisfy (we note the identification
made earlier $M_{\mu\!\nu} = J_{\mu\!\nu}$)
\begin{equation} [M_{\mu\!\nu},
O_{\!\lambda}] = i \,(\eta_{\nu\!\lambda} O_{\!\mu} -
\eta_{\mu\!\lambda} O_{\!\nu}) \;. \end{equation}
However, with the
identification in Eq.(\ref{O}), the algebra is deformed to {\small
\boldmath $SO(1,4)$} with
\begin{equation} [O_{\!\mu}, O_{\!\nu}] \longrightarrow
i\, M_{\mu\nu} \;.\label{O1}
\end{equation}
We call the transformations
generated by $O_{\!\mu}$'s as de Sitter momentum boosts, or simply
as momentum boosts. {Adding the five-dimensional translations, the
full symmetry group of this manifold becomes {\small\boldmath $ISO
(1,4) \equiv SO (1,4) \otimes_{s} R^{5}$.}} We want to emphasize
here that although we have a natural five-dimensional Minkowski
geometry to realize the new relativity, the fifth dimension here
should be considered neither as space nor time ($\sigma$ has {the}
dimension of time/mass). In the following section, we will try to
explore the physical meaning of this extra coordinate from two
different points of view.
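The deformation (\ref{O1}) can also be checked symbolically in the linear realization. The sketch below (our addition) represents $O_\mu = i(x_\mu\partial_4 - x_4\partial_\mu)$ as a differential operator and confirms that the momentum boosts no longer commute but close into the Lorentz generators:

```python
import sympy as sp

# Coordinates x^0..x^4 of the 5D manifold; eta = diag(+1,-1,-1,-1,-1).
x = sp.symbols('x0 x1 x2 x3 x4')
eta = [1, -1, -1, -1, -1]
f = sp.Function('f')(*x)

def lower(a):
    return eta[a] * x[a]                    # x_a = eta_aa x^a (no sum)

def J(a, b, g):
    # J_ab = i (x_a d_b - x_b d_a): Lorentz generators for a,b < 4,
    # momentum boosts O_mu = J_{mu 4}; Eqs. (lor) and (O)
    return sp.I * (lower(a) * sp.diff(g, x[b]) - lower(b) * sp.diff(g, x[a]))

def comm(A, B, g):
    # Commutator [A, B] acting on the test function g
    return sp.expand(A(B(g)) - B(A(g)))

# Deformation (O1): [O_mu, O_nu] = i M_{mu nu}
for mu in range(4):
    for nu in range(4):
        lhs = comm(lambda g: J(mu, 4, g), lambda g: J(nu, 4, g), f)
        rhs = sp.I * J(mu, nu, f)
        assert sp.simplify(lhs - rhs) == 0
```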
Before ending this section, let us consider, for reasons to be
clarified below, one more deformation in relativity by imposing a
third invariant. Here, there is even less physical guidance on what
should be the appropriate quantity to consider, but an infrared
bound seems to be quite meaningful \cite{tsr,spha}. As we will
argue, it can, for example, end the many possible iterative
deformations that can be introduced along these lines. As in the
earlier discussions, let us introduce a parameter $\rho$ external to
the five-dimensional manifold that we have been considering {and}
for which the coordinates are $(x^{0}=c\,t, x^{i}, x^{4}=\kappa
c\,\sigma)$. Following the discussion of Galilean boost, let us
introduce special translations of the form
\begin{equation} x^{\scriptscriptstyle A}\to x^{\scriptscriptstyle
A} + \Delta x^{\scriptscriptstyle A}\;,\quad \Delta x^{\scriptscriptstyle A} (\rho) = {\mathcal
V}^{\scriptscriptstyle A} \rho \;, \label{vA}
\end{equation}
where we have identified
${\mathcal V}^{\scriptscriptstyle A} = \frac{dx^{\scriptscriptstyle A}}{d\rho}$. The
generators, $O_{\!\scriptscriptstyle A}^{\prime}$, of these translations will obey
the same commutation relations as the conventional five-dimensional
translation generators. Furthermore, let us extend the 5D manifold
by including this new coordinate $\rho$ as $x^{\scriptscriptstyle\mathcal M} =
(x^{\scriptscriptstyle A}, \ell\rho), {\mathcal M} = 0,1,2,3,4,5$ where $\ell$ is
an invariant length. With this choice of the new coordinate we see
that the new coordinate $\rho$ is dimensionless. As a result,
${\mathcal V}^{\scriptscriptstyle A}$ defined in Eq.(\ref{vA}) has the dimension
of length and we identify this with a coordinate vector of
translation ${\mathcal V}^{\scriptscriptstyle A} = z^{\scriptscriptstyle A}$. (The meaning of
this vector will be discussed in Sec.V.) As in Galilean relativity,
we see that $z^{\scriptscriptstyle A}$ can take any arbitrary value. However,
following the earlier discussion on special relativity, let us
introduce a coordinate vector on this six dimensional manifold as
\begin{equation}
X^{\!\scriptscriptstyle\ssc\mathcal M} = (X^{\!\scriptscriptstyle A}, X^{\scriptscriptstyle 5})
= \left(\frac{z^{\scriptscriptstyle A}}{\sqrt{\ell^{2} - \eta_{\scriptscriptstyle A\!B}z^{\scriptscriptstyle A}z^{\scriptscriptstyle B}}},
\frac{\ell}{\sqrt{\ell^{2} - \eta_{\scriptscriptstyle A\!B}z^{\scriptscriptstyle A}z^{\scriptscriptstyle B}}}\right)
= \left(G\gamma^{\scriptscriptstyle A}, G\right)\;, \label{X0}
\end{equation}
with $\gamma^{A} = z^{A}/\ell$. It is clear that this coordinate vector will satisfy the condition
\begin{equation}
\eta_{\scriptscriptstyle\mathcal M\!\mathcal N} X^{\!\scriptscriptstyle\mathcal M}X^{\!\scriptscriptstyle\mathcal N} = \eta_{\scriptscriptstyle A\!B} X^{\!\scriptscriptstyle A}X^{\!\scriptscriptstyle B}
- \left(X^{\!\scriptscriptstyle 5}\right)^{2} = -1 \;, \label{X}
\end{equation}
which, in turn, will imply that
\begin{equation}
\eta_{\scriptscriptstyle A\!B} z^{\scriptscriptstyle A}z^{\scriptscriptstyle B} = \ell^{2} \left(1 - \frac{1}{G^{2}}\right) \leq \ell^{2} \;.
\label{zbound}
\end{equation}
The constraint introduces an invariant length scale into the
problem. This construction also makes clear that the special
five-dimensional translations $z^{\scriptscriptstyle A}$ live on the coset
$z^{\scriptscriptstyle A} \in ${\small \boldmath $SO(1,5)/SO(1,4)$}, instead of
$z^{\scriptscriptstyle A} \in ${\small\boldmath $R^5$} and the enlarged de Sitter
group {\small \boldmath$SO(1,5)$} corresponds to the symmetry of
this deformed relativity. The new translations can now be thought
of as a new kind of boost described by the generators
\begin{equation} \label{tboost} J_{\!\scriptscriptstyle A5} \equiv O_{\!\scriptscriptstyle A}^{\prime} = i\,
(x_{\!\scriptscriptstyle A}
\partial_{\scriptscriptstyle 5} -x_{\scriptscriptstyle 5}\, \partial_{\!\scriptscriptstyle A}) \;, \end{equation} with
\begin{equation} [J_{\!\scriptscriptstyle A\!B}, O_{\!\scriptscriptstyle C}^{\prime} ] =
i \,(\eta_{\scriptscriptstyle B\!C} O_{\!\scriptscriptstyle A}^{\prime} - \eta_{\scriptscriptstyle A\!C} O_{\!\scriptscriptstyle B}^{\prime} ) \;.
\end{equation}
The deformation relative to the {\small\boldmath $ISO(1,4)$} algebra is now given by
\begin{equation}
[O_{\!\scriptscriptstyle A}^{\prime} , O_{\!\scriptscriptstyle B}^{\prime} ] \longrightarrow i\, J_{\scriptscriptstyle A\!B} \;,\label{O'}
\end{equation}
for $A,B=0, 1,2,3,4$. Indeed, all the symmetry generators can be written as
\begin{equation} \label{15}
J_{\scriptscriptstyle\mathcal M\!\mathcal N}
= i\, (x_{\!\scriptscriptstyle\mathcal M} \,\partial_{\!\scriptscriptstyle\mathcal N}
-x_{\!\scriptscriptstyle\mathcal N}\, \partial_{\!\scriptscriptstyle\mathcal M}) \;,
\end{equation}
for ${\mathcal M,\mathcal N}= 0,1,2,3,4,5$, giving the linear realization of the {\small \boldmath $SO(1,5)$}
symmetry. This is what we consider to be the true (full) symmetry of quantum relativity.
Discussions in the rest of the paper will {attempt to} justify this choice as a sensible one.
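As a final algebra check (our addition), all the generators can be realized as $6\times 6$ matrices in the vector representation, $(J_{\scriptscriptstyle A\!B})^{\scriptscriptstyle\mathcal M}{}_{\scriptscriptstyle\mathcal N} = i\,(\delta^{\scriptscriptstyle\mathcal M}_{\scriptscriptstyle A}\eta_{\scriptscriptstyle B\mathcal N} - \delta^{\scriptscriptstyle\mathcal M}_{\scriptscriptstyle B}\eta_{\scriptscriptstyle A\mathcal N})$, and both Eq. (\ref{so}) and the deformation (\ref{O'}) verified numerically:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0, -1.0, -1.0])  # SO(1,5) metric
d = 6

def J(a, b):
    # Vector representation: (J_ab)^M_N = i (delta^M_a eta_bN - delta^M_b eta_aN)
    m = np.zeros((d, d), dtype=complex)
    m[a, :] += 1j * eta[b, :]
    m[b, :] -= 1j * eta[a, :]
    return m

def comm(A, B):
    return A @ B - B @ A

# Eq. (so): [J_AB, J_LN] = i (eta_BL J_AN - eta_AL J_BN + eta_AN J_BL - eta_BN J_AL)
for a in range(d):
    for b in range(d):
        for l in range(d):
            for n in range(d):
                lhs = comm(J(a, b), J(l, n))
                rhs = 1j * (eta[b, l] * J(a, n) - eta[a, l] * J(b, n)
                            + eta[a, n] * J(b, l) - eta[b, n] * J(a, l))
                assert np.allclose(lhs, rhs)

# Deformation (O'): the translational boosts O'_A = J_{A5} close into J_AB
for a in range(5):
    for b in range(5):
        assert np.allclose(comm(J(a, 5), J(b, 5)), 1j * J(a, b))
```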
We note that because of the constraint of Eq.(\ref{X}), the relevant
part of the six-dimensional geometry is actually a five-dimensional
hypersurface given by \begin{equation} \eta_{\scriptscriptstyle\mathcal M\!\mathcal N}
X^{\!\scriptscriptstyle\mathcal M}X^{\!\scriptscriptstyle\mathcal N} = -1\;. \label{X1}
\end{equation} (As we discuss later in Sec.V, the coordinates $x^{\scriptscriptstyle\mathcal
M}$ and $X^{\!\scriptscriptstyle\mathcal M}$ give different parametrizations of
a point in this six dimensional manifold.) The five-dimensional
hypersurface does not admit simple translational symmetry along any
of the six coordinates anymore. Hence, the above scheme of
relativity deformations naturally ends here. To be more specific, it
ends once we put in a deformation that imposes an invariant bound on
the displacement vector generalizing the space-time coordinate
itself. The transformations generated by operators in
Eq.(\ref{tboost}) are isometries of the 5D hypersurface mixing
$x^{\scriptscriptstyle 5}$ with the other coordinates. We call them (de Sitter)
translational boosts. Taking this as the quantum relativity forces
us to consider the 5D hypersurface $\mbox{dS}_{\scriptscriptstyle 5}$, which is a
de Sitter space compatible with a positive cosmological constant, as
the arena for (quantum) space-time. However, we not only have
$\mbox{dS}_{\scriptscriptstyle 5}$ having one more dimension than the
$\mbox{dS}_{\scriptscriptstyle 4}$ curved space-time conventionally considered in
the cosmology literature, but quantum relativity also suggests that
the extra coordinates $x^{\scriptscriptstyle 4}$ and $x^{\scriptscriptstyle 5}$ are quite
different from the conventional (spacelike) space-time coordinates.
\section{Some Physics of the Momentum Boosts and the $x^{\scriptscriptstyle 4}$ coordinate}
In this section, we focus on the relativity with only one extra
invariant scale, $\kappa c$. Discussions here can be considered as
{relevant} only to the intermediate case without the last
deformation involving the invariant length $\ell$. The deformed
relativity up to the level of {\small \boldmath $SO(1,4)$} is
essentially the same as the DSR constructions, with a different
{parametrization for} the energy-momentum surface defined by
Eq.(\ref{pi2}) \cite{KNds}. The linear realization of the
transformations presented here has been discussed implicitly, most
notably in Ref.\cite{GL}, although the physics involved is not
clearly discussed. We view the transformations here as what they
should be, namely, transformations of (quantum) reference frames, in
order to extract a sensible interpretation of the
physics issues involved. We want to emphasize right away that
the deformation, introducing the new momentum boosts as distinct
from the Lorentz boosts in the linear realization through the
5-geometry, is characterized by a central idea contained essentially
in the defining relation $p^\mu\equiv\frac{dx^\mu}{d\sigma}$. This
is nothing less than introducing {\em a new definition of the
energy-momentum} 4-vector, whose implications will be discussed
{later} in this section. To emphasize, we note that $\sigma$ (or
$x^{\scriptscriptstyle 4}$) as a coordinate is external to the four-dimensional
space-time, and hence $p^\mu$ so defined is different from the old
definition of (Einstein) energy-momentum in the 4D space-time. {\em
In fact, we consider it necessary to take special caution against
thinking of $x^{\scriptscriptstyle 4}$ as simply an extra space-time coordinate.
}
The new relativity (in this intermediate case) only adds a new
dimension parametrized by $x^{\scriptscriptstyle 4}$ and we note that the set of
Lorentz transformations continue to be a part of the isometry group
of the extended 5D manifold characterizing rotations within any 4D
space-time sub-manifold. However, there are also new symmetry
transformations in this 5D manifold. These are the momentum boosts.
To better appreciate the physics of the momentum boosts generated by
the $O_{\!\mu}$'s, let us analyze the finite transformations under
such boosts and, in particular, examine the transformation of the
energy-momentum 4-vector. To keep the discussion parallel to what we
know in Einstein relativity, let us summarize some of the essential
formulae from the latter. We recall that in an inertial frame
characterized by the velocity $\vec{\beta} = \vec{v}/c$ (note that
we use the $\vec{\cdot}$ notation in this paper to denote a generic
vector defined on a manifold of any dimension; whenever ambiguities
are likely to arise, we will use a notation such as
$\vec{\cdot}^{\,\,\scriptscriptstyle n}$ to indicate explicitly the
dimensionality of the vector), the coordinates transform as
\begin{eqnarray}
x^{\prime\scriptscriptstyle\, 0} & = & \gamma \left(x^{\scriptscriptstyle 0} - \vec{\beta}\cdot \vec{x}\right), \nonumber\\
\vec{x^{\prime}} & = & \vec{x} + \vec{\beta}
\left(\frac{(\gamma - 1)}{\beta^{2}} \vec{\beta}\cdot \vec{x} - \gamma x^{\scriptscriptstyle 0}\right)\;, \label{boost}
\end{eqnarray}
where $\beta^{2} = \vec{\beta}\cdot \vec{\beta}$ and $\gamma = 1/\sqrt{1-\beta^{2}}$
denotes the Lorentz contraction factor. Furthermore, if we have a particle moving with a
velocity $\vec{\beta_{\scriptscriptstyle 1}} = \vec{v_{\!\scriptscriptstyle 1}}/c$, then in the boosted frame, it will have
a velocity given by
\begin{equation}
\vec{\beta_{1}^{\,\prime}} = \frac{\gamma^{-1}}{1 - \vec{\beta}\cdot \vec{\beta_{\scriptscriptstyle 1}}} \left[\vec{\beta_{\scriptscriptstyle 1}}
+ \vec{\beta}\left(\frac{(\gamma - 1)}{\beta^{2}} \vec{\beta}\cdot \vec{\beta_{\scriptscriptstyle 1}} - \gamma\right)\right]\;.
\label{velocity}
\end{equation}
This gives the formula for the composition of velocities and, in
particular, when $\vec{\beta} = \beta (1,0,0)$ and $\vec{\beta_{\scriptscriptstyle
1}} = \beta_{\scriptscriptstyle 1} (1,0,0)$, reduces to the well-known formula
\begin{equation}
\beta_{\scriptscriptstyle 1}^{\,\prime} = \frac{\beta_{\scriptscriptstyle 1} - \beta}{1 - \beta\beta_{\scriptscriptstyle 1}}\;,
\end{equation}
which can also be written as
\begin{equation}
v_{\!\scriptscriptstyle 1}^{\prime} = \frac{v_{\!\scriptscriptstyle 1} - v}{1 - \frac{vv_{\!\scriptscriptstyle 1}}{c^{2}}}\;. \label{velocity1}
\end{equation}
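As a quick numerical sanity check of Eqs.(\ref{boost}) and (\ref{velocity}), the following Python sketch (units with $c=1$; the sample values are arbitrary and not from the text) confirms the collinear reduction to Eq.(\ref{velocity1}) and the invariance of the Minkowski interval under the boost:

```python
import numpy as np

def lorentz_boost(beta, x0, xvec):
    """Coordinate transformation of Eq. (boost) for a frame velocity beta (units of c)."""
    b2 = beta @ beta
    g = 1.0 / np.sqrt(1.0 - b2)                     # Lorentz factor gamma
    x0p = g * (x0 - beta @ xvec)
    xp = xvec + beta * ((g - 1.0) / b2 * (beta @ xvec) - g * x0)
    return x0p, xp

def compose_velocity(beta, beta1):
    """Velocity of a particle (beta1) seen from the boosted frame, Eq. (velocity)."""
    b2 = beta @ beta
    g = 1.0 / np.sqrt(1.0 - b2)
    return (beta1 + beta * ((g - 1.0) / b2 * (beta @ beta1) - g)) / (g * (1.0 - beta @ beta1))

beta  = np.array([0.6, 0.0, 0.0])
beta1 = np.array([0.8, 0.0, 0.0])
bp = compose_velocity(beta, beta1)
# collinear case reduces to the familiar (beta1 - beta)/(1 - beta*beta1)
print(bp[0])                                        # ~0.3846

# the boost is an isometry: x0^2 - |x|^2 is unchanged
x0, xvec = 1.7, np.array([0.3, -0.5, 0.9])
x0p, xp = lorentz_boost(beta, x0, xvec)
print(np.isclose(x0p**2 - xp @ xp, x0**2 - xvec @ xvec))   # True
```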
The momentum boosts of the five-dimensional geometry generated by
$O_{\!\mu}$ can also be understood along the same lines. If we
consider an inertial frame characterized by the momentum 4-vector
$\vec{\alpha} = \vec{p}/\kappa c$, then the 5D coordinates will
transform as
\begin{eqnarray}
x^{\,\prime\, \scriptscriptstyle 4} & = & \Gamma \left(x^{\scriptscriptstyle 4} - \vec{\alpha}\cdot \vec{x}\right)\;, \nonumber\\
x^{\,\prime\, \mu} & = & x^{\mu} + \alpha^{\mu}\left(\frac{(\Gamma - 1)}{\alpha^{2}}
\vec{\alpha}\cdot \vec{x} - \Gamma x^{\scriptscriptstyle 4}\right)\;, \label{pboost}
\end{eqnarray}
where $\vec{\alpha}\cdot \vec{x}= \eta_{\mu\nu} \,\alpha^{\mu}x^{\nu}$,
$\alpha^{2} = \vec{\alpha}\cdot \vec{\alpha}$, and $\Gamma = 1/\sqrt{1 - \alpha^{2}}$
is the analogous ``contraction factor''. It can be checked easily that under these
transformations, the metric of the manifold remains invariant, namely,
\begin{equation}
\eta^{\prime\,\scriptscriptstyle A\!B} = \eta^{\scriptscriptstyle A\!B}\;,\qquad \eta_{\scriptscriptstyle A\!B}^{\prime} = \eta_{\scriptscriptstyle A\!B}\;,
\end{equation}
so that the new transformations correspond to isometries of the manifold.
Furthermore, if we have a particle moving with a momentum
$\vec{{\alpha}_{\!\scriptscriptstyle 1}} = \vec{{p}_{\!\scriptscriptstyle 1}}/\kappa c$, then in
the momentum boosted frame it will have a momentum (see
Eq.(\ref{velocity}))
\begin{equation}
\vec{\alpha^{\prime}_{\!\scriptscriptstyle 1}} = \frac{\Gamma^{-1}}{1 - \vec{\alpha}\cdot \vec{\alpha_{\!\scriptscriptstyle 1}}}
\left[\vec{\alpha_{\!\scriptscriptstyle 1}} + \vec{\alpha}
\left(\frac{(\Gamma - 1)}{\alpha^{2}} \vec{\alpha}\cdot \vec{\alpha_{\!\scriptscriptstyle 1}} - \Gamma\right)\right]\;. \label{alpha}
\end{equation}
This gives the formula for the composition of momentum under momentum boosts
and can also be written equivalently as
\begin{equation}
\vec{p^{\prime}_{\!\scriptscriptstyle 1}} = \frac{\Gamma^{-1}}{1- \frac{\vec{p}\cdot \vec{p_{\!\scriptscriptstyle 1}}}{\kappa^{2}c^{2}}}
\left[ \vec{p_{\!\scriptscriptstyle 1}} + \vec{p} \left(\frac{(\Gamma - 1)}{p^{2}} \vec{p}\cdot \vec{p_{\!\scriptscriptstyle 1}} - \Gamma\right)\right]\;. \label{p}
\end{equation}
In particular, if we consider a momentum boost along the $x^{\scriptscriptstyle
0}$ direction generated by $O_{\!\scriptscriptstyle 0}$ characterized by $\vec{p}
= p (1,0,0,0)$, then the composition of the momentum given by
Eq.(\ref{p}) leads to
\begin{eqnarray}
p_{\!\scriptscriptstyle 1}^{\prime\, \scriptscriptstyle 0} & = & \frac{p_{\!\scriptscriptstyle 1}^{\scriptscriptstyle 0} - p}{1 - \frac{pp_{\!\scriptscriptstyle 1}^{\scriptscriptstyle 0}}{\kappa^{2}c^{2}}}\;,\nonumber\\
\vec{p^{\prime}_{\!\scriptscriptstyle 1}}^{\scriptscriptstyle 3} & = &
\frac{\sqrt{1 - \frac{p^{2}}{\kappa^{2}c^{2}}}}{1 - \frac{pp_{\!\scriptscriptstyle 1}^{\scriptscriptstyle 0}}{\kappa^{2}c^{2}}}\, \vec{p_{\!\scriptscriptstyle 1}}^{\scriptscriptstyle 3} \;, \label{0boost}
\end{eqnarray}
which can be compared with the {formula Eq.(\ref{velocity1}) for}
velocity composition in Einstein relativity. Furthermore, if we
assume that the particle characterized by a rest mass $m_{\scriptscriptstyle 1}$
is in its rest frame so that $\vec{p_{\!\scriptscriptstyle 1}} = m_{\scriptscriptstyle 1}c
(1,0,0,0)$ and the momentum boost is of the form $\vec{p} = m
c(1,0,0,0)$, then Eq.(\ref{0boost}) leads to the composition law
\begin{equation} \label{m}
m_{\scriptscriptstyle 1}^{\prime} = \frac{m_{\scriptscriptstyle 1} - m}{1 - \frac{mm_{\scriptscriptstyle 1}}{\kappa^{2}}} \;.
\end{equation}
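The claimed invariance under Eq.(\ref{pboost}), and the composition law Eq.(\ref{m}), can be checked numerically. The following Python sketch assumes units with $\kappa c = 1$ (and masses in units of $\kappa$); the sample values are arbitrary:

```python
import numpy as np

eta4 = np.diag([1.0, -1.0, -1.0, -1.0])   # 4D Minkowski metric
kc = 1.0                                  # units with kappa*c = 1

def momentum_boost(alpha, x, x4):
    """5D coordinate transformation of Eq. (pboost); alpha^mu = p^mu/(kappa c)."""
    a2 = alpha @ eta4 @ alpha                      # Minkowski square
    G = 1.0 / np.sqrt(1.0 - a2)
    ax = alpha @ eta4 @ x
    x4p = G * (x4 - ax)
    xp = x + alpha * ((G - 1.0) / a2 * ax - G * x4)
    return xp, x4p

alpha = np.array([0.3, 0.1, -0.2, 0.05])
x  = np.array([0.7, -1.2, 0.4, 2.0])
x4 = 0.9
xp, x4p = momentum_boost(alpha, x, x4)

# the 5D interval (x^4 spacelike) is invariant under the momentum boost
s  = x  @ eta4 @ x  - x4**2
sp = xp @ eta4 @ xp - x4p**2
print(np.isclose(s, sp))        # True

# mass composition law Eq. (m), rest masses in units of kappa
m, m1 = 0.3, 0.5
m1p = (m1 - m) / (1.0 - m * m1)
print(m1p)                      # ~0.2353
```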
An Einsteinian particle with the rest mass $m_{\scriptscriptstyle 1}$ has momentum
that can be parametrized as $\vec{p_{\!\scriptscriptstyle 1}} = (\gamma m_{\scriptscriptstyle
1}\,c, \gamma m_{\scriptscriptstyle 1}\,c \, \beta_{\scriptscriptstyle 1}^{\,i})$ which
satisfies the on shell condition (a term borrowed from relativistic
quantum field theory)
\[
\eta_{\mu\!\nu} p_{\!\scriptscriptstyle 1}^{\mu}p_{\!\scriptscriptstyle 1}^{\nu} = m_{\scriptscriptstyle 1}^{2}c^{2} \;,
\]
in any Lorentzian frame. In particular, there is the particle rest
frame in which we have $\vec{p_{\!\scriptscriptstyle 1}} = (m_{\scriptscriptstyle
1}\,c,0,0,0)$. We normally think of this as the reference frame
defined by the particle itself, or the frame where the particle is
the observer. As a result, the particle does not see its own
motion, but does see its own mass or energy. The introduction of the
momentum boosts relating {different} reference frames generalizes
that perspective. Just as $\beta_{\scriptscriptstyle 1}^2$ is not invariant under
Lorentz boosts, {$p_{\!\scriptscriptstyle 1}^2$} is {also} not invariant under the
momentum boosts. {Furthermore, just as} there is the preferred rest
frame {with $\beta_{\scriptscriptstyle 1}^2=0$,} or equivalently {with} the
4-velocity characterized by {$\vec{u_{\scriptscriptstyle 1}} = (1,0,0,0)$},
{similarly} the linearly realized DSR introduces a preferred
``particle'' frame with $p_{\!\scriptscriptstyle 1}^2=0$, which is equivalently
characterized by the 5-momentum {$\vec{\pi_{\!\scriptscriptstyle 1}}
=(0,0,0,0,1)$} as {obtained from Eq.(\ref{m}) by} setting
{$p_{\!\scriptscriptstyle 1}^{\mu} = p^{\mu}$}. This is the ``true particle
frame'' in which the ``particle'' does not see itself, neither its
motion nor its mass/energy. The rest frame of Einstein relativity is
only the frame that has no relative motion {with respect} to the
particle.
As we have mentioned earlier, we would like to view the momentum
boosted frames as quantum reference frames. We will see here that
the interpretation is in a way necessary, as we look into how other
momentum 4-vectors appear in such a reference frame. Looking at
Eq.(\ref{0boost}) we see that
\begin{equation}
\eta_{\mu\!\nu} \,p_{\!\scriptscriptstyle 1}^{\prime\mu} p_{\!\scriptscriptstyle 1}^{\prime\nu}
= \frac{1}{\left(1 - \frac{pp_{\!\scriptscriptstyle 1}^{\scriptscriptstyle 0}}{\kappa^{2}c^{2}}\right)^{2}}
\left[(p_{\!\scriptscriptstyle 1}^{\scriptscriptstyle 0} - p)^{2} - \left(1 - \frac{p^{2}}{\kappa^{2}c^{2}}\right)
(\vec{p_{\!\scriptscriptstyle 1}}^{\scriptscriptstyle 3})^2 \right]
\neq \eta_{\mu\!\nu}\, p_{\!\scriptscriptstyle 1}^{\mu} p_{\!\scriptscriptstyle 1}^{\nu} \;, \label{massshell}
\end{equation}
where $(\vec{p_{\!\scriptscriptstyle 1}}^{\scriptscriptstyle 3})^2$ is the magnitude of the
momentum 3-vector. Because of the complicated dependence on $c$
here, we cannot even write $\eta_{\mu\nu} \, p_{\!\scriptscriptstyle 1}^{\prime
\mu} p_{\!\scriptscriptstyle 1}^{\prime\nu} = m_{\scriptscriptstyle 1}^{\prime\,2} \, c^{2}$.
The quantum field theoretical concept of off-shellness is what we
consider applicable here. Quantum states are either on shell or off
shell, as observed from a classical frame. When boosted to a quantum
frame characterized by even an on shell quantum state, the state
does not observe itself, and observes {the} other originally on
shell states as generally off shell. The concept of {an} off shell
{state} is related to the uncertainty principle. Unlike a classical
particle, a quantum state, even on shell, {has associated}
uncertainties. If such a state is to be taken as the reference
frame, or observer (measuring apparatus), it is very reasonable to
expect {that} the apparatus imposes its own uncertainties onto
whatever it {observes/measures}. We have to be cautious here though
about whether a quantum measuring apparatus has any practical
possibility of being realized. After all, the only true practical
observers, {namely, human beings,} are basically classical.
We see that the conceptually small step that we take here is indeed
a bold one. Our explicit formulation of the natural linear
realization of the momentum boosts of DSR looks highly
unconventional. It begs the question of whether a consistent and viable
phenomenological interpretation exists --- a question to which we
provide here only a partial answer in the
affirmative. In Einstein relativity, we have $\eta_{\mu\!\nu} p^\mu
p^\nu = m^2 c^2$ and $p^\mu=m c\, u^\mu$. In quantum physics, we are
familiar with the concept of off shell states which violate the
first equation. Our explicit analysis of the momentum boost
illustrates that the on shell condition is not preserved under a
momentum boost. The momentum boost analyzed above is one of the
simplest (namely, a boost along the $x^{\scriptscriptstyle 0}$ axis), but our
conclusions obviously hold for any other more complicated momentum
boost. In fact, it is easily derived from Eq. (\ref{p}) that an
arbitrary momentum boost leads to
\begin{equation}
\eta_{\mu\nu} \, p_{\!\scriptscriptstyle 1}^{\prime\,\mu} p_{\!\scriptscriptstyle 1}^{\prime\,\nu}
= \frac{1}{\left(1 - \frac{\vec{p}\cdot \vec{p}_{\!\scriptscriptstyle 1}}{\kappa^{2}c^{2}}\right)^{2}}
\left[(\vec{p}_{\!\scriptscriptstyle 1} - \vec{p})^{2} + \frac{1}{\kappa^{2}c^{2}}
\left((\vec{p}\cdot \vec{p}_{\!\scriptscriptstyle 1})^{2} - p^{2} p_{\!\scriptscriptstyle 1}^{2}\right)\right]
\neq \eta_{\mu\nu} p_{\!\scriptscriptstyle 1}^{\mu} p_{\!\scriptscriptstyle 1}^{\nu} \;.
\end{equation}
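The closed form above can be checked directly against Eq.(\ref{p}). The Python sketch below (units with $\kappa c = 1$, arbitrary sample momenta) confirms both the formula and the failure of mass-shell preservation:

```python
import numpy as np

eta4 = np.diag([1.0, -1.0, -1.0, -1.0])   # 4D Minkowski metric
kc = 1.0                                  # units with kappa*c = 1

def compose_momentum(p, p1):
    """Momentum of a particle (p1) seen from the frame momentum-boosted by p, Eq. (p)."""
    p2 = p @ eta4 @ p
    G = 1.0 / np.sqrt(1.0 - p2 / kc**2)
    pp1 = p @ eta4 @ p1
    return (p1 + p * ((G - 1.0) / p2 * pp1 - G)) / (G * (1.0 - pp1 / kc**2))

p  = np.array([0.4, 0.1, -0.1, 0.2])
p1 = np.array([0.5, -0.2, 0.3, 0.1])
p1p = compose_momentum(p, p1)

lhs = p1p @ eta4 @ p1p
pp1 = p @ eta4 @ p1
rhs = ((p1 - p) @ eta4 @ (p1 - p)
       + (pp1**2 - (p @ eta4 @ p) * (p1 @ eta4 @ p1)) / kc**2) / (1.0 - pp1 / kc**2)**2
print(np.isclose(lhs, rhs))                # True: the closed form is reproduced
print(np.isclose(lhs, p1 @ eta4 @ p1))     # False: the mass shell is not preserved
```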
Rather, the condition $p^\mu=m c\, u^\mu$ is actually given up right at the beginning
of the formulation. The linear realization of momentum boosts as distinct from velocity
(Lorentz) boosts introduces the defining relation
$p^{\mu} = \frac{dx^{\mu}}{d\sigma} = \kappa c \frac{dx^{\mu}}{dx^{\scriptscriptstyle 4}}$,
in contrast to the Einstein relativity expression $m c\, u^\mu = m \frac{dx^{\mu}}{d\tau}$,
which reduces to $m c\, \frac{dx^{\mu}}{dx^{\scriptscriptstyle 0}}$ in the nonrelativistic limit.
So far, we have not clarified the nature of the extra coordinate
$\sigma$. The defining relation gives $x^{\scriptscriptstyle 4} = \kappa c\, \sigma$, which is
necessitated by the desire to have a bound on the energy-momentum
4-vector. This new coordinate has a spacelike signature, but has
the dimensions of time/mass. This suggests that from the physics
point of view it has the character of time as opposed to space
(which its signature would suggest). In fact, $\sigma$ should be
considered neither as space nor as time in this 5D space. In this
sense, the frames in this 5D manifold that we are analyzing should
be considered different from the ones that arise in a naive 5D
extension of the usual 4D space-time by adding an extra spatial
dimension. To appreciate this a bit more and also to bring out the
nature of the coordinate $\sigma$, let us consider the following.
Let us recall that in deforming Galilean relativity to Einstein
relativity, one introduces Lorentz boosts which mix up space and
time and are characterized by a velocity. The {(instantaneous)}
velocity of a particle is defined as $v^{i} = \frac{dx^{i}}{dt} = c
\frac{dx^{i}}{dx^{\scriptscriptstyle 0}}$. This three-dimensional velocity, of
course, does not transform covariantly under a Lorentz boost.
However, the extra coordinate $x^{\scriptscriptstyle 0}$ is in a way already there
as the time parameter $t$ and the velocity is also there as the time
derivative in the 3D Galilean theory. In a parallel manner, in
deforming Einstein relativity to DSR, we introduce momentum boosts
which mix up the extra coordinate with the 4D coordinates and are
characterized by a momentum. In this case, the momentum of a
particle is then to be defined as $p^{\mu} =
\frac{dx^{\mu}}{d\sigma} = \kappa c \frac{dx^{\mu}}{dx^{\scriptscriptstyle 4}}$.
This momentum does not transform covariantly under momentum boosts
(see Eq.(\ref{p})). However, unlike the case of deforming the
Galilean velocity boosts to Lorentzian ones, here the extra
coordinate $x^{\scriptscriptstyle 4}$ or $\sigma$ as a parameter is new, and the
definition of the momentum as a $\sigma$ derivative does not
coincide with the {conventional} definition of the momentum in
Newtonian physics or Einstein relativity. In the latter case, one
defines $p_{\mbox{\tiny (ER)}}^{\mu} = m \frac{dx^{\mu}}{d\tau}$
where $\tau$ represents the proper time. We know that the
{definition of} $p_{\mbox{\tiny (ER)}}^{\mu}$
and its Newtonian limit are valid concepts. So, we do need to reconcile
the two definitions of momentum {in} the regime of Einstein
relativity. That can be achieved if we identify $\sigma =
\frac{\tau}{m}$ which has the dimension of time/mass as we have
pointed out earlier. With this identification we see that the
$\sigma$ coordinate for a (classical) particle observed from a
classical frame is essentially the Einstein proper time. {(}In
fact, to the extent that {the definition of} $p_{\mbox{\tiny
(ER)}}^{\mu}$ is valid in quantum mechanics, we expect the relation
to be valid to a certain extent even for quantum states observed
from a classical frame.{)} The mass factor, however, gives particles
in the state of motion different $\sigma$ locations. Note that such
an identification holds only for $m\neq 0$ {just as the
identification} $p_{\mbox{\tiny (ER)}}^{\mu} = m
\frac{dx^{\mu}}{d\tau}$ {holds only when} $m\neq 0$. For the
massless photon, for example, $p_{\mbox{\tiny (ER)}}^{\mu}$
{instead}
depends on the photon frequency and wavelength -- an idea with origin in
quantum physics. That may actually be taken as a hint that the
classical notion of $p_{\mbox{\tiny (ER)}}^{\mu} = m \frac{dx^{\mu}}{d\tau}$
is not fully valid in true quantum physics.
Finally, we comment briefly on the $O_i$ momentum boosts here. Such
a boost is one characterized, for example, by a relative
energy-momentum $\vec{p}= (0,p,0,0)$ which cannot correspond to an
on shell state observed from the original classical reference frame.
However, if momentum boosts can take us to a quantum reference frame
{and} change the observed on shell nature of states, there is no
reason why one cannot accept the reference frame itself to be
characterized by an off shell energy-momentum vector either. Physics
described from such a frame certainly looks peculiar, though it may
not concern a classical {human} observer.
\section{On the Full Quantum Relativity and Phase-Space Symmetry}
There has already been a lot of work and discussion on the subject of DSR and TSR
(doubly and triply special relativity). In this section, we summarize
some of the results on the symmetry aspects that exist in the literature while
adapting them into our point of view. In particular, we are
interested in the formulations of TSR in Refs.\cite{tsr,CO}, which are
proposed as the full quantum relativity. The results of
Ref.\cite{tsr} start with the ``phase-space'' algebra of DSR with
noncommutative space-time coordinates, which can be written according
to our conventions as
\begin{eqnarray}
&& [M_{\mu\nu}, M_{\lambda\rho}] =
i(\eta_{\nu\!\lambda} M_{\mu\rho} - \eta_{\mu\!\lambda} M_{\nu\rho}
+ \eta_{\mu\rho} M_{\nu\lambda} - \eta_{\nu\rho} M_{\mu\lambda})\;,
\nonumber\\
&& [M_{\mu\nu}, \hat{P}_{\!\lambda}] = i \,(\eta_{\nu\!\lambda} \hat{P}_{\!\mu}
- \eta_{\mu\!\lambda} \hat{P}_{\!\nu}) \;,
\nonumber \\
&& [M_{\mu\nu}, \hat{X}_{\!\lambda}] = i \,(\eta_{\nu\!\lambda} \hat{X}_{\!\mu}
- \eta_{\mu\!\lambda} \hat{X}_{\!\nu}) \;,
\nonumber \\
&& [\hat{X}_{\mu}, \hat{X}_{\!\nu}] = \frac{i}{\kappa^2 c^2} M_{\mu\nu} \;,
\nonumber \\
&& [\hat{P}_{\mu}, \hat{P}_{\!\nu}] = 0 \;,
\nonumber \\
&& [\hat{X}_{\mu}, \hat{P}_{\!\nu}] = - i\, \eta_{\mu\nu} \;.
\label{tsr1}
\end{eqnarray}
This is identified as the algebra of DSR phase-space symmetry
and we note that $\hat{X}$ and $\hat{P}$ correspond to generators
(operators) of the algebra in contrast to the $x$ and $p$
coordinates described in the earlier sections. A second deformation
of Eq.(\ref{tsr1}) is considered with a view to implement the third
invariant as a length $\ell$ related to the cosmological constant
($\Lambda=\ell^{-2}$). In this case the commutator of {the} momentum
operators in Eq.(\ref{tsr1}) is deformed to \begin{equation} [\hat{P}_{\mu},
\hat{P}_{\!\nu}] = \frac{i}{\ell^2} M_{\mu\nu} \;. \end{equation} However,
this deformation leads to a violation {of} the Jacobi identity which
{induces} a further modification of the Heisenberg commutator
(between $\hat{X}$ and $\hat{P}$) involving a complicated
(quadratic) expression in the generators. This algebra is then
identified as the quantum algebra (of TSR). We also note here that
it was pointed out in Ref.\cite{tsr} that this algebra can be
represented in terms of coordinates and derivatives of a six-dimensional
manifold. It is important to recognize that in both
cases (DSR and TSR), in addition to the deformations, the usual 4D
coordinates are promoted to generators of the algebra, also to be
interpreted as representing a noncommutative geometry of quantum
space-time.
At this point, it is interesting to compare the original DSR to TSR
deformation of Ref.\cite{tsr} with our formulation of the quantum
relativity algebra. After the deformation from Einstein relativity
to {\small \boldmath $SO(1,4)$}, we have a linear realization of the
algebra at the level of DSR. Our relativity algebra is set on a 5D
commutative manifold. We do not have coordinate operators as
generators of the algebra. However, it is worth noting that the four
generators, $O_{\!\mu}$'s, generating momentum boosts satisfy the
same commutation relations as the $\hat{X}_{\mu}$ operators in
Eq.(\ref{tsr1}) (see, for example, Eqs.(\ref{O}) and (\ref{O1})).
Therefore, formally we can identify $O_{\!\mu}$ as $-{\kappa
\,c}\,\hat{X}_{\mu}$ with the explicit form following from
Eq.(\ref{O}) \begin{equation} \label{Xop} \hat{X}_{\mu} = -\frac{1}{\kappa \,c}\,
i\, (x_\mu \partial_{\scriptscriptstyle 4} -x_{\scriptscriptstyle 4}\, \partial_{\mu}) \;, \end{equation}
which actually seems like a very reasonable ``quantum''
generalization of the classical, or rather Einstein, space-time
position. In the limit ${\kappa \,c}\,\to \infty$ (with $\sigma\to
0$ as $1/\kappa c$) and $ i\,\partial_{\scriptscriptstyle 4}\equiv p_{\scriptscriptstyle
4}=-{\kappa \,c}$ ($p_{\scriptscriptstyle 4}=\eta_{\scriptscriptstyle 4A}p^{\scriptscriptstyle A}$), the
operators reduce to $x_\mu$. Such an identification as in
Eq.(\ref{Xop}) provides a bridge between a noncommutative geometric
description of 4D quantum space-time (details of which
await {further} investigation) and {our perspective of} quantum relativity as linearly realized
on a 5D, or eventually 6D, commutative manifold --- a geometric description
beyond the space-time perspective. We believe the two pictures to be complementary.
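The identification $O_{\!\mu} = -\kappa c\,\hat{X}_{\mu}$ can also be tested in a finite-dimensional realization. The Python sketch below builds the $5\times 5$ vector-representation generators of {\small \boldmath $SO(1,4)$} with metric $\mathrm{diag}(1,-1,-1,-1,-1)$ (our signature convention; units with $\kappa c=1$ are an assumption of the sketch) and checks the $\hat{X}$ commutators of Eq.(\ref{tsr1}):

```python
import numpy as np

eta5 = np.diag([1.0, -1.0, -1.0, -1.0, -1.0])  # SO(1,4) invariant metric
kc = 1.0                                       # units with kappa*c = 1

def J(A, B):
    """Vector-representation generator (J_AB)^C_D = i(delta^C_A eta_BD - delta^C_B eta_AD)."""
    mat = np.zeros((5, 5), dtype=complex)
    mat[A, :] += 1j * eta5[B, :]
    mat[B, :] -= 1j * eta5[A, :]
    return mat

def comm(a, b):
    return a @ b - b @ a

# identification O_mu = J_{mu 4},  X_mu = -O_mu/(kappa c), cf. Eq. (Xop)
X = [-J(mu, 4) / kc for mu in range(4)]

# [X_mu, X_nu] = (i/kappa^2 c^2) M_{mu nu}, as in Eq. (tsr1)
ok_XX = all(np.allclose(comm(X[m], X[n]), (1j / kc**2) * J(m, n))
            for m in range(4) for n in range(4))
# [M_{mu nu}, X_lam] = i(eta_{nu lam} X_mu - eta_{mu lam} X_nu)
ok_MX = all(np.allclose(comm(J(m, n), X[l]),
                        1j * (eta5[n, l] * X[m] - eta5[m, l] * X[n]))
            for m in range(4) for n in range(4) for l in range(4))
print(ok_XX, ok_MX)   # True True
```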
The symmetry can now be enlarged to {\small \boldmath $ISO(1,4)$}
by incorporating translations of the 5D manifold. The five of them
are $p_{\!\scriptscriptstyle A}\equiv i\,\partial_{\!\scriptscriptstyle A}$. Dropping $p_{\scriptscriptstyle
4}$ for the moment, they almost satisfy the full ``phase-space''
algebra of DSR. The only problem comes from the Heisenberg
commutator which is no longer canonical. While one can reasonably
argue that our linear realization of the DSR simply suggests that
the Heisenberg commutator should be modified, it is not what we want
to focus on here. We take the full quantum relativity at the next
(TSR) level, namely, with the third invariant $\ell$. Again,
forgetting the $\hat{X}_\mu$-$\hat{P}_{\mu}$ commutator for the
moment, the algebra which is essentially {\small \boldmath
$ISO(1,4)$} is deformed to {\small \boldmath $SO(1,5)$}. In this
case, it is possible to formally identify ${\ell}\,\hat{P}_{\mu}=
O_{\!\mu}^{\prime} \;(=J_{\mu\scriptscriptstyle 5})$ (see Eqs.(\ref{tboost}) and
(\ref{O'})) which gives the explicit linear realization \begin{equation}
\label{Pop} \hat{P}_{\mu} = \frac{1}{\ell}\, i\, (x_\mu
\partial_{\scriptscriptstyle 5} -x_{\scriptscriptstyle 5}\, \partial_{\mu}) \;.
\end{equation}
Once again
this gives a very reasonable ``quantum'' generalization of the
``classical'' $p_\mu$. This is seen by noting that in the limit $\ell
\,\to \infty$ (with $\rho\to 0$ as $1/\ell$), $\hat{P}_{\mu}$
reduces to $ i\,\partial_{\mu}\equiv p_{\mu}$. All of these fit in
very nicely, except for the $\hat{X}_\mu$-$\hat{P}_{\mu}$ commutator
and the issue of the extra $\hat{P}_{\scriptscriptstyle 4}$, or rather $O_{\!\scriptscriptstyle
4}^{\prime}$ which has not been addressed so far.
The missing link in the above discussion actually can be obtained
from the analysis in Ref.\cite{CO} which restores the Lie-algebraic
description of the TSR algebra by identifying the right-hand side of
the Heisenberg commutator as a central charge generator $\hat{F}$
of the original algebra with {relevant} commutators
also deformed, yielding
\begin{equation} \label{Fop}
[\hat{X}_{\mu}, \hat{P}_{\!\nu}] = -i \, \eta_{\mu\nu} \hat{F} \;,
\qquad
[\hat{X}_{\mu}, \hat{F}] = \frac{i}{\kappa^2 c^2} \hat{P}_{\mu} \;,
\qquad [\hat{P}_{\mu}, \hat{F}] = -\frac{i}{\ell^2} \hat{X}_{\mu}
\;.
\end{equation}
This identifies the resulting algebra exactly with {\small
\boldmath $SO(1,5)$}, and, {therefore}, $\hat{F}$ loses its
character as a central charge generator. It is also interesting that
this algebra has been identified as the mathematical stabilizer of
the ``Poincar\'e+Heisenberg'' symmetry \cite{CO}. The generator
$\hat{F}$ is essentially $O_{\!\scriptscriptstyle 4}^{\prime}$ (or equivalently
$\hat{P}_{\scriptscriptstyle 4}$). Explicitly, $O_{\!\scriptscriptstyle 4}^{\prime}= J_{\!\scriptscriptstyle
45} = -\kappa c \hat{F}$.
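At the {\small \boldmath $SO(1,5)$} level, the identifications in Eqs.(\ref{Xop}) and (\ref{Pop}) can likewise be checked in the $6\times 6$ vector representation. In the sketch below (units with $\kappa c=\ell=1$ are an assumption; the overall sign and normalization of $\hat{F}$ depend on conventions, so we check only proportionality to $J_{\scriptscriptstyle 45}$ and the loss of centrality):

```python
import numpy as np

eta6 = np.diag([1.0, -1.0, -1.0, -1.0, -1.0, -1.0])  # SO(1,5) invariant metric
kc, ell = 1.0, 1.0                                   # units with kappa*c = ell = 1

def J(A, B):
    """Vector-representation generator of SO(1,5)."""
    mat = np.zeros((6, 6), dtype=complex)
    mat[A, :] += 1j * eta6[B, :]
    mat[B, :] -= 1j * eta6[A, :]
    return mat

def comm(a, b):
    return a @ b - b @ a

X = [-J(mu, 4) / kc for mu in range(4)]   # X_mu = -J_{mu 4}/(kappa c), Eq. (Xop)
P = [J(mu, 5) / ell for mu in range(4)]   # P_mu =  J_{mu 5}/ell,      Eq. (Pop)

ok_XX = all(np.allclose(comm(X[m], X[n]), (1j / kc**2) * J(m, n))
            for m in range(4) for n in range(4))
ok_PP = all(np.allclose(comm(P[m], P[n]), (1j / ell**2) * J(m, n))
            for m in range(4) for n in range(4))
# the Heisenberg commutator is modified: [X_mu, P_nu] proportional to eta_{mu nu} J_45
ok_XP = all(np.allclose(comm(X[m], P[n]), (1j / (kc * ell)) * eta6[m, n] * J(4, 5))
            for m in range(4) for n in range(4))
# J_45 (essentially F) is not central: it fails to commute with X_0
not_central = not np.allclose(comm(X[0], J(4, 5)), 0.0)
print(ok_XX, ok_PP, ok_XP, not_central)   # True True True True
```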
Nonlinear realizations of the different versions of quantum
relativity focus on the description of the (four) space-time
geometry as a noncommutative geometry. Here, the nonvanishing
commutator among $\hat{X}_{\mu}$'s may be considered to result from
the curvature of the energy-momentum space [{\it cf.}
Eq.(\ref{pi2})] while the nonvanishing commutators among
$\hat{P}_{\mu}$'s are considered a result of the ``space-time''
curvature \cite{dsr2,M} as characterized by the nonvanishing
cosmological constant. Our formulation through the linear
realization yields an explicit identification of the quantum
operators as generalizations of the familiar phase-space coordinates
variables. This is based on a 6D geometry which is commutative. Both
the extended $x$-space and the $p$-space are copies of $\mbox{dS}_5$
and the isometry of each contains the full phase-space symmetry
algebra of the quantum theory of 4D noncommutative space-time. The
$x^\mu$-variables and the $p^\mu$-variables are on the same
symmetric footing. In our opinion, this is a very {attractive} and
desirable feature. It also clearly suggests that the $x^{\scriptscriptstyle 4}$
and $x^{\scriptscriptstyle 5}$ coordinates should not be interpreted as space-time
coordinates in the usual sense. The generators of the momentum and
translational de Sitter boosts can be viewed as the set of quantum
position and momentum operators.
It is interesting to note that the {\small \boldmath $SO(1,4)$}
algebra, with the $J_{\mu\scriptscriptstyle 4}$ generators taken essentially as
the ``position'' variables, is identified as the universal basis of a
noncommutative space-time description of various DSR theories
\cite{KNncg}. The full phase-space symmetry algebras, however, {are}
taken to be $\kappa$-Poincar\'e quantum algebras in different bases
\cite{KNncg}. The latter contains $\hat{P}_{\mu}$ generators
extending the {\small \boldmath $SO(1,4)$} algebra with different
deformed commutators from the trivial case of {\small \boldmath
$ISO(1,4)$} corresponding to taking different 4-coordinates for the
coset space {\small \boldmath $SO(1,4)/SO(1,3)$} for the
energy-momentum \cite{KNds}. By sticking to the 5-momentum,
$\pi^{\!\scriptscriptstyle A}$, together with a 5D geometric extension of the
space-time, our linear realization allows us to put in the next
deformation of {\small \boldmath $ISO(1,4)$} to the full quantum
relativity of {\small \boldmath $SO(1,5)$} naturally. The latter
{group} was proposed as the TSR algebra \cite{tsr} only after the
Lie-algebraic interpretation of Ref.\cite{CO}, by pulling out the
algebraic structure basically {from a deformation of} the
phase-space symmetry algebra directly, as discussed above. With our
formulation of the full quantum relativity of {\small \boldmath
$SO(1,5)$}, it is suggested that not only is the 4D space-time
noncommutativity universal, so is that for the 4D energy-momentum
space. But now both the 6-vectors $x^{\scriptscriptstyle\mathcal M}$ and
$\pi^{\scriptscriptstyle\mathcal M}$ should be living on a $\mbox{dS}_5$. It would
be interesting to see how different choices of 5-coordinates on the
two $\mbox{dS}_5$ or canonical coordinates of the ten-dimensional
``phase-space'' match onto the various DSR theories or other similar
structures for the different nonlinearly realized space-time
structures as well as the role of the related quantum algebras.
\section{The Translational Boosts and De Sitter Geometry}
Similar to the Lorentz and the de Sitter momentum boosts discussed earlier, the
introduction of the de Sitter translational boosts is characterized by the 5-vector
$\vec{z}=\frac{d\vec{x}}{d\rho}=\ell \frac{d\vec{x}}{dx^{\scriptscriptstyle 5}}$. We note
that since the new coordinate $\rho$ is dimensionless, the parameter of boost in this
case carries the same dimension as the coordinates themselves. This should be contrasted
with velocity which represents the parameter of transformation in the case of Lorentz
boosts, and the momentum for the momentum boosts. A generic
$O^\prime$ boost characterized by a vector $\vec{z_{t}}$ can be denoted by the
transformation of the coordinates
\begin{eqnarray}
&& x^{\prime\,{\scriptscriptstyle 5}} = G_{\!t} \Big(x^{\scriptscriptstyle 5} - \frac{\vec{z_t}\cdot \vec{x}}{\ell}\Big)
= G_{\!t} (x^{\scriptscriptstyle 5} - \vec{\gamma_t}\cdot \vec{x}) \;, \nonumber\\
&& \vec{x^{\prime}} = \vec{x} + \vec{\gamma_t}
\left(\frac{G_{\!t} -1}{\gamma_{t}^2} \vec{\gamma_t}\cdot \vec{x} - G_{\!t} \, x^{\scriptscriptstyle 5} \right)\;,
\label{dstb0v}
\end{eqnarray}
where we have identified (as defined earlier)
\begin{equation}
\vec{\gamma_t} = \frac{\vec{z_t}}{\ell}\;,
\quad G_{\!t} = \frac{\ell}{\sqrt{\ell^{2} - z_{t}^{2}}}\;,
\quad \vec{\gamma}_{t}\cdot \vec{x} = \eta_{\scriptscriptstyle A\!B} \gamma_{t}^{\scriptscriptstyle A} x^{\scriptscriptstyle B}\;,
\quad z_{t}^{2} = \ell^{2}\gamma_{t}^{2} = \eta_{\scriptscriptstyle A\!B} z_{t}^{\scriptscriptstyle A} z_{t}^{\scriptscriptstyle B}\;.
\end{equation}
The transformations are analogous to Eqs.(\ref{boost}) and
(\ref{pboost}) representing Lorentz boosts and de Sitter momentum
boosts and the 6D metric tensors $\eta^{\scriptscriptstyle\mathcal M \mathcal N},
\eta_{\scriptscriptstyle\mathcal M \mathcal N }$ are preserved under these
translational boosts.
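The isometry property of Eq.(\ref{dstb0v}) can be confirmed numerically; a short Python sketch (units with $\ell=1$, arbitrary sample values):

```python
import numpy as np

eta5 = np.diag([1.0, -1.0, -1.0, -1.0, -1.0])  # 5D metric (x^4 spacelike)
ell = 1.0

def translational_boost(gamma_t, x, x5):
    """6D coordinate transformation of Eq. (dstb0v); gamma_t = z_t/ell."""
    g2 = gamma_t @ eta5 @ gamma_t
    G = 1.0 / np.sqrt(1.0 - g2)
    gx = gamma_t @ eta5 @ x
    x5p = G * (x5 - gx)
    xp = x + gamma_t * ((G - 1.0) / g2 * gx - G * x5)
    return xp, x5p

gamma_t = np.array([0.25, 0.1, -0.05, 0.1, 0.15])
x  = np.array([0.6, -0.4, 1.1, 0.2, -0.8])
x5 = 1.3
xp, x5p = translational_boost(gamma_t, x, x5)

# the 6D metric is preserved: eta_AB x^A x^B - (x^5)^2 is invariant
s  = x  @ eta5 @ x  - x5**2
sp = xp @ eta5 @ xp - x5p**2
print(np.isclose(s, sp))   # True
```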
Any 6-vector and, in particular, ${X}^{\!\scriptscriptstyle\mathcal M}$ defined
in Eq.(\ref{X0}) will transform covariantly under a translation
boost as in Eq. (\ref{dstb0v}). However, the 5D vector $\vec{z}$
will transform like the velocity in Eq.(\ref{velocity}) or like the
momentum in Eq.(\ref{alpha}) as
\begin{equation} \label{ztrans}
\vec{\gamma^{\prime}} = \frac{G_{\!t}^{-1}}{1 - \vec{\gamma}\cdot \vec{\gamma_t}}
\left[ \vec{\gamma} + \vec{\gamma_t}
\left(\frac{(G_{\!t} -1)}{\gamma_{t}^{2}} \vec{\gamma}\cdot \vec{\gamma_t} - G_{\!t}\right)\right] \;.
\end{equation}
Further support for identifying the parameter of translational boost
$z^{\scriptscriptstyle A}$ as a coordinate vector can be obtained as follows. Let us note that a point
$x^{\scriptscriptstyle\mathcal M}$ on the six-dimensional manifold satisfying
\begin{equation} \label{ds5}
\eta_{\scriptscriptstyle\mathcal M\!\mathcal N} x^{\scriptscriptstyle\mathcal M} x^{\scriptscriptstyle\mathcal N} = \eta_{\scriptscriptstyle A\!B} x^{\scriptscriptstyle A} x^{\scriptscriptstyle B} - (x^{\scriptscriptstyle 5})^{2}
= - \ell^{2} \;,
\end{equation}
can be parametrized alternatively as (for $x^{\scriptscriptstyle 5} > 0$)
\begin{eqnarray}
x^{\scriptscriptstyle A} & = & \ell \omega^{\scriptscriptstyle A} \sinh \zeta\;, \nonumber\\
x^{\scriptscriptstyle 5} & = & \ell \cosh \zeta\;, \label{alternate}
\end{eqnarray}
where $\omega^{\scriptscriptstyle A}$ denotes a unit vector on the 5D manifold
satisfying $\eta_{\scriptscriptstyle A\!B}\omega^{\scriptscriptstyle A}\omega^{\scriptscriptstyle B} = 1$. It
is clear now that we can identify the components of
$X^{\!\scriptscriptstyle\mathcal M}$ in Eq.(\ref{X0}) as
\begin{equation}
G = \cosh \zeta \;,
\quad z^{\scriptscriptstyle A} = \ell \omega^{\scriptscriptstyle A} \tanh \zeta \;,
\quad \gamma^{\scriptscriptstyle A} = \frac{z^{\scriptscriptstyle A}}{\ell} = \omega^{\scriptscriptstyle A} \tanh \zeta \;,
\quad G^{2} (1 - \gamma^{2}) = 1 \;. \label{alternate1}
\end{equation}
This brings out the character of the (alternative) coordinate vector
$z^{\scriptscriptstyle A}$ as a parameter of boost. (This can be contrasted with the angular
representation of a Lorentz boost which has the form
$\gamma = \cosh \theta, |\vec{\beta}| = \tanh \theta, \gamma^{2} (1 - |\vec{\beta}|^{2}) =1$.
Note the symbol $\gamma$ is used in both cases to represent different
things.) Furthermore, with this identification, we recognize that the 6D vectors $x^{\scriptscriptstyle\mathcal M}$
and $X^{\!\scriptscriptstyle\mathcal M}$ can be related simply as
\begin{equation}
x^{\scriptscriptstyle\mathcal M} = \ell X^{\!\scriptscriptstyle\mathcal M}.
\end{equation}
In fact, with the identifications in Eqs.(\ref{alternate}) and
(\ref{alternate1}), we note that if we identify
\begin{equation}
\omega^{\scriptscriptstyle A} = \frac{x^{\scriptscriptstyle A}}{\sqrt{x^{2}}},
\end{equation}
where $x^{2} = \eta_{\scriptscriptstyle A\!B}x^{\scriptscriptstyle A}x^{\scriptscriptstyle B}$, we can write
\begin{equation}
z^{\scriptscriptstyle A} = \ell\ \frac{x^{\scriptscriptstyle A}}{\sqrt{x^{2}}}\ \tanh \zeta
= \ell\ \frac{x^{\scriptscriptstyle A}}{\ell \sinh \zeta}\ \tanh \zeta = \frac{x^{\scriptscriptstyle A}}{\cosh \zeta}
= \ell\ \frac{x^{\scriptscriptstyle A}}{x^{\scriptscriptstyle 5}} \;,
\end{equation}
which is, of course, the definition of Beltrami coordinates for the
de Sitter manifold dS$_{5}$ (on the Beltrami patch $x^{5}>0$). As
we have mentioned earlier in connection with Eq.(\ref{X1}), the
basic manifold of our theory is a 5D hypersurface dS$_{5}$ of the
6D manifold and can be parametrized by five independent coordinates.
The Beltrami coordinates (also known as gnomonic coordinates)
provide a useful coordinate system in which geodesics appear as
straight lines; we can therefore use $z^{\scriptscriptstyle A}, A=0,1,2,3,4$,
to parametrize our manifold. We note here that there have been some
studies on de Sitter special relativity \cite{dssr} (which is
Einstein special relativity formulated on a de Sitter, rather than a
Minkowski space-time) where Beltrami coordinates are used. Some of
the results obtained there may be used to shed more light on the
physics of quantum relativity. However, we want to emphasize
that the (special) quantum relativity we are discussing here is
{\em not} just a version of de Sitter special relativity. In
particular, as we have discussed, the momentum boost transformations
are expected to relate quantum frames of reference, and one should
be cautious in borrowing physics results from Ref.\cite{dssr}. In
fact, the symmetric way in which the $x^{\scriptscriptstyle 4}$ and
$x^{\scriptscriptstyle 5}$ coordinates are related to the 4D quantum noncommutative position
and momentum operators, respectively, gives a new perspective on
classical de Sitter physics.
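As a quick consistency check, the parametrization of Eq.(\ref{alternate}), the identifications of Eq.(\ref{alternate1}), and the Beltrami relation $z^{\scriptscriptstyle A} = \ell\, x^{\scriptscriptstyle A}/x^{\scriptscriptstyle 5}$ can be verified numerically. The sketch below is illustrative only; it assumes the signature convention $\eta_{\scriptscriptstyle A\!B}=\mathrm{diag}(+,-,-,-,-)$ for $A=0,\dots,4$ with $\eta_{\scriptscriptstyle 55}=-1$, and the numerical values are arbitrary.

```python
import numpy as np

# signature assumption: eta_AB = diag(+,-,-,-,-) on A = 0..4, eta_55 = -1
eta5 = np.diag([1.0, -1.0, -1.0, -1.0, -1.0])
ell = 2.0
zeta = 0.7

# a unit vector with eta_AB w^A w^B = 1 (here a boosted timelike direction)
w = np.array([np.cosh(0.3), np.sinh(0.3), 0.0, 0.0, 0.0])
assert np.isclose(w @ eta5 @ w, 1.0)

# the alternative parametrization of the hypersurface
xA = ell * w * np.sinh(zeta)
x5 = ell * np.cosh(zeta)
# constraint: eta_AB x^A x^B - (x^5)^2 = -ell^2
assert np.isclose(xA @ eta5 @ xA - x5**2, -ell**2)

# identifications: G = cosh(zeta), z^A = ell w^A tanh(zeta), G^2 (1 - gamma^2) = 1
G = np.cosh(zeta)
z = ell * w * np.tanh(zeta)
gamma = z / ell
assert np.isclose(G**2 * (1.0 - gamma @ eta5 @ gamma), 1.0)

# Beltrami relation: z^A = ell x^A / x^5
assert np.allclose(z, ell * xA / x5)
```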
Note that the translational boosts are reference frame
transformations that correspond to taking the coordinate origin to a
different location on the dS$_{5}$. The coordinate origin is always
an important part of the frame of reference. The origin is where an
observer measures locations of physical events from. Up to Einstein
relativity, translation of coordinate origin is quite trivial. It
does not change most of the physical quantities like velocity and
energy-momentum measured. However, the situation is a bit different
in the de Sitter geometry. Here, the coordinate origin can be
unambiguously represented by ${z}^{\scriptscriptstyle A}=0$, or location where the
observer measures quantities including location $z^{\scriptscriptstyle A}$ of
events from. The reference frame does not see itself, and must
conclude that its own location is right at the origin of the
coordinate system it measures event locations with. In terms of the
6-coordinate $X^{\!\scriptscriptstyle\mathcal M}$, the origin has a single
nonvanishing coordinate $X^{\scriptscriptstyle 5}=1$. Consider an event seen at a
location given by a nonvanishing ${z}^{\scriptscriptstyle A}_t\ne 0$, with
zero velocity and momentum for simplicity. Transforming to the
new reference frame characterized by this event means translating the
coordinate origin to that location, {\em i.e.} translating by
${z}^{\scriptscriptstyle A}_t$. One can check explicitly that the new coordinates
of the event, $X^{\prime\scriptscriptstyle\mathcal M}$ or $z^{\prime\scriptscriptstyle A}$, as
seen from itself, obtained from Eqs.(\ref{dstb0v}) and
(\ref{ztrans}), are indeed those of an origin. It also follows that
composing translations gives nontrivial relativistic results beyond
simple addition. In fact, comparing with the velocity and momentum
composition formula of the Lorentz and de Sitter momentum boosts,
respectively ({\em cf.} Eqs.(\ref{velocity},\ref{alpha},\ref{p})),
we have the location composition formula
\begin{equation}
\vec{z^{\prime}_{\scriptscriptstyle 1}} = \frac{G^{-1}}{1- \frac{\vec{z}\cdot \vec{z_{\scriptscriptstyle 1}}}{\ell^{2}}}
\left[ \vec{z_{\scriptscriptstyle 1}} + \vec{z} \left(\frac{(G - 1)}{z^{2}} \vec{z}\cdot \vec{z_{\scriptscriptstyle 1}} - G\right)\right]\;.
\end{equation}
The formula gives the new location 5-vector $\vec{z^{\prime}_{\scriptscriptstyle
1}}$ of an original $\vec{z_{\scriptscriptstyle 1}}$ boosted to a new frame
characterized by $\vec{z}$ $\left( G^{-2} = 1 - \frac{\eta_{\!\scriptscriptstyle
A\!B} z^{\!\scriptscriptstyle A}z^{\!\scriptscriptstyle B}}{\ell^{2}}\right)$. In particular,
$\vec{z^{\prime}_{\scriptscriptstyle 1}}$ vanishes for $\vec{z}=\vec{z_{\scriptscriptstyle 1}}$.
On the other hand, simple addition of the 6-vector
${X}^{\!\scriptscriptstyle\mathcal M}$'s does not preserve the de Sitter
constraint of Eq.(\ref{ds5}) characterizing the dS$_5$ hypersurface.
Likewise, conventional 6D translations are not admissible
symmetries.
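A small numerical sketch can confirm the composition formula above, in particular that $\vec{z^{\prime}_{\scriptscriptstyle 1}}$ vanishes for $\vec{z}=\vec{z_{\scriptscriptstyle 1}}$. All dot products are assumed to be taken with $\eta_{\scriptscriptstyle A\!B}=\mathrm{diag}(+,-,-,-,-)$, and the numbers are arbitrary illustrative values, not results from the paper.

```python
import numpy as np

eta5 = np.diag([1.0, -1.0, -1.0, -1.0, -1.0])

def boost_location(z, z1, ell=1.0):
    """New location 5-vector z1' of z1, boosted to the frame characterized
    by z; dot products taken with the Minkowski eta."""
    z2 = z @ eta5 @ z                       # z^2 = eta_AB z^A z^B
    G = 1.0 / np.sqrt(1.0 - z2 / ell**2)
    dot = z @ eta5 @ z1
    return (1.0 / G) / (1.0 - dot / ell**2) * (
        z1 + z * ((G - 1.0) / z2 * dot - G))

z = np.array([0.5, 0.1, 0.0, 0.0, 0.0])
# an event boosted to the frame centred on itself sits at the origin
assert np.allclose(boost_location(z, z), 0.0)

# simple vector subtraction does not reproduce the relativistic composition
z1 = np.array([0.2, 0.0, 0.1, 0.0, 0.0])
assert not np.allclose(boost_location(z, z1), z1 - z)
```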
The dS$_5$ hypersurface is obtained from a 6D manifold with a flat metric,
$\eta_{\scriptscriptstyle\mathcal M\!\mathcal N}$, when described in terms of the six coordinates
$x^{\scriptscriptstyle\mathcal M}$'s. When described in terms of the Beltrami coordinates $z^{\scriptscriptstyle A}$'s,
however, the 5D metric is nontrivial and has the form
\begin{equation} \label{g}
g_{\scriptscriptstyle \!A\!B}= G^2 \eta_{\scriptscriptstyle A\!B} +\frac{G^4}{{\ell}^2} \eta_{\scriptscriptstyle A\!C}
\eta_{\scriptscriptstyle B\!D} z^{\scriptscriptstyle C} z^{\scriptscriptstyle D} \;.
\end{equation}
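One can check numerically that this $g_{\scriptscriptstyle \!A\!B}$ is indeed the metric induced on the hypersurface by the flat 6D one, by comparing $g_{\scriptscriptstyle \!A\!B}\,dz^{\scriptscriptstyle A} dz^{\scriptscriptstyle B}$ with $\eta_{\scriptscriptstyle\mathcal M\!\mathcal N}\,dx^{\scriptscriptstyle\mathcal M} dx^{\scriptscriptstyle\mathcal N}$ for a small displacement. The finite-difference sketch below uses the embedding $x^{\scriptscriptstyle A}=G z^{\scriptscriptstyle A}$, $x^{\scriptscriptstyle 5}=G\ell$, and assumes $\eta_{\scriptscriptstyle A\!B}=\mathrm{diag}(+,-,-,-,-)$ with $\eta_{\scriptscriptstyle 55}=-1$:

```python
import numpy as np

eta5 = np.diag([1.0, -1.0, -1.0, -1.0, -1.0])
ell = 1.0

def embed(z):
    """Map Beltrami coordinates to the 6D ones: x^A = G z^A, x^5 = G ell."""
    G = 1.0 / np.sqrt(1.0 - (z @ eta5 @ z) / ell**2)
    return G * z, G * ell

def g_metric(z):
    """The Beltrami-coordinate metric g_AB of the text."""
    G2 = 1.0 / (1.0 - (z @ eta5 @ z) / ell**2)
    zl = eta5 @ z                           # index lowered with eta
    return G2 * eta5 + (G2**2 / ell**2) * np.outer(zl, zl)

z = np.array([0.30, 0.10, 0.05, 0.0, 0.0])
dz = 1e-6 * np.array([1.0, -0.4, 0.2, 0.0, 0.0])

xA0, x50 = embed(z)
xA1, x51 = embed(z + dz)
dxA, dx5 = xA1 - xA0, x51 - x50
ds2_flat = dxA @ eta5 @ dxA - dx5**2        # eta_MN dx^M dx^N
ds2_beltrami = dz @ g_metric(z) @ dz        # g_AB dz^A dz^B
assert np.isclose(ds2_flat, ds2_beltrami, rtol=1e-4)
```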
The generators of the isometry group {\small \boldmath $SO(1,5)$} can be expressed in
terms of these variables as follows. We first note that
$z_{\scriptscriptstyle A}=g_{\scriptscriptstyle \!A\!B}z^{\scriptscriptstyle B}=G^4 \, \eta_{\scriptscriptstyle A\!B} z^{\scriptscriptstyle B},$ (where we have
used the definition $G^{-2} = 1 - \frac{\eta_{\!\scriptscriptstyle A\!B} z^{\!\scriptscriptstyle A}z^{\!\scriptscriptstyle B}}{\ell^{2}}$)
leading to
\begin{equation}
x_{\!\scriptscriptstyle A}=\frac{1}{G^3}z_{\scriptscriptstyle A}\;, \qquad \mbox{and}\qquad
x_{\scriptscriptstyle 5} = -G\,\ell\;.
\end{equation}
Denoting $i\frac{\partial}{\partial z^{\scriptscriptstyle A}}$ by $q_{\scriptscriptstyle A}$, we have
\begin{eqnarray}
p_{\!\scriptscriptstyle A} &=& i\frac{\partial}{\partial x^{\!\scriptscriptstyle A}}
= \frac{i}{G} \frac{\partial}{\partial z^{\scriptscriptstyle A}}
=\frac{1}{G}q_{\scriptscriptstyle A} \;,
\nonumber \\
p_{\scriptscriptstyle 5} &=& i \frac{\partial}{\partial x^{\scriptscriptstyle 5}}
= \frac{\partial z^{\scriptscriptstyle A}}{\partial x^{\scriptscriptstyle 5}} \ i\frac{\partial}{\partial z^{\scriptscriptstyle A}}
= \frac{\partial z^{\scriptscriptstyle A}}{\partial x^{\scriptscriptstyle 5} }\ q_{\scriptscriptstyle A}
=-\frac{1}{G\,\ell} \, q_{\scriptscriptstyle A} z^{\scriptscriptstyle A}\;.
\end{eqnarray}
Introducing a Lorentzian 5-coordinate $Z_{\!\scriptscriptstyle A}^{\scriptscriptstyle (\mathcal L)}= G^{-4}\, z_{\scriptscriptstyle A}
=\eta_{\scriptscriptstyle A\!B} z^{\scriptscriptstyle B}$, we have
\begin{equation}
[ Z_{\!\scriptscriptstyle A}^{\scriptscriptstyle (\mathcal L)}, q_{\scriptscriptstyle B} ] =-i \, \eta_{\scriptscriptstyle A\!B} \;.
\end{equation}
A form of Lorentzian `5-momentum' as generators is given by \cite{Gur}
\begin{equation} \label{P}
P_{\!\scriptscriptstyle A}^{\scriptscriptstyle (\mathcal L)} = \frac{1}{\ell} J_{\!\scriptscriptstyle A5} \;.
\end{equation}
This is in agreement with the noncommutative momentum operator discussed
in Sec.IV above. We have here
\begin{equation}
P_{\!\scriptscriptstyle A}^{\scriptscriptstyle (\mathcal L)} = \frac{1}{\ell}
\left(x_{\!\scriptscriptstyle A}p_{\scriptscriptstyle 5}-x_{\scriptscriptstyle 5}p_{\!\scriptscriptstyle A}\right)
=q_{\scriptscriptstyle A} - Z_{\!\scriptscriptstyle A}^{\scriptscriptstyle (\mathcal L)}
\; \frac{1}{\ell^2} \left(\eta^{\scriptscriptstyle B\!C} Z_{\!\scriptscriptstyle B}^{\scriptscriptstyle (\mathcal L)}q_{\scriptscriptstyle C} \right)\;.
\end{equation}
The other ten generators are then given as
\begin{equation}
J_{\!\scriptscriptstyle A\!B}= Z_{\!\scriptscriptstyle A}^{\scriptscriptstyle (\mathcal L)} q_{\scriptscriptstyle B}
-Z_{\!\scriptscriptstyle B}^{\scriptscriptstyle (\mathcal L)} q_{\scriptscriptstyle A}
= Z_{\!\scriptscriptstyle A}^{\scriptscriptstyle (\mathcal L)} P_{\!\scriptscriptstyle B}^{\scriptscriptstyle (\mathcal L)}
-Z_{\!\scriptscriptstyle B}^{\scriptscriptstyle (\mathcal L)} P_{\!\scriptscriptstyle A}^{\scriptscriptstyle (\mathcal L)} \;.
\end{equation}
In fact, one can also write $J_{\!\scriptscriptstyle A5}$ formally as
$Z_{\!\scriptscriptstyle A}^{\scriptscriptstyle (\mathcal L)} P_{\!\scriptscriptstyle 5}^{\scriptscriptstyle (\mathcal L)}
-Z_{\!\scriptscriptstyle 5}^{\scriptscriptstyle (\mathcal L)} \, P_{\!\scriptscriptstyle A}^{\scriptscriptstyle (\mathcal L)}$ by taking
$P_{\!\scriptscriptstyle 5}^{\scriptscriptstyle (\mathcal L)}$ as zero (since $Z_{\scriptscriptstyle 5}^{\scriptscriptstyle (\mathcal L)} = -\ell$).
If one further writes an analogous relation
$P_{\!\scriptscriptstyle 5}^{\scriptscriptstyle (\mathcal L)}= q_{\scriptscriptstyle 5}- \frac{1}{\ell^2} Z_{\!\scriptscriptstyle 5}^{\scriptscriptstyle (\mathcal L)}
\left(\eta^{\scriptscriptstyle BC}Z_{\!\scriptscriptstyle B}^{\scriptscriptstyle (\mathcal L)}q_{\scriptscriptstyle C} \right)$, it would require that
$q_{\scriptscriptstyle 5}=-\frac{1}{\ell} \left(\eta^{\scriptscriptstyle BC}Z_{\!\scriptscriptstyle B}^{\scriptscriptstyle (\mathcal L)}q_{\scriptscriptstyle C} \right)$.
The latter is, of course, equivalent to $p_{\scriptscriptstyle 5}= \frac{1}{G}\, q_{\scriptscriptstyle 5}$. It is interesting to
note that $\left(\eta^{\scriptscriptstyle BC}Z_{\!\scriptscriptstyle B}^{\scriptscriptstyle (\mathcal L)}q_{\scriptscriptstyle C} \right)$
resembles the conformal symmetry (scale transformation) generator for the 5-geometry
with a Minkowski metric. A further interesting point is that the fifth component of the
`5-momentum' now gives the operator responsible for the new quantum Heisenberg
uncertainty relation as
\[
[\hat{X}_\mu, \hat{P}_{\nu}] = i \, \eta_{\mu\nu} \frac{1}{\kappa\,c\,\ell}
J_{\!\scriptscriptstyle 45} = - i \, \eta_{\mu\nu} \left(-\frac{1}{\kappa\,c}
\, P_{\!\scriptscriptstyle 4}^{\scriptscriptstyle (\mathcal L)}\right) \;,
\]
where $\hat{P}_{\nu}=P_\nu^{\scriptscriptstyle (\mathcal L)}$. This follows from
Eq.(\ref{P}) and the result of Sec.IV, {namely, from}
Eq.(\ref{Fop}) with $O_{\!\scriptscriptstyle 4}^{\prime}= J_{\!\scriptscriptstyle 45} = -\kappa
c \hat{F}$. The standard Heisenberg expression would be obtained for
$P_{\!\scriptscriptstyle 4}^{\scriptscriptstyle (\mathcal L)}$ taking the value
${-}{\kappa\,c}$. The latter is likely to be admissible as an
eigenvalue condition for the operator; and it invites comparison
with the energy-momentum constraint $p_{\scriptscriptstyle 4}={-}{\kappa\,c}$
before the introduction of the deformation with the translational
boosts. In case $P_{\!\scriptscriptstyle 4}^{\scriptscriptstyle (\mathcal L)}$ takes other
eigenvalues, it could be a generalization of the original
energy-momentum constraint. Hence, the spectrum of $P_{\!\scriptscriptstyle
4}^{\scriptscriptstyle (\mathcal L)}$ as an operator is of central importance.
\section{More on the Momentum in Quantum Relativity}
An interesting feature of the Beltrami coordinate formulation is
that the first five coordinates of the 6-geometry $x^{\scriptscriptstyle A}$ and
hence the coordinates $z^{\scriptscriptstyle A} \;(=x^{\scriptscriptstyle A} \frac{\ell}{x^{\scriptscriptstyle
5}})$ transform as components of coordinate vectors on a 5D space
with Minkowski geometry. Similarly, $Z_{\!\scriptscriptstyle A}^{\scriptscriptstyle (\mathcal
L)}$, $q_{\scriptscriptstyle A}$, and $P_{\scriptscriptstyle A}^{\scriptscriptstyle (\mathcal L)}$, can all be
considered Lorentzian 5-vectors. With the 5-coordinate description
of the $\mbox{dS}_5$ geometry, the Lorentzian 5-coordinate $Z_{\scriptscriptstyle
A}^{\scriptscriptstyle (\mathcal L)}$
and the 5-momentum $P_{\scriptscriptstyle A}^{\scriptscriptstyle (\mathcal L)}$ provide
a very nice representation of the relativity symmetry algebra.
Recall that we get to the relativity formulation through
deformations in two steps. The first deformation is introduced by
imposing the Planck scale as a constraint onto the Einstein
4-momentum (Eq.(\ref{Gamma})). This promotes the 4-momentum into a
5-vector $\pi^{\scriptscriptstyle A}=\Gamma \frac{p^{\scriptscriptstyle A}}{\kappa\,c}$ with
$p^{\scriptscriptstyle 4}=\kappa\,c$. In terms of the original 6-coordinate
$x^{\scriptscriptstyle\mathcal M}$ and the corresponding
$p_{\!\scriptscriptstyle\mathcal M}=i\,\partial_{\!\scriptscriptstyle\mathcal M}$, we also have the nice representation
of the algebra with identification of the noncommutative quantum
space-time position operators within the generators of the algebra
given by Eqs.(\ref{Xop}) and (\ref{Pop}). There, we have Lorentzian
4-coordinate and 4-momentum $\hat{X}^\mu$ and $\hat{P}_\mu$ with a
quite symmetric role, and an extra $J_{\scriptscriptstyle 45}$ characterizing the
quantum uncertainty relation. The rest of the generators in the
algebra are just the 4D Lorentz symmetry generators. The structure
seems to depict a 4D quantum space-time. It is also easy to
see that $\hat{P}_{\!\scriptscriptstyle 4}\,={P}_{\!\scriptscriptstyle 4}^{\scriptscriptstyle (\mathcal L)}$
together with the four $\hat{P}_\mu\, =P_{\mu}^{\scriptscriptstyle (\mathcal
L)}$'s transform as the 5-momentum of a 5D Lorentzian symmetry,
with now $\hat{X}^\mu$ forming part of the 5D angular momentum. The
translational boosts mix the new, {sixth momentum component}
$p^{\scriptscriptstyle 5}$ with the other five, making a consistent interpretation
of the 5-momentum constraint of Eq.(\ref{pi}) questionable. It is
then very reasonable to expect the momentum constraint to be taken
as one on $P_{\scriptscriptstyle A}^{\scriptscriptstyle (\mathcal L)}$, which even has a natural
extension to have a vanishing sixth component. Hence, we would like
to think about $P^{\scriptscriptstyle A}_{\!\scriptscriptstyle (\mathcal L)}=\eta^{\scriptscriptstyle A\!B}
P_{\scriptscriptstyle B}^{\scriptscriptstyle (\mathcal L)}$. However, in the usual framework
of dynamics, relating the operator form of the momentum in the
coordinate representation to the momentum variable as a (particle)
coordinate derivative requires the equation of motion. Let us take a
look at this issue from a different perspective.
We discuss below the momentum as a coordinate derivative and as the
conserved vector corresponding to the relativity symmetry. We start by writing
the ``classical angular momenta'' in terms of components of the
relevant Beltrami 5-vectors. All the generators of {\small \boldmath
$SO(1,5)$} are expected to correspond to conserved quantities in the
classical sense. We again start with $L^{\scriptscriptstyle\mathcal M \!\mathcal
N}=x^{\scriptscriptstyle\mathcal M} p^{\scriptscriptstyle\mathcal N} - x^{\scriptscriptstyle\mathcal N}
p^{\scriptscriptstyle\mathcal M}$ where the ``classical" momentum $p^{\scriptscriptstyle\mathcal
M}$ is written as $p^{\scriptscriptstyle\mathcal M}=m_{\scriptscriptstyle
\!\Lambda}\frac{dx^{\scriptscriptstyle\mathcal M}}{ds}$. Here, $m_{\scriptscriptstyle
\!\Lambda}^2$ is a mass-square-like parameter {\em not} to be taken
directly as the Einstein rest mass-square. It is likely to be some
generalization of the latter. In fact, an apparent natural choice of
the parameter $m_{\scriptscriptstyle \!\Lambda}^2$ is available from the
eigenvalue of the first Casimir operator for the {$SO(1,5)$}
symmetry \cite{Gur,dssr}. And the momentum as a coordinate
derivative has, of course, to be defined with respect to the
invariant line element $ds$. In terms of the Beltrami 5-coordinates,
we have \begin{equation} L^{\!\scriptscriptstyle A5} = - m_{\scriptscriptstyle \!\Lambda} G^2 \ell
\frac{dz^{\scriptscriptstyle A}}{ds} \;. \end{equation} The expression should correspond to
the conserved quantity for the five generators $J_{\!\scriptscriptstyle A5} =\ell
\hat{P}_{\!\scriptscriptstyle A} \; [\mbox{or~} \ell P_{\scriptscriptstyle A}^{\scriptscriptstyle (\mathcal
L)}]$. Hence, we expect the conserved momentum to be given by \begin{equation}
P^{\scriptscriptstyle A}_{\!\scriptscriptstyle (\mathcal L)}= m_{\scriptscriptstyle \!\Lambda} G^2
\frac{dz^{\scriptscriptstyle A}}{ds} \;, \end{equation} {\it i.e.,} $L^{\!\scriptscriptstyle A5} = -\ell \,
P^{\scriptscriptstyle A}_{\!\scriptscriptstyle (\mathcal L)}$. It is interesting to note that
$\frac{dP^{\!\scriptscriptstyle A}_{\!\scriptscriptstyle (\mathcal L)}}{ds} =0$ actually is
equivalent to the geodesic equation within $\mbox{dS}_5$
\cite{dssr}. The apparent ``natural" momentum candidate $q^{\scriptscriptstyle
A}=m_{\scriptscriptstyle \!\Lambda} \frac{dz^{\scriptscriptstyle A}}{ds}$ is related to $P^{\scriptscriptstyle
A}_{\!\scriptscriptstyle (\mathcal L)}$ in a way similar to the relation between
the operator forms of $P_{\!\scriptscriptstyle A}^{\scriptscriptstyle (\mathcal L)}$ and
$q_{\scriptscriptstyle A}$. The coordinate transformation gives $q^{\scriptscriptstyle A}=
\frac{1}{G} p^{\scriptscriptstyle A}$, and hence \begin{equation} P^{\scriptscriptstyle A}_{\!\scriptscriptstyle (\mathcal
L)}= G^2 q^{\scriptscriptstyle A} - m_{\scriptscriptstyle \!\Lambda} z^{\scriptscriptstyle A} G \frac{dG}{ds}
\;. \end{equation} Also we have \begin{equation} L^{\!\scriptscriptstyle A\!B}=z^{\scriptscriptstyle A} G^2 q^{\scriptscriptstyle B}
-z^{\scriptscriptstyle B} G^2 q^{\scriptscriptstyle A} = z^{\scriptscriptstyle A} P^{\scriptscriptstyle B}_{\!\scriptscriptstyle
(\mathcal L)} -z^{\scriptscriptstyle B} P^{\scriptscriptstyle A}_{\!\scriptscriptstyle (\mathcal L)}\;, \end{equation}
with the same form applicable to $L^{\!\scriptscriptstyle A5}$ by taking $P^{\scriptscriptstyle
5}_{\!\scriptscriptstyle (\mathcal L)}=0$ or equivalently $q^{\scriptscriptstyle 5} = m_{\scriptscriptstyle
\!\Lambda} \frac{\ell}{G} \frac{dG}{ds}= \frac{1}{G} p^{\scriptscriptstyle 5}$.
The latter result may be written in a form similar to the operator
$q_{\scriptscriptstyle 5}$ given in the previous section, namely $q^{\scriptscriptstyle 5} =
\frac{G^2}{\ell} \eta_{\scriptscriptstyle A\!B} z^{\scriptscriptstyle A} q^{\scriptscriptstyle B}$. Note that
$g_{\scriptscriptstyle \!A\!B} P^{\scriptscriptstyle A}_{\!\scriptscriptstyle (\mathcal L)} P^{\scriptscriptstyle B}_{\!\scriptscriptstyle
(\mathcal L)}= m_{\scriptscriptstyle \!\Lambda}^2 G^4$.
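Since the Beltrami coordinates map geodesics to straight lines, the conservation $\frac{dP^{\!\scriptscriptstyle A}_{\!\scriptscriptstyle (\mathcal L)}}{ds}=0$ can be illustrated numerically: along a straight path $z^{\scriptscriptstyle A}(\sigma)=z_{\scriptscriptstyle 0}^{\scriptscriptstyle A}+\sigma v^{\scriptscriptstyle A}$, with $ds$ computed from the metric $g_{\scriptscriptstyle \!A\!B}$ of Eq.(\ref{g}), the 5-vector $P^{\scriptscriptstyle A}_{\!\scriptscriptstyle (\mathcal L)}/m_{\scriptscriptstyle \!\Lambda}= G^{2}\,dz^{\scriptscriptstyle A}/ds$ comes out constant. The sketch below assumes the signature $\eta_{\scriptscriptstyle A\!B}=\mathrm{diag}(+,-,-,-,-)$ and uses arbitrary illustrative numbers:

```python
import numpy as np

eta5 = np.diag([1.0, -1.0, -1.0, -1.0, -1.0])
ell = 1.0

def g_metric(z):
    """Beltrami-coordinate metric g_AB."""
    G2 = 1.0 / (1.0 - (z @ eta5 @ z) / ell**2)
    zl = eta5 @ z
    return G2 * eta5 + (G2**2 / ell**2) * np.outer(zl, zl)

def P_over_m(z, v):
    """P^A / m_Lambda = G^2 dz^A/ds along the straight path z0 + sigma v,
    using dsigma/ds = 1 / sqrt(g_AB v^A v^B)."""
    G2 = 1.0 / (1.0 - (z @ eta5 @ z) / ell**2)
    return G2 * v / np.sqrt(v @ g_metric(z) @ v)

z0 = np.array([0.10, 0.05, 0.0, 0.0, 0.0])
v = np.array([1.0, 0.2, 0.0, 0.0, 0.0])      # a timelike direction

# P^A is the same at every point of the straight line: geodesic conservation
Ps = [P_over_m(z0 + s * v, v) for s in np.linspace(0.0, 0.3, 7)]
assert np.allclose(Ps, Ps[0])
```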
The above definition of ``classical" momentum $p^{\scriptscriptstyle\mathcal M}$
of course goes in line with the Newtonian/Einstein starting point of
mass times velocity. However, as discussed in some detail in
Sec.III, we have to introduce the new nonquantum-relativistic
Einstein momentum defined as $\frac{dx^\mu}{d\sigma}= \kappa\,c
\frac{dx^\mu}{dx^{\scriptscriptstyle 4}}$ instead of the conventional Einstein
proper time derivative $m \frac{dx^\mu}{d\tau}$. Taking $\kappa\,c$
as the natural momentum unit, we have from Eq.(\ref{pi}) the
momentum $\pi^{\scriptscriptstyle A}$ as essentially nothing more than the
derivative $\frac{dx^{\scriptscriptstyle A}}{ds}$. Taking this to the 6-geometry,
we introduce the natural definition $\pi^{\scriptscriptstyle\mathcal
M}=i\,\frac{dx^{\scriptscriptstyle\mathcal M}}{ds}$. The reason for introducing
the $i$ is the following. The coordinate $x^{\scriptscriptstyle 4}$ has the
opposite metric signature relative to $\tau$ or $x^{\scriptscriptstyle 0}$. Only
with the $i$ can we have the result $\eta_{\scriptscriptstyle\mathcal M\!\mathcal
N} \pi^{\scriptscriptstyle\mathcal M} \pi^{\scriptscriptstyle\mathcal N}=-1$. Now, we can go
along the argument as we have done for $p^{\scriptscriptstyle\mathcal M}$. Write
the conserved quantities as \begin{equation} \frac{1}{\kappa\,c} J^{\scriptscriptstyle\mathcal
M\!\mathcal N}=x^{\scriptscriptstyle\mathcal M} \pi^{\scriptscriptstyle\mathcal N} -
x^{\scriptscriptstyle\mathcal N} \pi^{\scriptscriptstyle\mathcal M} =i\,\left(z^{\scriptscriptstyle\mathcal M}
\Pi^{\scriptscriptstyle\mathcal N}_{\scriptscriptstyle (\mathcal L)} - z^{\scriptscriptstyle\mathcal N}
\Pi^{\scriptscriptstyle\mathcal M}_{\scriptscriptstyle (\mathcal L)}\right) \;, \end{equation} where \begin{equation}
\Pi^{\scriptscriptstyle A}_{\scriptscriptstyle (\mathcal L)}= - \frac{1}{\kappa\,c\,\ell}
J^{\scriptscriptstyle{A}5}= i\, G^2 \frac{dz^{\scriptscriptstyle A}}{ds}, \qquad \mbox{and}
\qquad\ \Pi^{\scriptscriptstyle 5}_{\scriptscriptstyle (\mathcal L)}=0 \;. \end{equation} Note that we have
put in the factor ${\kappa\,c}$ to get the unit right. Obviously,
the geodesic equation can also be considered as $\frac{d\Pi^{\!\scriptscriptstyle
A}_{\scriptscriptstyle (\mathcal L)}}{ds} =0$, as $\Pi^{\scriptscriptstyle A}_{\scriptscriptstyle (\mathcal
L)}$ differs from $P^{\scriptscriptstyle A}_{\!\scriptscriptstyle (\mathcal L)}$ only by a
constant factor. The appearance of the $i$ in the above may be taken
as an illustration of the intrinsic quantum nature of the
formulation. In the place of the energy-momentum constraint of
Eq.(\ref{pi2}), we have instead \begin{equation} g_{\scriptscriptstyle \!A\!B} \Pi^{\scriptscriptstyle
A}_{\scriptscriptstyle (\mathcal L)} \Pi^{\scriptscriptstyle B}_{\scriptscriptstyle (\mathcal L)}= -G^4 \;,
\end{equation} which reduces to $-1$ at the coordinate origin $z^{\scriptscriptstyle A}=0$, where $G=1$.
\section{Summary}
In this paper, we have proposed a simple linear realization of the
{\small \boldmath $SO(1,5)$} symmetry as the Lie-algebraic
description of quantum relativity. The relativity follows from
successive deformations of Einstein special relativity through the
introduction of invariant bounds on energy-momentum and on extended
geometric (``space-time'') interval. The invariants are related
respectively to the Planck scale and the cosmological constant. We
have discussed the logic of our formulation, and plausible physical
interpretations that we consider to be naturally suggested by the
latter. The linear realization has a six-dimensional geometric
description, with the true physics restricted to a dS$_5$
hypersurface embedding the standard four-dimensional Minkowski
space-time. The relativity algebra may be taken as the phase-space
symmetry of the quantum (noncommutative) four-dimensional space-time
with a natural Minkowski limit. We focus mostly on the five- or
six-dimensional geometric description with quite unconventional
coordinate(s) (as we have argued) beyond the conventional space-time
ones. There remain several open questions such as a new definition
of energy-momentum in the nonquantum-relativistic limit. Our
analysis aims at taking a first step in an exploration that may
complement the previous approaches on the subject matter. It
certainly raises some interesting questions that we hope to return
to in future publications.
\acknowledgements Otto Kong would like to thank C.-M. Chen and F.-L.
Lin for helpful discussions, and members at NCU as well as at
Institute of Physics, Academia Sinica for comments and questions.
This work is partially supported by the research Grant No.
94-2112-M-008-009 from the NSC of Taiwan as well as by the US DOE
Grant No. DE-FG 02-91ER40685.
Core77 is proud to announce "Sustainable Refrainables," our second Design Arena Community Challenge in association with the San Francisco chapter of AIGA to support their fourth biennial Compostmodern conference.
"Sustainable Refrainables" is a poster design competition celebrating words of persuasion. We're inviting designers to share those mantric phrases they find most powerful in communicating positive action. Maybe the phrase is something as simple as "I never use the word 'sustainability.'" or "The first rule is listen. The second rule is to ignore what you heard and do it better." or "There is no silver bullet, just silver buckshot."
Whatever your magic phrase, design it up in poster form, upload it to the competition site, and comment on your favorites. We're looking for your most graphic, persuasive quotables!
There are five prizes to be won:
Jury Prize Winner: $500 and a copy of Adobe's Creative Suite 5 Master Collection.
Popular Vote Winner: $500 and a copy of Adobe's Creative Suite 5 Master Collection.
Three Runners Up: each receives a copy of Adobe's Creative Suite 5 Master Collection.
All five winning entries will be printed and displayed in transit shelters for one week prior to and during San Francisco Design Week, June 13-19, 2011. Each winner will also receive a printed copy of their poster design.
We are grateful to Adobe for making this competition possible and to Clear Channel for providing transit shelter space for this competition.
__author__ = 'Ulric Qin'

import logging
import MySQLdb
from frame import config


def connect_db(cfg):
    try:
        conn = MySQLdb.connect(
            host=cfg.DB_HOST,
            port=cfg.DB_PORT,
            user=cfg.DB_USER,
            passwd=cfg.DB_PASS,
            db=cfg.DB_NAME,
            use_unicode=True,
            charset="utf8")
        return conn
    except Exception as e:
        logging.getLogger().critical('connect db: %s' % e)
        return None


class DB(object):
    def __init__(self, cfg):
        self.config = cfg
        self.conn = None

    def get_conn(self):
        # lazily connect on first use (and after a dropped connection)
        if self.conn is None:
            self.conn = connect_db(self.config)
        return self.conn

    def execute(self, *a, **kw):
        cursor = kw.pop('cursor', None)
        try:
            cursor = cursor or self.get_conn().cursor()
            cursor.execute(*a, **kw)
        except (AttributeError, MySQLdb.OperationalError):
            # connection lost: discard it, reconnect and retry once
            self.conn and self.conn.close()
            self.conn = None
            cursor = self.get_conn().cursor()
            cursor.execute(*a, **kw)
        return cursor

    # insert one record in a transaction
    # return last id
    def insert(self, *a, **kw):
        cursor = None
        try:
            cursor = self.execute(*a, **kw)
            row_id = cursor.lastrowid
            self.commit()
            return row_id
        except MySQLdb.IntegrityError:
            self.rollback()
        finally:
            cursor and cursor.close()

    # update in a transaction
    # return affected row count
    def update(self, *a, **kw):
        cursor = None
        try:
            cursor = self.execute(*a, **kw)
            self.commit()
            row_count = cursor.rowcount
            return row_count
        except MySQLdb.IntegrityError:
            self.rollback()
        finally:
            cursor and cursor.close()

    def query_all(self, *a, **kw):
        cursor = None
        try:
            cursor = self.execute(*a, **kw)
            return cursor.fetchall()
        finally:
            cursor and cursor.close()

    def query_one(self, *a, **kw):
        rows = self.query_all(*a, **kw)
        if rows:
            return rows[0]
        else:
            return None

    def query_column(self, *a, **kw):
        rows = self.query_all(*a, **kw)
        if rows:
            return [row[0] for row in rows]
        else:
            return []

    def commit(self):
        if self.conn:
            try:
                self.conn.commit()
            except MySQLdb.OperationalError:
                self.conn = None

    def rollback(self):
        if self.conn:
            try:
                self.conn.rollback()
            except MySQLdb.OperationalError:
                self.conn = None


db = DB(config)
### Out of 20 boys, 6 are each of 1 m 15 cm height, 8 are of 1 m 10 cm and rest of 1 m 12 cm. The average height of all of them is:

A. 1 m 12.1 cm B. 1 m 21.1 cm C. 1 m 21 cm D. 1 m 12 cm

Answer: Option A

### Solution (By Apex Team)

$\begin{array}{l}\text{According to the question,}\\ \text{Height of 6 boys} = 6 \times 1\text{ m }15\text{ cm} = 6\text{ m }90\text{ cm}\\ \text{Height of 8 boys} = 8 \times 1\text{ m }10\text{ cm} = 8\text{ m }80\text{ cm}\\ \text{Height of 6 boys} = 6 \times 1\text{ m }12\text{ cm} = 6\text{ m }72\text{ cm}\\ \text{Total height of 20 boys} = 22\text{ m }42\text{ cm}\\ \text{Average} = \frac{22\text{ m }42\text{ cm}}{20} = 1\text{ m }12.1\text{ cm}\end{array}$
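The averaging above is easy to verify with a couple of lines of Python (a quick sanity check, not part of the original solution; heights converted to centimetres):

```python
# 6 boys at 115 cm, 8 at 110 cm, and the remaining 6 at 112 cm
total_cm = 6 * 115 + 8 * 110 + 6 * 112
average_cm = total_cm / 20
print(average_cm)   # 112.1, i.e. 1 m 12.1 cm -- option A
```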
const Koa = require('koa');
const redis = require('redis');
const RedisStore = require('koa-redis');
const session = require('koa-generic-session');
const flash = require('koa-connect-flash');
const convert = require('koa-convert');
const Router = require('koa-router');
const koa404Handler = require('koa-404-handler');
const errorHandler = require('..');
// initialize our app
const app = new Koa();
// define keys used for signing cookies
app.keys = ['foo', 'bar'];
// initialize redis store
const redisClient = redis.createClient();
redisClient.on('connect', () => app.emit('log', 'info', 'redis connected'));
redisClient.on('error', err => app.emit('error', err));
// define our storage
const redisStore = new RedisStore({
  client: redisClient
});
// add sessions to our app
app.use(
  convert(
    session({
      store: redisStore
    })
  )
);
// add support for flash messages (e.g. `req.flash('error', 'Oops!')`)
app.use(convert(flash()));
// override koa's undocumented error handler
// eslint-disable-next-line unicorn/prefer-add-event-listener
app.context.onerror = errorHandler;
// use koa-404-handler
app.use(koa404Handler);
// set up some routes
const router = new Router();
// throw an error anywhere you want!
router.get('/404', ctx => ctx.throw(404));
router.get('/500', ctx => ctx.throw(500));
// initialize routes on the app
app.use(router.routes());
// start the server
app.listen(3000);
console.log('listening on port 3000');
\section{introduction}
Since the discovery of the so-called J-band \cite{Jelley,Scheibe},
an unusually sharp absorption band characteristic of the aggregation
of the classical sensitizing dye 1,1'-diethyl-2,2'-cyanine chloride
(pseudoisocyanine, PIC), a large amount of experimental and theoretical
work has addressed the investigation of molecular aggregates and their
red-shifted J-bands. At lower concentration, blue-shifted H-bands were
observed, which were attributed to molecular dimers, the smallest possible
aggregates.
to the fact that the PIC molecule is a cation and therefore the Coulombic
interactions as well as dielectric shielding have to be taken into
account carefully. For the formation of larger aggregates the counterions
are important whereas this seems not the case for the smaller H-aggregates
\cite{Neumann}. Therefore we started the simulation of PIC aggregates
by a detailed investigation of the dimer. From the analysis of the
experimental spectrum in water \cite{Hallermeier} it was deduced
that both excitonic components contribute with an intensity ratio
of 2:1. This cannot be explained \cite{Bird} by dimer models \cite{Hallermeier,Graves}
in which the dipole moments are almost parallel or antiparallel, as
is the case for the common brickwork or ladder models found
in the literature for the J-aggregate \cite{Nolte,Scheibe2}. Another
focus of our investigations concerns the contribution of local Coulombic
interactions to the inhomogeneous broadening of the site energies
and its importance in comparison to intramolecular vibrations.
\section{Methods}
For the classical MD simulation we used the model of rigid rotors,
which can easily be combined with quantum calculations to obtain electronic
excitations and coupling matrix elements \cite{mycitation}. Since
the positions of the ethyl groups are also fixed, we have to distinguish
not only two stereoisomers but also a fully C2-symmetric form and
another form in which the ethyl groups break this symmetry. A possible
interconversion between these conformations was not taken into account.
We simulated a cube containing a pair of (positively charged) PIC
molecules and 2150 TIP5 \cite{TIP} water molecules. We did not use
periodic boundary conditions, to avoid artefacts from the Coulombic
interaction with the mirror images. Instead, reflecting boundaries
kept the molecules from escaping the box by reversing the normal velocity
component whenever the center of mass of one of the molecules encountered
the boundary. The boundary distance of 36\,\AA{} was adjusted to
reproduce the experimental density of water at room temperature. The
equations of motion were solved using an implicit quaternion method
\cite{Quat} for the rotations and a leapfrog method for the translations.
The timestep was 1\,fs.
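The propagation scheme described above (leapfrog for the translations, with reflecting walls that reverse the normal velocity component) can be sketched as follows. This is our own illustrative one-dimensional toy, not the production code; all names are our own.

```python
import math

def leapfrog_reflect(x, v, force, dt, box, mass=1.0, steps=1000):
    """Leapfrog integration of a single 1-D degree of freedom.

    Whenever the coordinate reaches a reflecting wall of the box
    [-box/2, box/2], the normal velocity component is reversed,
    mimicking the reflecting boundaries described in the text.
    """
    v += 0.5 * dt * force(x) / mass          # half-step kick: v(t + dt/2)
    for _ in range(steps):
        x += dt * v                          # drift: x(t + dt)
        if abs(x) > box / 2.0:               # reflecting wall reached:
            x = math.copysign(box / 2.0, x)  # place particle on the wall
            v = -v                           # reverse the normal velocity
        v += dt * force(x) / mass            # full-step kick
    return x, v
```

In the simulation described in the text the same idea would be applied per Cartesian component of each molecule's center of mass, combined with the implicit quaternion update for the rotations.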
In our simulation we neglected electrostatic interactions with solvent
outside the cube. We calculated the missing contribution to the solvation
energy from a simple PCM model\cite{PCM}. The simulated box was put
into a cubic cavity in a dielectric continuum and the contribution
to the solvation energy was calculated from the interaction between
the charges within the box and the induced surface charges. It gave
13\% of the total solvation energy. This value stayed rather constant
along the trajectory. Therefore we assume that the essential changes
of the interaction with solvent molecules in the immediate surroundings
are sufficiently taken into account.
The force field was designed to reproduce the local electrostatic
interactions properly, which is especially important for the large
PIC molecule with its extended $\pi$-electron system. It is
based on a simplified version of the effective fragment model \cite{EFP,H2O EFP,H2O EFP2}.
The charge distribution is approximated by distributed multipoles
which were calculated with GAMESS on the basis of TZV/HF wavefunctions\cite{Gamess}.
For the simulation only point charges $q_{i}$ and dipoles $\vec{p}_{i}$
were used which are centered at the positions of the nuclei and the
bond centers. The Coulombic interaction energy is
\begin{equation}
V_{ij}^{Coul}=\frac{q_{i}q_{j}}{4\pi\epsilon_{0}R_{ij}}+\frac{\vec{R}_{ij}(q_{i}\vec{p_{j}}-q_{j}\vec{p}_{i})}{4\pi\epsilon_{0}R_{ij}^{3}}+\frac{R_{ij}^{2}\vec{p_{i}}\vec{p}_{j}-3(\vec{R}_{ij}\vec{p}_{i})(\vec{R}_{ij}\vec{p}_{j})}{4\pi\epsilon_{0}R_{ij}^{5}}\end{equation}
The values of point charges and dipoles are given for the symmetry
unique atoms in Table$\,$\ref{table pic}.
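The Coulombic interaction energy above can be evaluated directly for a pair of expansion sites. The following is our own minimal sketch, working in units with $4\pi\epsilon_{0}=1$ and assuming that $\vec{R}_{ij}$ points from site $i$ to site $j$ (a sign convention not fixed by the text).

```python
def coulomb_pair(q_i, p_i, r_i, q_j, p_j, r_j):
    """Charge-charge, charge-dipole and dipole-dipole interaction of two
    expansion sites carrying a point charge q and a point dipole p
    (3-vectors), in units where 4*pi*eps0 = 1.  The vector R_ij is
    assumed to point from site i to site j."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    R = [b - a for a, b in zip(r_i, r_j)]      # R_ij = r_j - r_i (assumption)
    R2 = dot(R, R)
    R1 = R2 ** 0.5
    charge_charge = q_i * q_j / R1
    charge_dipole = dot(R, [q_i * b - q_j * a
                            for a, b in zip(p_i, p_j)]) / R1**3
    dipole_dipole = (R2 * dot(p_i, p_j)
                     - 3.0 * dot(R, p_i) * dot(R, p_j)) / R1**5
    return charge_charge + charge_dipole + dipole_dipole
```

For two parallel unit dipoles perpendicular to their separation the dipole-dipole term reduces to $p_1 p_2 / R^3$, a standard check of the expression.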
The electronic spectra were calculated on the ZINDO/CI-S level\cite{Zerner}
including the point monopoles and dipoles of all water molecules.
The two lowest excited states of the dimer are to a large extent linear
combinations of the lowest monomer excitations with only very small
admixture of higher monomer excitations and of charge resonance states.
Therefore we use a simple 2-state model to analyze the delocalized
dimer states in terms of the local excitations $A*\mbox{ and }B*$.
The interaction matrix is \begin{equation}
\left(\begin{array}{cc}
\overline{E}-\Delta/2 & V\\
V & \overline{E}+\Delta/2\end{array}\right)\end{equation}
with the average monomer transition energy $\overline{E}=(E_{A*}+E_{B*})/2$,
their splitting $\Delta=E_{B*}-E_{A*}$ and the excitonic coupling
$V$. Its eigenvectors are the two delocalized dimer excitations
which are written with mixing coefficients $\cos\gamma,\sin\gamma$
as\begin{equation}
|1>=\cos\gamma|A*>+\sin\gamma|B*>\quad|2>=-\sin\gamma|A*>+\cos\gamma|B*>\end{equation}
The transition dipoles of these two states
\begin{equation}
\vec{\mu}_{1}=\cos\gamma\vec{\mu}_{A*}+\sin\gamma\vec{\mu}_{B*}\quad\vec{\mu}_{2}=\sin\gamma\vec{\mu}_{A*}-\cos\gamma\vec{\mu}_{B*}\label{tdip}\end{equation}
are linear combinations of the transition dipoles $\vec{\mu}_{A*,B*}$
of the two monomers which are assumed to be independent from the excitonic
interaction. Therefore the mixing angle $\gamma$ can be determined
from a least square fit of the two dimer transition dipoles to (\ref{tdip}).
Then the elements of the interaction matrix are calculated from the
transition energies of the two dimer excitations as
\begin{equation}
\overline{E}=\frac{E_{1}+E_{2}}{2}\end{equation}
\begin{equation}
\Delta=(\cos(\gamma)^{2}-\sin(\gamma)^{2})(E_{2}-E_{1})\end{equation}
\begin{equation}
V=\cos(\gamma)\sin(\gamma)(E_{2}-E_{1})\end{equation}
We want to emphasize that this analysis is based on the delocalized
dimer orbitals. It does not involve any kind of multipole expansion,
especially the calculated excitonic coupling is not of the dipole-dipole
type which would be quite questionable at such short intermolecular
distances. To check the quality of the ZINDO method, we compared the
results for a selected sandwich dimer configuration with a much more
elaborate 6-31G** HF/CI calculation. The calculated transition
dipoles of the two lowest singlet excitations were very similar (3
and 16 Debye from ZINDO, 2 and 14 Debye from HF/CI), while the excitonic
splitting was somewhat larger for the HF/CI method (0.54\,eV as compared
to 0.37\,eV for ZINDO). Both methods placed the lowest charge resonance
states at about 0.65\,eV above the upper excitonic band.
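The two-state analysis above can be sketched in the forward direction as follows. This is our own illustrative code; the sign of the mixing angle $\gamma$, and hence the sign of the coupling recovered from it, depends on the eigenvector phase convention, which the fit cannot fix.

```python
import math

def dimer_analysis(E_bar, Delta, V, mu_A, mu_B):
    """Two-state excitonic model of the dimer.

    Diagonalizes the 2x2 interaction matrix with diagonal elements
    E_bar -/+ Delta/2 and off-diagonal coupling V, and builds the dimer
    transition dipoles from the monomer transition dipoles mu_A, mu_B
    (3-vectors), following the mixing-angle parametrization in the text.
    """
    R = math.hypot(Delta / 2.0, V)       # half the excitonic splitting
    E1, E2 = E_bar - R, E_bar + R        # the two dimer excitation energies
    # eigenvector of the lower state: (cos g, sin g) ~ (V, Delta/2 - R)
    gamma = math.atan2(Delta / 2.0 - R, V)
    c, s = math.cos(gamma), math.sin(gamma)
    mu1 = [c * a + s * b for a, b in zip(mu_A, mu_B)]   # state |1>
    mu2 = [s * a - c * b for a, b in zip(mu_A, mu_B)]   # state |2>
    return (E1, E2), gamma, (mu1, mu2)
```

Note that the rotation preserves $|\vec{\mu}_1|^2+|\vec{\mu}_2|^2$, which provides a convenient consistency check on the fitted mixing angle.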
\section{Results}
We determined the vibronic coupling parameters for the PIC monomer
as described in our earlier paper \cite{book}. Application of ab
initio methods \cite{Gamess} improved the quality of the results
so that a direct comparison with the profile of the absorption spectrum
becomes feasible. The normal modes were calculated on the 6-31G/MP2
level and the coupling to the optical transition on the CI/SD level.
Using these couplings and the displaced harmonic oscillator model,
the lineshape was calculated as the Fourier transform of the time correlation
function. In the low frequency region the largest vibronic couplings
are found for normal modes at 40 and 46 $cm^{-1}$ which contribute
significantly to the broadening of the absorption band. Another important
contribution from modes around 1500 $cm^{-1}$ which are also known
from Raman spectra is the origin of the observed vibrational progression.
Further modes between 50 and 1400 $cm^{-1}$ form a rather dense continuum
of coupling states. The simulated spectrum (Fig.~\ref{Picabso}) closely
resembles the experimental absorption profile. The width of the simulated
bands is somewhat too small and the intensity of the prominent stretching
modes is slightly overestimated. This could possibly be improved further
by taking frequency changes and mode coupling into account.
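The vibrational progression produced by the displaced harmonic oscillator model can be illustrated for a single mode at zero temperature, where the line intensities follow a Poisson distribution in the Huang-Rhys factor $S$. This is our own simplified sketch; the calculation in the text uses the full multi-mode time-correlation function instead.

```python
import math

def franck_condon_progression(S, n_max=20):
    """0-0, 0-1, ... line intensities of a single displaced harmonic
    mode at T = 0: I_n = exp(-S) * S**n / n!  (a Poisson distribution
    in the Huang-Rhys factor S).  Successive lines are spaced by the
    mode frequency, producing the vibrational progression."""
    return [math.exp(-S) * S**n / math.factorial(n)
            for n in range(n_max + 1)]
```

The intensities sum to one, and the ratio of the first two lines directly gives $S$, which is one simple way such couplings are read off measured progressions.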
The MD simulations were started from several plausible dimer structures.
First the PIC molecules were kept fixed and the solvent was equilibrated
for 50 psec. Then the restraints were removed and the system was simulated
for another 50 psec. The distance and orientation of the two PIC molecules
were analyzed to identify periods of relative stability.
Starting from a sandwich structure, a rather stable structure evolved
within 10 ps (Fig.~3a). It is not symmetric, but there is still almost
no splitting of the calculated site energies (Table 2), which show
rapid fluctuations with components down to 20 fs. Such fast fluctuations
are well known from experimental and theoretical work on the dynamics
of dephasing and solvation in molecular liquids \cite{Nibbering}.
They have been attributed to the inertial motion of the solvent molecules,
which show up as the Gaussian shaped rapid initial decay of the solvation
time correlation function\cite{Maroncelli,Carter}. In our simulations
the orientational time correlation function of the water molecules
can be described by a Gaussian with a correlation time of 60 fs at
short times. The time correlation of the electrostatic potential decays
faster. The initial Gaussian decay with a time constant of 20 fs is very similar to that
of the correlation function of the transition energies. Probably collective
librational motions contribute more efficiently to the electronic
dephasing than the motion of the individual molecules \cite{Nibbering,Mukamel}.
The center of the site energies is shifted by 0.06 eV to lower energies
as compared to a monomer in vacuum and the variance of the site energies
is comparable to that of a monomer. The two transition dipoles are
almost parallel and the lower transition carries only 2\% of the total
oscillator strength. The excitonic coupling shows fluctuations similar
to the site energies. Its variance, however, amounts to only 7\% of
the average value of 0.25\,eV. Hallermeier et al.~\cite{Hallermeier}
deduced a smaller excitonic coupling of 0.078\,eV. Most probably their
dimer spectrum has some admixture of the monomer spectrum. We assume
that the absorption maximum at 520nm is due to monomers and the maximum
of the real dimer spectrum is at 480nm. This would be consistent with
an H-aggregate with an excitonic coupling of 0.2eV.
We also studied brickwork structures as a model for the J-aggregates
with a red-shifted absorption. We found a relatively stable structure,
which is shown in Fig.~3b. The structural fluctuations are much larger
than for the sandwich model, but the fluctuations of the site energies
and of the excitonic coupling are even somewhat smaller. The coupling of
$-0.064$\,eV is close to the value of $-0.078$\,eV which was used to simulate
the vibronic spectrum of the J-aggregates \cite{book}.
\begin{acknowledgments}
This work has been supported by the Deutsche Forschungsgemeinschaft
(SFB 533).
\end{acknowledgments}
Q: GDM locale problems I have two problems with GDM on Ubuntu 10.04.
The first is with locales.
In my system I have defined:
$ cat /etc/environment
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"
LANG="es_ES.UTF-8"
LANGUAGE="es_ES:es:en_US:en"
$ cat /etc/default/locale
LANG="es_ES.UTF-8"
LANGUAGE="es_ES:es:en_US:en"
$ cat /var/lib/locales/supported.d/local
es_ES UTF-8
es_ES.UTF-8 UTF-8
en_US UTF-8
en_US.UTF-8 UTF-8
But when I enter in gnome desktop:
$ locale
LANG=es_ES
LANGUAGE=es_ES:es:en_US:en
LC_CTYPE="es_ES"
LC_NUMERIC="es_ES"
LC_TIME="es_ES"
LC_COLLATE="es_ES"
LC_MONETARY="es_ES"
LC_MESSAGES="es_ES"
LC_PAPER="es_ES"
LC_NAME="es_ES"
LC_ADDRESS="es_ES"
LC_TELEPHONE="es_ES"
LC_MEASUREMENT="es_ES"
LC_IDENTIFICATION="es_ES"
LC_ALL=
I have deleted ~/.dmrc and I have restarted the system but nothing.
GDM login screen also doesn't permit change this setting.
However, in the text terminals (tty1,...):
$ locale
LANG=es_ES.UTF-8
LANGUAGE=es_ES:es:en_US:en
LC_CTYPE="es_ES.UTF-8"
LC_NUMERIC="es_ES.UTF-8"
LC_TIME="es_ES.UTF-8"
LC_COLLATE="es_ES.UTF-8"
LC_MONETARY="es_ES.UTF-8"
LC_MESSAGES="es_ES.UTF-8"
LC_PAPER="es_ES.UTF-8"
LC_NAME="es_ES.UTF-8"
LC_ADDRESS="es_ES.UTF-8"
LC_TELEPHONE="es_ES.UTF-8"
LC_MEASUREMENT="es_ES.UTF-8"
LC_IDENTIFICATION="es_ES.UTF-8"
LC_ALL=
The solution to the problem is to edit the ~/.dmrc file, but I think this isn't the right way.
Why doesn't GDM read/apply the system locales? Why don't I see,
in GDM login screen, the box to change
the locale?
A: The language selector in GDM is gone:
*
*https://bugzilla.gnome.org/show_bug.cgi?id=671528
A: You'll want to ensure you have each locale installed on the machine. Not all locales are bundled with the install; if the packages (and the packages' dependencies) don't exist on the system, it will revert to the default first installed locale (en).
A: Have you read this: https://help.ubuntu.com/community/Locale
This may help you.
package rhomobile.mapview;
import java.util.Hashtable;
import java.util.Vector;
import net.rim.device.api.ui.UiApplication;
import com.xruby.runtime.builtin.ObjectFactory;
import com.xruby.runtime.builtin.RubyArray;
import com.xruby.runtime.builtin.RubyHash;
import com.xruby.runtime.builtin.RubyString;
import com.xruby.runtime.lang.RubyBasic;
import com.xruby.runtime.lang.RubyBlock;
import com.xruby.runtime.lang.RubyClass;
import com.xruby.runtime.lang.RubyModule;
import com.xruby.runtime.lang.RubyConstant;
import com.xruby.runtime.lang.RubyException;
import com.xruby.runtime.lang.RubyNoArgMethod;
import com.xruby.runtime.lang.RubyOneArgMethod;
import com.xruby.runtime.lang.RubyRuntime;
import com.xruby.runtime.lang.RubyValue;
public class MapView extends RubyBasic {
private static class Parent implements MapViewParent {
private static MapViewScreen screen = null;
private static class CallMapViewScreen implements Runnable {
private Parent thiz;
private String provider;
private Hashtable settings;
private Vector annotations;
public CallMapViewScreen(Parent t, String p, Hashtable s, Vector a) {
thiz = t;
provider = p;
settings = s;
annotations = a;
}
public void run() {
if (screen != null)
return;
//Initialize the screen.
screen = new MapViewScreen(thiz, provider, settings, annotations);
UiApplication.getUiApplication().pushModalScreen(screen);
}
};
public void create(String provider, Hashtable settings, Vector annotations) {
UiApplication.getUiApplication().invokeLater(
new CallMapViewScreen(this, provider, settings, annotations));
}
public void close() {
UiApplication.getUiApplication().invokeLater(new Runnable() {
public void run() {
if (screen == null)
return;
screen.close();
screen = null;
}
});
}
public void onChildClosed() {
screen = null;
}
public boolean closed() {
return screen == null;
}
public double getCenterLatitude() {
if (screen == null)
return 0.0;
return screen.getCenterLatitude();
}
public double getCenterLongitude() {
if (screen == null)
return 0.0;
return screen.getCenterLongitude();
}
};
private static Parent parent = new Parent();
public MapView(RubyClass c) {
super(c);
}
public static void initMethods(RubyModule klass) {
klass.getSingletonClass().defineMethod("create", new RubyOneArgMethod() {
protected RubyValue run(RubyValue receiver, RubyValue arg,
RubyBlock block) {
RubyHash settingsHash = null;
RubyArray annotationsArray = null;
RubyValue providerValue = null;
Hashtable settings = new Hashtable();
Vector annotations = new Vector();
RubyHash hash = (RubyHash)arg;
if (hash != null) {
RubyArray arKeys = hash.keys();
RubyArray arValues = hash.values();
for (int i = 0; i != arKeys.size(); ++i) {
RubyValue key = arKeys.get(i);
RubyValue value = arValues.get(i);
if (key == null || value == null)
continue;
String strKey = key.toString();
if (strKey.equals("settings")) {
if (!(value instanceof RubyHash))
throw new RubyException(RubyRuntime.ArgumentErrorClass,
"Wrong 'settings' type, should be Hash");
settingsHash = (RubyHash)value;
}
else if (strKey.equals("annotations")) {
if (!(value instanceof RubyArray))
throw new RubyException(RubyRuntime.ArgumentErrorClass,
"Wrong 'annotations' type, should be Array");
annotationsArray = (RubyArray)value;
}
else if (strKey.equals("provider")) {
if (!(value instanceof RubyString))
throw new RubyException(RubyRuntime.ArgumentErrorClass,
"Wrong 'provider' type, should be String");
providerValue = value;
}
}
}
if (settingsHash != null) {
RubyArray arKeys = settingsHash.keys();
RubyArray arValues = settingsHash.values();
for (int i = 0; i != arKeys.size(); ++i) {
RubyValue key = arKeys.get(i);
RubyValue value = arValues.get(i);
if (key == null || value == null)
continue;
String strKey = key.toString();
if (strKey.equals("map_type")) {
String strValue = value.toString();
if (!strValue.equals("roadmap") && !strValue.equals("satellite")
&& !strValue.equals("terrain") && !strValue.equals("hybrid"))
throw new RubyException(RubyRuntime.ArgumentErrorClass,
"Wrong 'map_type' value: " + strValue);
settings.put(strKey, strValue);
}
else if (strKey.equals("zoom_enabled"))
settings.put(strKey, new Boolean(value.inspect().equalsIgnoreCase("true")));
else if (strKey.equals("scroll_enabled"))
settings.put(strKey, new Boolean(value.inspect().equalsIgnoreCase("true")));
else if (strKey.equals("shows_user_location"))
settings.put(strKey, new Boolean(value.inspect().equalsIgnoreCase("true")));
else if (strKey.equals("region")) {
if (value instanceof RubyArray) {
RubyArray arr = (RubyArray)value;
if (arr.size() == 4) {
Hashtable region = new Hashtable();
double[] cs = {0.0, 0.0, 0.0, 0.0};
for (int k = 0; k != 4; ++k) {
String v = arr.get(k).toString();
try {
cs[k] = Double.parseDouble(v);
}
catch (NumberFormatException e) {
throw new RubyException(RubyRuntime.ArgumentErrorClass,
"Wrong region value: " + v + ", should be Float");
}
}
region.put("latitude", new Double(cs[0]));
region.put("longitude", new Double(cs[1]));
region.put("latDelta", new Double(cs[2]));
region.put("lonDelta", new Double(cs[3]));
settings.put(strKey, region);
}
}
else if (value instanceof RubyHash) {
RubyHash hsh = (RubyHash)value;
RubyArray hKeys = hsh.keys();
RubyArray hValues = hsh.values();
for (int j = 0; j < hKeys.size(); ++j) {
RubyValue hKey = hKeys.get(j);
RubyValue hValue = hValues.get(j);
if (hKey == null || hValue == null)
continue;
String strHKey = hKey.toString();
if (strHKey.equals("center")) {
settings.put("center", hValue.toString());
}
else if (strHKey.equals("radius")) {
String strHValue = hValue.toString();
try {
double radius = Double.parseDouble(strHValue);
settings.put("radius", new Double(radius));
}
catch (NumberFormatException e) {
throw new RubyException(RubyRuntime.ArgumentErrorClass,
"Wrong 'radius' parameter: " + strHValue + ", should be Float");
}
}
}
}
}
}
}
if (annotationsArray != null) {
for (int i = 0; i != annotationsArray.size(); ++i) {
Annotation annotation = new Annotation();
RubyValue val = annotationsArray.get(i);
if (!(val instanceof RubyHash))
throw new RubyException(RubyRuntime.ArgumentErrorClass,
"Wrong annotation value type, should be Hash");
RubyHash ann = (RubyHash)val;
RubyArray arKeys = ann.keys();
RubyArray arValues = ann.values();
for (int j = 0, lim = arKeys.size(); j < lim; ++j) {
RubyValue key = arKeys.get(j);
RubyValue value = arValues.get(j);
if (key == null || value == null ||
key.equals(RubyConstant.QNIL) ||
value.equals(RubyConstant.QNIL))
continue;
String strKey = key.toStr();
String strValue = value.inspect();
if (strKey.equals("latitude")) {
double v;
try {
v = Double.parseDouble(strValue);
}
catch (NumberFormatException e) {
throw new RubyException(RubyRuntime.ArgumentErrorClass,
"Wrong 'latitude' parameter: " + strValue + ", should be Float");
}
if (annotation.coordinates == null)
annotation.coordinates = new Annotation.Coordinates(v, 0);
else
annotation.coordinates.latitude = v;
}
else if (strKey.equals("longitude")) {
double v;
try {
v = Double.parseDouble(strValue);
}
catch (NumberFormatException e) {
throw new RubyException(RubyRuntime.ArgumentErrorClass,
"Wrong 'longitude' parameter: " + strValue + ", should be Float");
}
if (annotation.coordinates == null)
annotation.coordinates = new Annotation.Coordinates(0, v);
else
annotation.coordinates.longitude = v;
}
else if (strKey.equals("title"))
annotation.title = strValue;
else if (strKey.equals("subtitle"))
annotation.subtitle = strValue;
else if (strKey.equals("street_address"))
annotation.street_address = strValue;
else if (strKey.equals("resolved_address"))
annotation.resolved_address = strValue;
else if (strKey.equals("url"))
annotation.url = strValue;
else if (strKey.equals("pass_location"))
annotation.pass_location = strValue.equalsIgnoreCase("true") || strValue.equalsIgnoreCase("1");
}
annotations.addElement(annotation);
}
}
String providerId = "google";
/*
if (providerHash != null) {
RubyArray arKeys = providerHash.keys();
RubyArray arValues = providerHash.values();
for (int i = 0; i != arKeys.size(); ++i) {
RubyValue key = arKeys.get(i);
RubyValue value = arValues.get(i);
if (key == null || value == null)
continue;
String strKey = key.toString();
if (strKey.equals("id"))
providerId = value.toString().toLowerCase();
}
}
*/
if (providerValue != null)
providerId = providerValue.toString().toLowerCase();
parent.create(providerId, settings, annotations);
return RubyConstant.QNIL;
}
});
klass.getSingletonClass().defineMethod("close", new RubyNoArgMethod() {
protected RubyValue run(RubyValue receiver, RubyBlock block) {
parent.close();
return RubyConstant.QNIL;
}
});
klass.getSingletonClass().defineMethod("state_started", new RubyNoArgMethod() {
protected RubyValue run(RubyValue receiver, RubyBlock block) {
return parent.closed() ? RubyConstant.QFALSE : RubyConstant.QTRUE;
}
});
klass.getSingletonClass().defineMethod("state_center_lat", new RubyNoArgMethod() {
protected RubyValue run(RubyValue receiver, RubyBlock block) {
return ObjectFactory.createFloat(parent.getCenterLatitude());
}
});
klass.getSingletonClass().defineMethod("state_center_lon", new RubyNoArgMethod() {
protected RubyValue run(RubyValue receiver, RubyBlock block) {
return ObjectFactory.createFloat(parent.getCenterLongitude());
}
});
klass.getSingletonClass().defineMethod("set_file_caching_enable", new RubyOneArgMethod()
{
protected RubyValue run(RubyValue receiver, RubyValue arg, RubyBlock block)
{
//TODO: set_file_caching_enable
return RubyConstant.QNIL;
}
});
}
}
Manufactured, Modular Homes Filling North Dakota Project
By Matthew Silver / Business, Communities, Company News, Factory-Built Homes, home buyers, Manufactured and Modular Housing News, Manufactured Homes, Modular / February 12, 2014 February 12, 2014
Five hundred thirty acres of former farmland in Minot, North Dakota are set to become a new planned community of 2,000 residential units around a hub of commercial and retail space, including a park and trail system. Infrastructure work began last year to support the development, which includes what is now the 350 manufactured home (MH) site Wheatland Village, formerly the Federal Emergency Management Agency's (FEMA) temporary manufactured housing for those left homeless by the 2011 flood, many units of which have been sold to occupants. One hundred sixty of the homesites remain vacant, although Lifeway Homes is selling new MH to residents.
According to minotdailynews.com, Performance Homes of Minot is building 62 modular townhomes on property adjacent to Wheatland Village, which will be priced at $169,000 to $200,000, with the first homes available early this year. MHProNews.com has extensively covered the development of workforce and residential housing in this area dominated by the influx of workers seeking employment in and around the Bakken Oilfield drilling operation.
(Photo credit: Associated Press–modular mancamp in Williston, ND)
\section{Introduction}
\label{sec:intro}
Physical properties of living \cite{McMahon05} as well as synthetic systems in condensed \cite{Castelvecchi17} and soft \cite{Senyuk12} matter are determined by the interplay between the physical order parameter, geometry and topology. Specifically in magnetism, magnetization textures and dynamic responses become sensitive to bends and twists in physical space.
Curvature effects have emerged as a novel tool in various areas of physics to tailor electromagnetic properties and responses by means of geometrical deformations \cite{Bowick09,Turner10}. Typically, the consideration of curvature-induced effects is based on theories which involve \emph{local} interactions for the description of molecular alignment in liquid crystals \cite{Napoli12,Napoli13,Napoli18}, the physics of superconductors \cite{Vitelli04,Gladilin08,Fomin12}, macromolecular structures \cite{Forgan11}, and electronic properties of different corrugated films \cite{Cortijo07,Juan11,Yan13a,Gentile15,Ochoa17}. For many systems, if not for all, this local description is incomplete. For instance, in magnetically ordered systems the local picture fails to describe most micromagnetic textures, such as chiral domain walls, skyrmion bubbles and vortices. Therefore, the accepted fundamental foundation of modern magnetism necessarily requires the treatment of both local and nonlocal interactions on an equal footing~\cite{Landau35,Brown63,Aharoni96,Hubert09}.
In contrast, the modern theory of curvilinear magnetism is still at the stage where local~\cite{Carvalho-Santos08,Kravchuk12a,Gaididei14,Sheka15, Pylypovskyi15b,Sheka15c,Yershov15b,Pylypovskyi16, Yershov16,Moreno17a,Gaididei18a,Volkov18,Kravchuk18a} and nonlocal~\cite{Landeros07,Landeros10,Sheka13b,Sloika14, Yershov15,Otalora16, Hertel16,Tretiakov17,Otalora17, Sloika17,Otalora18} interactions are treated separately. This makes the description of these systems inherently incomplete: not only can important fundamental effects be missed, but the predictive power of the available theory is also limited.
Here, we present a generalized micromagnetic theory of curvilinear magnetism. The theory describes the impact of curvature-induced effects, driven by both local and nonlocal interactions, on static and dynamic magnetic textures in curved magnetic thin shells. Fundamentally, we identified new effects which do not exist in planar magnets. In particular, we demonstrated that the physics of curvilinear systems cannot be described in the frame of the established physical picture relying on surface and volume magnetostatic charges, introduced in a seminal work by W.~F.~\citet{Brown63}. The curvature leads to the appearance of a new magnetostatic charge, determined by local characteristics of the surface. This newcomer is responsible for the appearance of novel fundamental effects such as nonlocal anisotropy and nonlocal chiral effects. Furthermore, for more than 70 years it was firmly believed that the surface and volume magnetostatic charges are decoupled. They were always considered as the two sides of the same coin. We demonstrate that an intimate coupling appears between these two quantities. As a stark consequence, novel chiral effects emerge in spatially corrugated magnetic thin films.
These new effects are completely unexplored. We are convinced that their analysis will stimulate a rethinking of the origin of chiral and anisotropy effects in different systems, e.g. in fundamentally appealing and technologically relevant skyrmionic systems in polycrystalline thin films, where surface roughness is unavoidable.
On the technical side, we apply a novel mathematical framework based on the covariant derivative formalism, which allows us to separate explicit curvature effects from spurious effects of the curvilinear reference frame. Relying on symmetry considerations of the different interactions, we predict and classify possible curvature effects on the equilibrium magnetic texture in curved magnetic thin shells, going beyond the well-accepted linear-in-thickness approach.
The impact of this theory goes well beyond the magnetism community. The presented conclusions can be easily extended for studying the evolution of generic vector fields on curved shells in different models of condensed (graphene \cite{Yan13a}, superconductors \cite{Vitelli04}) and soft (nematics \cite{Napoli12}, cholesterics \cite{Napoli13}) matter.
\section{Results}
\label{sec:results}
\begin{figure*}
\includegraphics[width=\textwidth]{flat2curv}
\caption{(Color online) \textbf{Impact of curvature on a ferromagnetic shell.} (a) Schematics of a surface with the corresponding Darboux three-frame. Principal directions are shown by dashed lines. The central surface $\vec{\varsigma}$ is extruded along the normal direction $\hat{\vec{n}}$. Top and bottom surfaces have their own normals $\vec{n}_\textsc{s}^\text{top}$ and $\vec{n}_\textsc{s}^\text{bot}$, respectively. (b) The \textit{exchange} interaction is locally defined by tangential derivatives of the magnetization $\eth_im_j$ and by curvature-driven terms $\kappa_i m_j$. (c) The \textit{intrinsic DMI} is locally defined by $\eth_im_j$, as is the exchange, and by the mean-curvature-driven term $\mathfrak{S}m_n$. Exchange and intrinsic DMI both contribute to the regular exchange term in the total energy, to a chiral term of interfacial DMI type, and to anisotropies of uni- and biaxial type. (d) \textit{Magnetostatics} in its general form is expressed through three types of charges: the volume charge $\rho$, the surface charges $\sigma^\pm$ and the curvature-induced one $\mathfrak{S}$. Their combination results in shape anisotropy (in the same way as for flat samples), an anisotropic term and two chiral terms.
}
\label{fig:schematics}
\end{figure*}
We consider a curved ferromagnetic shell of thickness $h$ with a shape locally described by its Gaussian $\mathcal{K}(\vec{r})$ and mean $\mathcal{H}(\vec{r})$ curvatures. The magnetic texture is controlled by the exchange, anisotropy, magnetostatic and Dzyaloshinskii--Moriya interactions. The total energy, normalized by $4\pi M_s^2$, has the following form:
\begin{equation} \label{eq:E-total}
\begin{aligned}
E &= E_{\text{x}} + E_{\text{an}} + E_{\text{d}} + E_{\textsc{dm}}, \\
E_{\text{x}} &=- \ell^2 \int \mathrm{d} \vec{r}\, \vec{m}\cdot \vec{\nabla}^2\vec{m},\\
E_{\text{an}} &=\frac{Q}{2} \int \mathrm{d} \vec{r} \left(\vec{m} \cdot \vec{e}_a\right)^2, \\
E_{\text{d}} &=\frac{1}{8\pi} \!\!\int\!\! \mathrm{d} \vec{r} \!\! \int \!\! \mathrm{d} {\vec{r}'} \left(\vec{m}(\vec{r}) \!\cdot\! \vec{\nabla}\right)\left(\vec{m}({\vec{r}'}) \!\cdot\! {\vec{\nabla}'}\right) \! \frac{1}{\left|\vec{r} - {\vec{r}'}\right|},\!\!\!\! \\
E_{\textsc{dm}} & = D \int \mathrm{d} \vec{r} \left( m_n \vec{\nabla}\cdot \vec{m} - (\vec{m}\cdot \vec{\nabla}) m_n \right).
\end{aligned}
\end{equation}
Here, $\ell = \sqrt{A/4\pi M_s^2}$ is the exchange length, $A$ is the exchange constant, $M_s$ is the saturation magnetization, $Q = K/(2\pi M_s^2)$ with $K$ being the intrinsic crystalline anisotropy constant, $\vec{e}_a = \vec{e}_a(\vec{r})$ is the direction of the anisotropy axis and $\vec{m}(\vec{r}) = \vec{M}/M_s$ is the unit vector of magnetization. We suppose that the anisotropy direction $\vec{e}_a$ is determined by the surface geometry, it corresponds to one of the principal directions or their linear combination, see Appendix \ref{sec:geometry} for details, $D = D_\text{int}/(4\pi M_s^2)$ with $D_\text{int}$ being constant of the intrinsic Dzyaloshinskii--Moriya interaction (DMI), and $m_n$ is the normal component of magnetization.
We limit our discussion to the case of thin shells and describe them as an extrusion of a surface $\vec{\varsigma}(\vec{r})$ by a constant value $h$ along the vector $\vec{\hat{n}}= \vec{\hat{n}}(\vec{r})$ normal to the surface. Furthermore, we assume that the magnetization does not depend on the thickness coordinate along $\vec{\hat{n}}$.
By choosing the curvilinear reference frame, adapted to the geometry of an object, anisotropy obtains its usual spatially-invariant form, see Fig.~\ref{fig:schematics}.
For the theoretical analysis, we apply a new mathematical tool based on covariant derivatives. The main purpose of using this language is to separate two effects: the system-specific curvature effect and the spurious effect of the curvilinear reference frame, see Appendix~\ref{sec:geometry} for details. Because of the symmetry breaking imposed by the geometry, it is natural to restructure all magnetic energy terms containing spatial derivatives. A characteristic example is the exchange interaction: being isotropic in the Cartesian reference frame, it contains three components of different symmetries in curvilinear coordinates, $E_{\text{x}} = E_{\text{x}}^0 + E_{\text{x}}^\textsc{a} + E_{\text{x}}^\textsc{d}$~\cite{Gaididei14,Sheka15}. $E_{\text{x}}^0$ is a `common', regular isotropic part of the exchange interaction, which has a form similar to the one in a planar film:
\begin{subequations} \label{eq:E-ex}
\begin{equation} \label{eq:E-ex-0}
E_{\text{x}}^0 =
h \ell^2 \int \left(\eth_\alpha m_i\right) \left(\eth_\alpha m_i\right) \mathrm{d} S
\end{equation}
with $\eth_\alpha$ being the modified covariant derivative with respect to the surface coordinate $x_\alpha$, see Eq.~\eqref{eq:D-covar}, and $m_i$ being the magnetization components in the curvilinear orthonormal Darboux three-frame $\left\{\vec{e}_1, \vec{e}_2, \hat{\vec{n}}\right\}$ on the surface $\vec{\varsigma}$, where $\vec{e}_1$ and $\vec{e}_2$ are unit vectors corresponding to the principal directions and $\hat{\vec{n}} = \vec{e}_1\times \vec{e}_2$ is the normal to the surface, see Fig.~\ref{fig:schematics} and Appendix \ref{sec:geometry} for details. Here and below, we use Greek letters $\alpha,\,\beta,\ldots = {1,2}$ to denote indices restricted to the shell surface. To indicate all three components of any vector, we use Latin indices $i,\,j,\ldots = {1,2,n}$. Here and below, we also use the Einstein summation convention. We emphasize that all effects stemming from the choice of the reference frame are properly and unambiguously assigned to $E_\text{x}^0$. This represents the major advantage of the approach based on covariant derivatives.
The second term in the exchange energy reads
\begin{equation} \label{eq:Eex-A}
E_{\text{x}}^\textsc{a} = h\ell^2 \int w_\text{x}^{\textsc{a}} \mathrm{d}S , \quad w_\text{x}^{\textsc{a}} = \mathscr{K}_{ij} m_im_j. \\
\end{equation}
In general, this energy term describes the curvature-induced biaxial anisotropy, $\begin{Vmatrix}\mathscr{K}_{ij}\end{Vmatrix} = \text{diag} \left(\kappa_1^2, \kappa_2^2 ,\kappa_1^2 + \kappa_2^2\right)$ with $\kappa_1$ and $\kappa_2$ being local values of the principal curvatures, related to the Gaussian and mean curvature as $\mathcal{K} = \kappa_1 \kappa_2$ and $\mathcal{H} = \kappa_1 + \kappa_2$, respectively. The energy density of this term then reads
\begin{equation}\label{eq:wex-a}
w_\text{x}^{\textsc{a}} = \kappa_1^2 m_1^2 + \kappa_2^2 m_2^2 + (\kappa_1^2 + \kappa_2^2)m_n^2.
\end{equation}
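As a simple illustration, consider a circular cylinder of radius $R$ with the generatrix along $\vec{e}_1$, for which $\kappa_1 = 0$ and $\kappa_2 = 1/R$. Then the density \eqref{eq:wex-a} reduces to
\[
w_\text{x}^{\textsc{a}} = \frac{m_2^2 + m_n^2}{R^2} = \frac{1-m_1^2}{R^2},
\]
i.e., the curvature acts as an effective easy-axis anisotropy along the generatrix.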
A striking manifestation of the curvature-induced anisotropy is shape-induced patterning, for a review see \cite{Streubel16a}.
The last term in the exchange energy is a curvature-induced extrinsic DMI \cite{Gaididei14}
\begin{equation} \label{eq:Eex-D}
\begin{aligned}
E_{\text{x}}^\textsc{d} = & 2h\ell^2 \int \mathrm{d}S \left(w_\text{x}^{\textsc{d}\,1} + w_\text{x}^{\textsc{d}\,2} \right)\\
& w_\text{x}^{\textsc{d}\,\alpha} = \kappa_\alpha \mathscr{L}_{\alpha n}^{(\alpha)},\quad \alpha=1,2,
\end{aligned}
\end{equation}
where no summation over $\alpha$ is implied in Eq.~\eqref{eq:Eex-D}. This term is
determined by the curvilinear-geometry analogue of Lifshitz invariants
\begin{equation} \label{eq:Lifshitz-inv}
\mathscr{L}_{i j}^{(\alpha)} = m_i \eth_{\alpha} m_j - m_j \eth_{\alpha} m_i.
\end{equation}
\end{subequations}
The two Lifshitz invariants in \eqref{eq:Eex-D} are determined by principal curvatures $\kappa_1$ and $\kappa_2$. The curvature-induced DMI is a reason for a chiral symmetry breaking, i.e. magnetochiral effects \cite{Hertel13a}, for a review see \cite{Streubel16a}.
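Written out explicitly with the help of Eq.~\eqref{eq:Lifshitz-inv}, the chiral energy density in \eqref{eq:Eex-D} takes the form
\[
w_\text{x}^{\textsc{d}\,1} + w_\text{x}^{\textsc{d}\,2} = \kappa_1\left(m_1\eth_1 m_n - m_n \eth_1 m_1\right) + \kappa_2\left(m_2\eth_2 m_n - m_n \eth_2 m_2\right),
\]
showing that each principal curvature couples the normal magnetization component $m_n$ to the gradient of the corresponding in-surface component.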
The curvilinear geometry also affects the magnetostatic energy of the shell $E_{\text{d}}$. For further analysis, it is insightful to recast the magnetostatic volume charges:
\begin{equation} \label{eq:div-2D}
-\vec{\nabla}\cdot \vec{m} = -\eth_\alpha m_\alpha +\mathfrak{S}, \qquad \mathfrak{S}(\vec{r}) = \mathcal{H}(\vec{r}) m_n(\vec{r}).
\end{equation}
The physics of curvilinear magnetism naturally involves three fundamental charges: the surface and volume magnetostatic charges introduced by \citet{Brown63}, and the novel curvature-induced charge $\mathfrak{S}$, determined by the mean curvature $\mathcal{H}$.
Although the latter bears a striking similarity to a `conventional' surface charge $\sigma^{\pm} = \vec{m}\cdot \vec{n}_{\textsc{s}}^\text{top~(bot)}$ on the top $+$ (bottom $-$) surface, which is also proportional to the normal component of the magnetization, there is an important difference between $\mathfrak{S}$ and $\sigma^{\pm}$. The surface charges $\sigma^{\pm}$ have opposite signs on the opposite shell surfaces; hence, these charges act like an effective magnetostatic capacitor, see Fig.~\ref{fig:schematics}. In contrast, the curvature-induced charge $\mathfrak{S}$ is determined by the normal to the surface $\vec{\varsigma}$ (not by the top/bottom surface of the shell), and its sign is set by the mean curvature $\mathcal{H}$ alone. This new charge leads to new physical effects which are intrinsically nonlocal and reveal themselves as a nonlocal anisotropy and nonlocal chiral effects.
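For instance, for a spherical shell of radius $R$ one has $\kappa_1 = \kappa_2 = 1/R$, hence $\mathcal{H} = 2/R$ and the curvature-induced charge takes the simple form $\mathfrak{S} = 2 m_n/R$, see also Sec.~\ref{sec:discussion}.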
Similar to the exchange interaction, the geometrically broken symmetry results in the reorganization of the magnetostatic energy terms in a form adapted to the geometry: $E_{\text{d}} = E_\text{ms}^{0} + E_{\text{d}}^{\textsc{a}} + E_{\text{d}}^{\textsc{c}} + E_{\text{d}}^{{\textsc{s-v}}}$. The term $E_\text{ms}^{0}$ is similar to the planar case,
\begin{subequations} \label{eq:E-ms-nonloc}
\begin{equation} \label{eq:E0-ms}
\begin{aligned}
E_\text{ms}^{0} &= \frac{1}{8\pi} \int \vec{m}(\vec{r}) \cdot \mathrm{d} \vec{S} \int \frac{\vec{m}({\vec{r}'})\cdot \mathrm{d} {\vec{S}'}}{\left|\vec{r} - {\vec{r}'}\right|}\\
& + \frac{1}{8\pi} \int \mathrm{d} \vec{r}\ \eth_\alpha m_\alpha(\vec{r}) \int \mathrm{d} {\vec{r}'} \frac{\eth_\alpha m_\alpha({\vec{r}'})}{\left|\vec{r} - {\vec{r}'}\right|}.
\end{aligned}
\end{equation}
Here, $\mathrm{d}\vec{S} = \vec{n}^\text{+~(-)}\mathrm{d}S$ is a directed surface element.
To leading order in the shell thickness $h$, the above magnetostatic energy term is $E_\text{ms}^0 = (h/2)\int (\vec{m}\cdot \vec{\hat{n}})^2 \mathrm{d}S + \mathcal{O}(h^2)$ \cite{Carbou01,Slastikov05,Fratta16b}. This term is local and typically leads to the renormalization of anisotropy coefficients. It is the only term linear in $h$ stemming from the magnetostatic interaction.
All other contributions to the magnetostatic interaction are essentially nonlocal. In the thin-shell limit, they scale as $h^2$ with corrections of $\mathcal{O}(h^3)$.
The next magnetostatic term reads
\begin{equation} \label{eq:E1-ms}
\begin{aligned}
E_{\text{d}}^{\textsc{a}} = & \frac{1}{4\pi}\! \int w_\text{d}^{\textsc{a}} \mathrm{d} \vec{r},\\
w_\text{d}^{\textsc{a}} = & \mathfrak{S}(\vec{r}) \left[ \frac12 \int \frac{\mathfrak{S}({\vec{r}'})\mathrm{d} {\vec{r}'}}{\left|\vec{r} - {\vec{r}'}\right|} - \int \frac{\vec{m}({\vec{r}'}) \cdot \mathrm{d} {\vec{S}'}}{\left|\vec{r} - {\vec{r}'}\right|} \right].
\end{aligned}
\end{equation}
Although nonlocal, this term is bilinear in the normal component of the magnetization and contributes to the shape anisotropy.
The curvature-induced chiral part of the nonlocal magnetostatic interaction reads
\begin{equation} \label{eq:E2-ms}
E_{\text{d}}^{\textsc{c}} \! = \! \frac{1}{4\pi} \!\!\int \!\! w_\text{d}^{\textsc{c}} \mathrm{d} \vec{r},\quad w_\text{d}^{\textsc{c}} = -\eth_\alpha m_\alpha(\vec{r}) \!\int\! \frac{\mathfrak{S} ({\vec{r}'}) \mathrm{d} {\vec{r}'}}{\left|\vec{r} - {\vec{r}'}\right|}\!.\!\!
\end{equation}
It characterizes the interaction between the `common' volume charge $ \eth_\alpha m_\alpha(\vec{r}) $ and the curvature-induced charge $\mathfrak{S}$. Thus, the energy \eqref{eq:E2-ms} is specific to curved shells only. Similarly to the curvature-induced DMI $E_{\text{x}}^\textsc{d}$, the magnetostatic contribution $E_{\text{d}}^{\textsc{c}}$ is linear with respect to the derivative of magnetization. Bearing a similarity to the Lifshitz invariants in Eq.~\eqref{eq:Eex-D}, this energy term favours the coupling between the out-of-surface magnetization $m_n$ and the spatial derivatives of the in-surface components $m_\alpha$. Therefore, this term is responsible for nonreciprocal effects, in particular, magnetochiral effects. We emphasize that, in contrast to the curvature-induced DMI \eqref{eq:Eex-D}, this chiral term~\eqref{eq:E2-ms} is essentially nonlocal.
The last term in magnetostatics describes the interaction between surface and volume magnetostatic charges:
\begin{equation} \label{eq:E3-ms}
E_{\text{d}}^{{\textsc{s-v}}} \!=\! \frac{1}{4\pi} \!\! \int \!\! w_\text{d}^{\textsc{s-v}} \mathrm{d} \vec{r},\quad w_\text{d}^{\textsc{s-v}} \!=\! \eth_\alpha m_\alpha(\vec{r}) \!\int\! \frac{\vec{m}({\vec{r}'}) \!\cdot\! \mathrm{d} {\vec{S}'}}{\left|\vec{r}- {\vec{r}'}\right|}\!.\!\!
\end{equation}
This coupling between surface and volume magnetostatic charges does not exist in planar films.
It also vanishes for any homogeneous magnetic texture in a curved shell. We point out that the interaction \eqref{eq:E3-ms} is chiral and appears if the top and bottom surfaces of a shell are not equivalent, i.e., they cannot be mapped one onto the other by a translation along the normal. As a stark consequence, novel chiral effects emerge in spatially corrugated magnetic thin films. For instance, this term appears in cylindrical and spherical shells due to the difference in the area of the inner and outer surfaces.
\end{subequations}
The energy of the intrinsic DMI splits into two components: $E_\textsc{dm} = E_\textsc{dm}^0 + E_\textsc{dm}^\textsc{a}$~\cite{Kravchuk16a}. Here, $E_\textsc{dm}^0$ is the regular part of the DMI with a structure similar to the planar case:
\begin{subequations}\label{eq:intrinsic-DMI}
\begin{equation}\label{eq:DMI-0}
E_\textsc{dm}^0 = hD \int \left( m_n \eth_{\alpha}m_\alpha - m_\alpha \eth_{\alpha} m_n \right) \mathrm{d}S,
\end{equation}
cf.~Eq.~\eqref{eq:Eex-D}. The second part plays the role of an additional uniaxial anisotropy
\begin{equation}\label{eq:DMI-a}
E_\textsc{dm}^\textsc{a} = - hD \int \mathfrak{S}m_n \mathrm dS,
\end{equation}
cf.~Eq.~\eqref{eq:E1-ms}.
\end{subequations}
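As a simple example, on a spherical shell of radius $R$, where $\mathfrak{S} = 2m_n/R$, the contribution \eqref{eq:DMI-a} reduces to the local uniaxial anisotropy density $-2hD\,m_n^2/R$, which for $D>0$ favours the normal orientation of the magnetization.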
\section{Discussion}
\label{sec:discussion}
\begin{table*}
\caption{\label{tbl:classif-ground} Effects of curvature-induced chiral and anisotropic terms in the exchange and magnetostatic energies on the \emph{assumed equilibrium state} $\widetilde{\vec{m}}$ given by the anisotropy for different symmetries. Here, each of the curvatures is considered either zero or a nonconstant function. The central and right-hand parts of the table contain the consequences of the input given on the left. The abbreviations EA and HA correspond to easy-axis and hard-axis anisotropies, respectively. The last five columns show the presence of geometry-induced exchange and magnetostatic terms. Black arrows and dotted lines in the surfaces show the principal directions for the corresponding surfaces. The direction $\vec{e}_a$ is one of the unit vectors or their linear combination.}
\includegraphics[width=180mm,trim={2mm 72mm 2mm 71mm},clip]{table-static}
\end{table*}
The direct analysis of all magnetostatic energy contributions~\eqref{eq:E-ms-nonloc} is complicated by the nonlocal integration kernels. For this reason, we apply a symmetry analysis to the energy of a ferromagnetic shell to distinguish sources of possible effects of curvature on the magnetic texture.
In the following, we consider the case of strong anisotropies, which allows us to study magnetic textures that do not deviate significantly from the \emph{assumed equilibrium state} $\widetilde{\vec{m}}$ given by the anisotropy. We are interested in how local properties, i.e. local curvatures of the surface and the local orientation of the magnetic easy axis, impact the resulting global magnetic state.
\subsection{Effects of curvature, classified by the shell type}
\label{sec:classification}
Any surface $\vec{\varsigma}$ can be locally defined via its two principal curvatures $\kappa_1$ and $\kappa_2$, which are present in the energy terms discussed above. For our discussion, we consider uniaxial magnets with special types of anisotropy along one of the principal directions for the following distinct cases of surfaces:
(i) A class of developable surfaces of zero Gaussian curvature, $\mathcal{K}(\vec{r}) = 0$, includes cylinders, cones and tangent surfaces~\cite{Krivoshapko15}. They can be locally developed into a plane without stretching. Since cones and tangent surfaces are singular ones~\cite{Johansen05}, here, we consider generalized cylindrical surfaces only.
(ii) Minimal surfaces with vanishing mean curvature, $\mathcal{H}(\vec{r}) = 0$, have principal curvatures of opposite signs and in the vicinity of each point they are saddle-shaped. Minimal surfaces provide the minimal surface area enclosed by a given boundary.
(iii) General case with nonvanishing $\mathcal{H}$ and $\mathcal{K}$ and arbitrary local surface elements including convex and saddle ones.
The impact of the geometry on a magnetic texture is summarized in Table~\ref{tbl:classif-ground}. It is given by the interplay of the curvature-induced energy terms with the type of anisotropy and the orientation of the anisotropy axis. We group the curvature-related energy terms as follows:
\begin{equation}
\begin{aligned}
E = & E_0 + 2 h\ell^2 \int \mathrm dS \left( w_\text{x}^{\textsc{d}\,1} + w_\text{x}^{\textsc{d}\,2} + w_\text{x}^{\textsc{a}} \right) \\
& + \dfrac{1}{4\pi} \int \mathrm d\vec{r} \left( w_\text{d}^{\textsc{c}} + w_\text{d}^{\textsc{a}} \right).
\end{aligned}
\end{equation}
Here, $E_0 = E_\text{ex}^0 + E_\text{ms}^{0} + E_{\text{d}}^{{\textsc{s-v}}}$ absorbs the terms which do not depend explicitly on the curvature of the surface. The first two of them contain derivatives of magnetization components and can result in chiral effects even in the purely planar case due to a chiral magnetic texture, the so-called \emph{pattern-induced chirality breaking}~\cite{Streubel16a}. These effects are well studied for magnons on the background of solitons \cite{Sheka01}, vortices \cite{Sheka04} and skyrmions \cite{Kravchuk18}. We do not discuss the influence of the intrinsic DMI~\eqref{eq:intrinsic-DMI} here, as it has the symmetry of already included terms.
The third term, $E_{\text{d}}^{{\textsc{s-v}}}$, is present only for an inhomogeneous magnetization texture if the top and bottom surfaces of the shell are not equivalent.
The curvature-induced exchange terms $w_\text{x}^{\textsc{d}\,1}$, $ w_\text{x}^{\textsc{d}\,2}$ and $w_\text{x}^{\textsc{a}}$ scale linearly with the shell thickness. Both magnetostatic terms, $ w_\text{d}^{\textsc{c}} $ and $w_\text{d}^{\textsc{a}} $
are present only for curved shells with a non-zero mean curvature, $\mathcal{H}\neq 0$; for the infinitesimally thin shells they scale quadratically with thickness. These magnetostatic terms are absent for minimal surfaces, e.g. for catenoids and helicoids.
\begin{figure*}
\includegraphics[width=\linewidth]{influence}
\caption{(Color online) \textbf{Schematics of the modification of the \emph{assumed equilibrium state} and of the spin wave spectrum by curvature.} (a)--(d) Elliptical and circular cylinders with easy-normal anisotropy are affected by curvature in different ways: the magnetization pattern in the elliptical cylinder is modified due to the symmetry breaking. Magnetization is shown by red arrows, the normal direction is shown by dashed lines. (e) The static vortex state in the circular cylinder is modified by $w_\text{x}^\textsc{a}$ only. (f),~(g) Dispersion of spin waves propagating along the cylinder axis $\vec{e}_1$ [shown by the black arrow in panel (e)]. While the dispersion curve is only shifted by $\Delta \omega_{\text{gap}}$ for easy-normal anisotropy due to the contribution of $w_\text{x}^\textsc{a}$, it also becomes asymmetric for the vortex state due to $w_\text{d}^\textsc{c}$ and $w_\text{d}^\textsc{s-v}$~\cite{Otalora16,Otalora17}. Blue arrows show the change from the assumed to the actual dispersion curve, with $\Delta \omega_{\text{l}}$ and $\Delta \omega_{\text{r}}$ being the frequency shifts for the left and right branches of the lowest radially symmetric mode, respectively.}
\label{fig:influence}
\end{figure*}
It is important to stress that such an approach cannot be considered a sufficient condition for the existence, let alone the stability, of the corresponding magnetization states.
Table~\ref{tbl:classif-ground} provides the following information. If we know the local curvatures and the direction of the easy axis in the vicinity of a given point, we can assess whether the resulting magnetic texture will be modified due to the presence of a curvature-induced anisotropy and whether the texture will be chiral. We consider that the respective energy term impacts the texture if the term is nonzero; \emph{no other criteria are applied} while assembling Table~\ref{tbl:classif-ground}. Here, we discuss possible static states by applying only symmetry arguments to the energy functional, irrespective of whether they correspond to local minima of the energy; the investigation of the equilibrium magnetization texture for a concrete geometry is the subject of separate work. For example, $w_\text{x}^{\textsc{d}\,1}$ does not impact magnetic textures (it vanishes) in the following cases: either $\kappa_1 \equiv 0$, or the magnetic texture does not vary along $x_1$, or $m_n \equiv 0$.
A possible magnetic texture is assumed based on the sample symmetry and interplay between intrinsic and curvature-induced anisotropies.
For instance, we consider a developable surface with $\mathcal{K} = 0$ and a non-constant second principal curvature, e.g., an elliptical cylinder or a ripple, and assume that the magnetic easy axis points along the surface normal. Then, the magnetic texture $\vec{m}= \vec{m}(x_2)$ is influenced by $w_\text{x}^{\textsc{d}\,2}$ (a local chiral term). Both chiral and anisotropic magnetostatic terms, $w_\text{d}^\textsc{c}$ and $w_\text{d}^\textsc{a}$, respectively, are also present since the mean curvature is nonzero (these terms are essentially nonlocal). The term $w_\text{d}^\textsc{s-v}$, which is responsible for the interaction between the surface and volume charges, can appear due to the inequivalence of the top and bottom surfaces of the shell for an inhomogeneous magnetization texture with non-vanishing `common' volume magnetostatic charges. Based on these considerations, the assumed equilibrium state $\widetilde{\vec{m}}$ (e.g., a normally magnetized elliptical cylinder) will be modified by local and nonlocal curvature effects as follows: (i) the state becomes chiral, i.e., the deviation from $\hat{\vec{n}}$ is linear in $\kappa_2(x_2)$; and (ii) the effective easy-normal anisotropy is inhomogeneously modified. As a result, the initially assumed strictly normal magnetization distribution is modified by the appearance of the $m_2$ component: $\vec{m}= \{0, m_2(x_2), m_n(x_2)\}$.
Minimal surfaces do not exhibit magnetostatic effects that explicitly depend on the curvature, since $\mathcal{H} \equiv 0$. At the same time, all geometry-induced exchange-driven terms are present for any texture symmetry except the easy-surface anisotropy or the easy-axis anisotropy along $\vec{e}_2$. The chiral magnetostatics-driven term $w_\text{d}^{\textsc{s-v}}$ is always present for inhomogeneous textures with non-zero magnetostatic charge if the top and bottom surfaces of a shell are not equivalent. As in the previous case, the initially assumed strictly normal magnetic texture is modified by the appearance of the $m_1$ component: $\vec{m}= \{m_1(x_1), 0, m_n(x_1)\}$ for a catenoid.
For the general case of $\mathcal{H} \neq 0$ and $\mathcal{K} \neq 0$, any texture is expected to become chiral and to be modified due to the curvature-induced anisotropy (local and nonlocal). Note that Table~\ref{tbl:classif-ground} is also valid for nonlinear excitations of the equilibrium state, such as domain walls, if their symmetry corresponds to the function given in the `Texture symmetry' column.
\subsection{Special cases of magnetic shells}
Of special interest is the spherical geometry with $\kappa_1 = \kappa_2 = \text{const}$, see Table~\ref{tbl:classif-ground}. The curvature-induced exchange-driven DMI \eqref{eq:Eex-D} is well established for spherical surfaces. For magnetic vortices, it results in a coupling between the localized out-of-surface vortex core structure $m_n$ and the delocalized in-surface magnetization texture $m_1$, the so-called polarity--chirality coupling \cite{Kravchuk12a}. Note that without the nonlocal magnetostatic interaction, the magnetization state on the sphere forms a three-dimensional onion state with the in-surface meridional direction. Very recently it was shown that the volume magnetostatics results in a whirligig state \cite{Sloika16c}, which has no `common' volume charges; hence, the magnetostatic energy of such a state is described by the curvature-induced charges $\mathfrak{S}=2m_n/R$, see \eqref{eq:E1-ms}.
Using a spherical shell with easy-surface anisotropy as a reference example, let us estimate the conditions for the nonlocality of curvature effects. We consider a shell with localized curvature in the shape of a curved bump accommodating a localized topological defect. The defect size $w$ is much smaller than the typical curvature radius. Under this assumption, the curvature can be taken constant (as for a sphere, $\kappa_1(0) = \kappa_2 (0) =\kappa_0$) in the vicinity of the magnetic defect; hence, we model the surface near the defect as spherical. The local curvature effects are determined mainly by the curvature-induced DMI, $E_{\text{x}}^\textsc{d}\sim h \ell^2\kappa_0 w$. The nonlocal curvature effects are mainly caused by the volume magnetostatic charges. Using an asymptotic analysis similar to \cite{Sloika17}, one can estimate that, to leading order in the curvature, the magnetostatic contribution scales as $ h^2 w^2 \kappa_0$. Both energy terms describing curvature effects, the local and the nonlocal one, become of the same order when the film thickness is $h_c\approx \ell^2/w$. For topological defects with a typical size $w$ similar to the exchange length $\ell$, we obtain $h_c\sim \ell$. Note that the defect size can be much smaller than the exchange length: e.g., the curvature-induced skyrmion in a spherical shell with easy-normal anisotropy has a typical size $w\sim \ell^2\kappa_0$ \cite{Kravchuk16a}, which results in $h_c\sim 1/\kappa_0\gg \ell$. For thin films with $h\lesssim h_c$, the local picture with the exchange-driven curvature effects is adequate. Thicker films require nonlocal effects to be considered in the description of magnetic textures.
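The crossover thickness quoted above follows from a one-line balance of the two contributions,
\[
h_c \ell^2 \kappa_0 w \sim h_c^2 w^2 \kappa_0 \quad \Longrightarrow \quad h_c \sim \frac{\ell^2}{w}.
\]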
Developable surfaces, including the family of circular and elliptical cylinders, are a special case due to the absence of the $w_\text{x}^{\textsc{d}\,1}$ term. The effect of curvature on the magnetic state can be illustrated for the case of strong easy-normal anisotropy. According to Table~\ref{tbl:classif-ground}, the assumed normal state $\widetilde{\vec{m}}=\hat{\vec{n}}$ is affected by the chiral contributions $ w_\text{x}^{\textsc{d}\,2}$, $w_\text{d}^{\textsc{c}}$, and $ w_\text{d}^{\textsc{s-v}}$ in the case of an elliptic cylinder, see Figs.~\ref{fig:influence}(a)--(b). The tilt of magnetization from the normal direction is mainly determined by the curvature variation, $\partial_2\kappa_2$. Analogous considerations allow us to conclude that the assumed state along the normal direction is preserved for the circular cylinder with $\kappa_2=\text{const}$, see Figs.~\ref{fig:influence}(c)--(d).
We focused the above analysis on simple magnetization textures. Nevertheless, the proposed theory can also be used to describe complex static and dynamic excitations.
To illustrate this approach, we consider the spin wave propagation along the straight generatrix $\vec{e}_1$ for the equilibrium states of the cylinders obtained in Fig.~\ref{fig:influence}. The analysis of the energy terms predicts the reciprocal propagation of spin waves in cylinders with easy-normal anisotropy for both the elliptic and circular cases. Similar effects are known for planar films with intrinsic DMI \cite{Cortes-Ortuno13}. The curvature-induced anisotropy $w_\text{x}^\textsc{a}$ as well as the magnetostatic term $w_\text{d}^{\textsc{a}}$ are responsible for the shift of the magnon gap $\Delta \omega_{\text{gap}}$, see Fig.~\ref{fig:influence}(f). Note that this effect is similar to the magnon gap shift for the vortex domain wall in circular cylinders \cite{Gonzalez10}.
Now we consider the circular cylinder with anisotropy along $\vec{e}_2$. According to Table~\ref{tbl:classif-ground}, there are no chiral effects for the equilibrium state, and the resulting vortex state corresponds to the azimuthal anisotropy direction. One can excite spin waves propagating along $\vec{e}_1$. They are influenced by the curvature-induced anisotropy $w_\text{x}^\textsc{a}$, which results in the magnon gap shift $\Delta \omega_{\text{gap}}$, see Fig.~\ref{fig:influence}(g). Typically, the analysis of energy terms is insufficient for the description of dynamical excitations such as spin waves: one needs to analyse the Landau--Lifshitz equations. The key point of such an analysis is to derive an effective field. In the case under consideration, the only nonvanishing chiral energy terms can be written through the normal components of the effective dipolar fields $H_n^\textsc{c}$ and $H_n^\textsc{s-v}$:
\begin{equation} \label{eq:E-ms-cylinder}
\begin{aligned}
\!w_\text{d}^{\textsc{c}\,\text{cyl}} & = - m_n(x_1) H_n^\textsc{c}\\
& =- \dfrac{m_n(x_1)}{\rho} \int \frac{ \partial_1' m_1(x_1') \mathrm{d} \vec{r'}}{\left|\vec{r} - \vec{r'}\right|},\\
\!w_\text{d}^{\textsc{s-v}\,\text{cyl}}\! & = - m_n(x_1)H_n^\textsc{s-v} \\
& =\! \rho m_n(x_1) \!\!\int \!\!\mathrm{d} x_1' \mathrm{d} x_2' \frac{\partial_1' m_1(x_1')}{\left|\vec{r} - \vec{r'}\right|}\Biggr\rvert_{\rho=R}^{\rho=R+h} \!\!,
\end{aligned}
\end{equation}
with $\rho$ being the radial cylindrical coordinate. One can see that the dynamical magnetization $m_1\propto e^{i k_1 x_1}$ breaks the mirror invariance $x_1 \to -x_1$ of the energy densities as well as of the effective fields.
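Indeed, substituting the plane wave $m_1 \propto e^{i k_1 x_1}$ into Eq.~\eqref{eq:E-ms-cylinder} replaces the derivative by $\partial_1' m_1 = i k_1 m_1$, which is odd under $k_1 \to -k_1$; hence, the chiral energy densities, and consequently the magnon frequencies, are not invariant under the reversal of the propagation direction.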
Finally, the wavelength of magnons at a given frequency is different for opposite propagation directions, which results in a splitting of the spin wave states with left- and right-handed chiralities, studied very recently in Refs.~\cite{Otalora16,Otalora17}, see Fig.~\ref{fig:influence}(g).
\section{Conclusions and outlook}
\label{sec:conclusions}
The magnetism in curved geometries encompasses a range of fascinating geometry-induced effects in the magnetic properties of materials \cite{Streubel16a}. Here, we propose a platform for the theoretical analysis of magnetization textures in curvilinear ferromagnetic shells of different geometries. The developed generalized micromagnetic theory of curvilinear ferromagnetic shells allows one to treat both local (exchange and anisotropy) and nonlocal (magnetostatic) interactions on the same footing.
To illustrate our theory we classify possible curvature effects on the equilibrium states.
We focus our analysis on rather simple magnetization structures, mostly defined by the anisotropy. Nevertheless, the developed theory is general: it also allows describing strongly nonlinear magnetization textures, e.g., domain walls, vortices, and skyrmions. It is important to specify that our illustrations of the proposed micromagnetic theory of curvilinear ferromagnetic shells are based only on symmetry arguments for the energy functional. In particular, these arguments cannot be considered as sufficient conditions for the existence of a magnetization state. Nevertheless, we hope that the presented work will open a path for further investigations of magnetization textures for concrete geometries. In particular, the proposed theory can be applied to the prediction of properties and responses of curved thin films. This allows one to carry out targeted design and optimization for specific spintronic and magnetooptic devices and applications. The proposed theory can be generalized to include the intrinsic Dzyaloshinskii--Moriya interaction of the film by introducing a mesoscale DMI \cite{Volkov18}. Still, the key impact of the developed theory is in the possibility to tailor the properties of `standard' ferromagnets to realize chiral textures.
These developments will pave the way towards new device ideas relying on curvature effects in magnetic nanostructures.
We expect that the impact goes well beyond the magnetism community. The presented conclusions can be easily extended to the study of the evolution of generic vector fields on curved shells in different models of condensed (graphene \cite{Yan13a}, superconductors \cite{Vitelli04}) and soft (nematics, cholesterics \cite{Napoli12}) matter.
\begin{acknowledgements}
The authors thank Dr. Volodymyr Kravchuk for helpful discussions. D.~D.~S. and O.~V.~P. thank Helmholtz-Zentrum Dresden-Rossendorf e.~V. (HZDR), where part of this work was performed, for their kind hospitality and acknowledge the support from the Alexander von Humboldt Foundation (Research Group Linkage Programme). O.~V.~P. acknowledges the support from DAAD (code No. 91530902). We acknowledge the support from FONDECYT 1161403 and Centers of excellence with Basal/CONICYT financing, grant FB0807, CEDENNA. This work was supported by the Program of Fundamental Research of the Department of Physics and Astronomy of the National Academy of Sciences of Ukraine (Project No. 0116U003192). This work was partially supported by Taras Shevchenko National University of Kyiv (Projects 19BF052-01 and 18BF052-01M).
\end{acknowledgements}
\section{Introduction}
Inflation \cite{Guth:1980zm} has become the leading paradigm for the very early universe. However, the detailed mechanism for inflation still remains unknown. Inspired by the picture of string theory landscape \cite{Bousso:2000xa}, one could expect that the inflationary potential has very complicated structure \cite{Huang:2008jr}. Inflation in the string theory landscape has important implications in both observable stage of inflation and eternal inflation.
The complicated inflationary potentials in the string theory landscape open up a great number of interesting observational effects during observable inflation. Studies investigating the complicated structure of the inflationary potential include multi-stream inflation \cite{Li:2009sp, Li:2009me}, quasi-single field inflation \cite{Chen:2009we}, meandering inflation \cite{Tye:2009ff}, the old curvaton \cite{Gong:2008ni}, etc.
The string theory landscape also provides a playground for eternal inflation. Eternal inflation is a very early stage of inflation, during which the universe reproduces itself, so that inflation becomes eternal to the future. Eternal inflation, if it indeed happened (for counter arguments see, for example, \cite{Mukhanov:1996ak}), can populate the string theory landscape, providing an explanation for the cosmological constant problem in our bubble universe by anthropic arguments.
In this Letter, we shall focus on the multi-stream inflation scenario. Multi-stream inflation is proposed in \cite{Li:2009sp}. And in \cite{Li:2009me}, it is pointed out that the bifurcations can lead to a multiverse. Multi-stream inflation assumes that during inflation there exist bifurcation(s) in the inflation trajectory. For example, bifurcations take place naturally in a random potential, as illustrated in Fig. \ref{fig:random}. We briefly review multi-stream inflation in Section \ref{sec:observable}. The details of some contents in Section \ref{sec:observable} can be found in \cite{Li:2009sp}. We discuss some new implications of multi-stream inflation for the inflationary multiverse in Section \ref{sec:eternal}.
\begin{figure}
\includegraphics[width=0.37\textwidth]{random.eps}
\caption{In this figure, we use a tilted random potential to mimic an inflationary potential in the string theory landscape. One can expect that in such a random potential, bifurcation effects happen generically, as illustrated by the trajectories in the figure.}
\label{fig:random}
\end{figure}
\begin{figure}
\includegraphics[width=0.37\textwidth]{msinf.eps}
\caption{One sample bifurcation in multi-stream inflation. The inflation trajectory bifurcates into $A$ and $B$ when the comoving scale $k_1$ exits the horizon, and recombines when the comoving scale $k_2$ exits the horizon.}
\label{fig:msinf}
\end{figure}
\section{Observable bifurcations} \label{sec:observable}
In this section, we discuss the possibility that the bifurcation of multi-stream inflation happens during the observable stage of inflation. We review the production of non-Gaussianities, features and asymmetries \cite{Li:2009sp} in the CMB, and investigate some other possible observational effects.
To be explicit, we focus on a single bifurcation, as illustrated in Fig. \ref{fig:msinf}. We denote the initial (before bifurcation) inflationary direction by $\varphi$, and the initial isocurvature direction by $\chi$. For simplicity, we let $\chi=0$ before the bifurcation. When the comoving wave number $k_1$ exits the horizon, the inflation trajectory bifurcates into $A$ and $B$. When the comoving wave number $k_2$ exits the horizon, the trajectories recombine into a single trajectory. The universe breaks into of order $k_1/k_0$ patches (where $k_0$ denotes the comoving scale of the current observable universe), each of which experienced inflation along either trajectory $A$ or trajectory $B$. The choice of trajectory is determined by the isocurvature perturbation $\delta\chi$ at scale $k_1$. This picture is illustrated in Fig. \ref{fig:msicmb}.
\begin{figure}
\includegraphics[width=0.37\textwidth]{msicmb.eps}
\caption{In multi-stream inflation, the universe breaks up into patches with comoving scale $k_1$. Each patch experienced inflation along either trajectory $A$ or trajectory $B$. These different patches can be responsible for the asymmetries in the CMB.}
\label{fig:msicmb}
\end{figure}
We shall classify the bifurcation into three cases:
{\it Symmetric bifurcation}. If the bifurcation is symmetric, in other words, $V(\varphi,\chi)=V(\varphi, -\chi)$, then there are two potentially observable effects, namely a quasi-single field inflation effect and an effect from domain-wall-like objects, which we call domain fences.
As discussed in \cite{Li:2009sp}, the discussion of the bifurcation effect becomes simpler when the isocurvature direction has a mass of order the Hubble parameter. In this case, except at the bifurcation and recombination points, trajectories $A$ and $B$ each experience quasi-single field inflation. As these trajectories have turnings, the analysis in \cite{Chen:2009we} can be applied here. The perturbations, especially the non-Gaussianities, in the isocurvature directions are projected onto the curvature direction, resulting in a correction to the power spectrum and potentially large non-Gaussianities. As shown in \cite{Chen:2009we}, the amount of non-Gaussianity is of order
\begin{equation}
f_{NL} \sim P_\zeta^{-1/2} \left(\frac{1}{H}\frac{\partial^3 V}{\partial \chi^3}\right) \left(\frac{\dot\theta}{H}\right)^3~,
\end{equation}
where $\theta$ denotes the angle between the true inflation direction and the $\varphi$ direction.
As shown in Fig. \ref{fig:msicmb}, the universe is broken into patches during multi-stream inflation. There are wall-like boundaries between these patches. During inflation, these boundaries are initially domain walls. However, after the recombination of the trajectories, the tensions of these domain walls vanish. We call these objects domain fences. As is well known, domain walls cause disasters in cosmology because of their tension. However, without tension, domain fences do not necessarily cause such disasters. It is interesting to investigate whether there are observational consequences of these domain fences.
{\it Nearly symmetric bifurcation}. If the bifurcation is nearly symmetric, in other words, $V(\varphi,\chi) \simeq V(\varphi, -\chi)$ but not exactly equal, which can be achieved by a spontaneous breaking and restoring of an approximate symmetry, then besides the quasi-single field effect and the domain fence effect, there will be four more potentially observable effects in multi-stream inflation, namely features and asymmetries in the CMB, non-Gaussianity at scale $k_1$, and squeezed non-Gaussianity correlating scale $k_1$ and scales $k$ with $k_1<k<k_2$.
The CMB power asymmetries are produced because, as in Fig. \ref{fig:msicmb}, patches coming from trajectory $A$ or $B$ can have different power spectra $P_\zeta^A$ and $P_\zeta^B$, which are determined by their local potentials. If the scale $k_1$ is close to the scale of the observable universe $k_0$, then multi-stream inflation provides an explanation of the hemispherical asymmetry problem \cite{Erickcek:2008sm}.
The features in the CMB (here a feature denotes an extra-large perturbation at a single scale $k_1$) are produced as a result of the e-folding number difference $\delta N$ between the two trajectories. From the $\delta N$ formalism, the curvature perturbation on the uniform density slice at scale $k_1$ has an additional contribution
\begin{equation}
\delta\zeta_{k_1}\sim \delta N \equiv |N_A-N_B|~.
\end{equation}
These features in the CMB are potentially observable in future precise CMB measurements. As the additional fluctuation $\delta\zeta_{k_1}$ does not obey a Gaussian distribution, there will be non-Gaussianity at scale $k_1$.
Finally, there are also correlations between scale $k_1$ and scale $k$ with $k_1<k<k_2$. This is because the additional fluctuation $\delta\zeta_{k_1}$ and the asymmetry at scale $k$ are both controlled by the isocurvature perturbation at scale $k_1$. Thus the fluctuations at these two scales are correlated. As estimated in \cite{Li:2009sp}, this correlation results in a non-Gaussianity of order
\begin{equation}
f_{NL}\sim \frac{\delta\zeta_{k_1}}{\zeta_{k_1}} \frac{P_\zeta^A-P_\zeta^B}{P_\zeta^A} P_\zeta^{-1/2}~.
\end{equation}
{\it Non-symmetric bifurcation}. If the bifurcation is not symmetric at all, especially with large e-folding number differences (of order ${\cal O}(1)$ or greater) along different trajectories, the anisotropy in the CMB and the large scale structure becomes too large at scale $k_1$. However, in this case, regions with a smaller e-folding number have exponentially small volume compared with regions with a larger e-folding number. Thus the anisotropy can take the form of great voids. We shall address this issue in more detail in \cite{prep}. Trajectories with e-folding number differences from ${\cal O}(10^{-5})$ to ${\cal O}(1)$ during the observable stage of inflation are ruled out by the large scale isotropy of the observable universe.
In the remainder of this section, we would like to make several additional comments on multi-stream inflation:
{\it The possibility that the bifurcated trajectories never recombine}. In this case, one needs to worry about the domain walls, which do not become domain fences during inflation. These domain walls may eventually become domain fences after reheating anyway. Another problem is that the e-folding numbers along different trajectories may differ too much, producing too much anisotropy in the CMB and the large scale structure. However, similar to the discussion of the non-symmetric bifurcation, in this case the observable effect could become great voids due to a large e-folding number difference. The case without recombination of trajectories also has applications in eternal inflation, as we shall discuss in the next section.
{\it Probabilities for different trajectories}. In \cite{Li:2009sp}, we considered the simple example in which, at the bifurcation, the inflaton runs into trajectories $A$ and $B$ with equal probabilities. This assumption does not need to be satisfied in more general cases. The probabilities of running into different trajectories can be of the same order of magnitude, or exponentially different. In the latter case, there is a potential barrier in front of one trajectory, which can be leaped over by a large fluctuation of the isocurvature field. A large fluctuation of the isocurvature field is exponentially rare, resulting in exponentially different probabilities for the different trajectories. A bifurcation of this kind is typically non-symmetric.
{\it The bifurcation point itself does not result in eternal inflation}. As is well known, in single field inflation, if the inflaton is released at a local maximum on a ``top of the hill'', a stage of eternal inflation is usually obtained. However, this is not the case at the bifurcation point: although the $\chi$ direction is released at a local maximum, the $\varphi$ direction keeps rolling at the same time. The inflation direction is a combination of these two directions. So multi-stream inflation can coexist with eternal inflation, but is not itself necessarily eternal.
\section{Eternal bifurcations} \label{sec:eternal}
In multi-stream inflation, the bifurcation effect may also take place during an eternal stage of inflation. In this case, it provides interesting ingredients for eternal inflation. These ingredients include an alternative mechanism for producing different bubble universes and local terminations of eternal inflation, as we shall discuss separately.
{\it Multi-stream bubble universes}. The most discussed mechanisms to produce bubble universes are tunneling processes, such as Coleman de Luccia instantons \cite{Coleman:1980aw} and Hawking Moss instantons \cite{Hawking:1981fz}. In these processes, the tunneling events, which are usually exponentially suppressed, create new bubble universes, while most of the spatial volume remains in the old bubble universe at the instant of tunneling.
If bifurcations of multi-stream inflation happen during eternal inflation, two kinds of new bubble universes can be created with similar probabilities. In this case, at the instant of bifurcation, both kinds of bubble universes have nearly equal spatial volume. With this change of probabilities, the measures for eternal inflation should be reconsidered for the multi-stream type of bubble creation mechanism.
If the inflation trajectories recombine after a period of inflation, the different bubble universes will eventually have the same physical laws and constants of nature. On the other hand, if the different inflation trajectories do not recombine, then the different bubble universes created by the bifurcation will have different vacuum expectation values of the scalar fields, resulting in different physical laws or constants of nature. It is interesting to investigate whether the bifurcation effect is more effective than the tunneling effect at populating the string theory landscape.
Note that in multi-stream inflation, it is still possible for different trajectories to have exponentially different probabilities, as discussed in the previous section. In this case, multi-stream inflation behaves similarly to Hawking Moss instantons during eternal inflation.
\begin{figure}
\includegraphics[width=0.37\textwidth]{msinfeternal.eps}
\vspace{2mm}
\includegraphics[width=0.37\textwidth]{cascade.eps}
\caption{Cascade creation of bubble universes. In this figure, we assume trajectory $A$ is the eternal inflation trajectory, and trajectory $B$ is the non-eternal inflation trajectory.}
\label{fig:cascade}
\end{figure}
{\it Local terminations of eternal inflation}. It is possible that during multi-stream inflation, an inflation trajectory bifurcates into one eternal inflation trajectory and one non-eternal inflation trajectory with similar probabilities. In this case, the inflaton on the eternal inflation trajectory frequently jumps back to the bifurcation point, resulting in a cascade creation of bubble universes, as illustrated in Fig. \ref{fig:cascade}. This cascade creation of bubble universes, if realized, is more efficient at producing reheating bubbles than tunneling effects. Thus it reduces the measure for eternal inflation.
There are some other interesting issues for bifurcation in the multiverse. For example, the bubble walls may be observable in the present observable universe, and the bifurcations can lead to a multiverse without eternal inflation. These possibilities are discussed in \cite{Li:2009me}.
\section{Conclusion and discussion}
To conclude, we briefly reviewed multi-stream inflation during observable inflation. Some new issues, such as domain fences and the connection with quasi-single field inflation, were discussed. We also discussed multi-stream inflation in the context of eternal inflation. The bifurcation effect in multi-stream inflation provides an alternative mechanism for creating bubble universes and populating the string theory landscape. It also provides a very efficient mechanism for locally terminating eternal inflation.
\section*{Acknowledgment}
We thank Yifu Cai for discussion. This work was supported by NSERC and an IPP postdoctoral fellowship.
Biography
He followed his father to Africa with the Pompeians who opposed Julius Caesar, and took part in the Battle of Thapsus, where they were defeated by Caesar. Although he was pardoned by Caesar, who had allowed him to keep his father's property, he joined the conspirators plotting against Caesar, in particular his brother-in-law Brutus (husband of his sister Porcia) and Brutus's ally Cassius.
After the killing of Caesar, Marco Porcio Catone fled to Greece together with other conspirators led by Cassius and Brutus, pursued by the forces of the Second Triumvirate led by Antony and Octavian. He died in 42 BC in the second Battle of Philippi, which sealed the definitive defeat of Caesar's assassins.
Megan Davis, MD graduated with her Bachelor of Arts in Psychology from Baylor University and continued on to medical school at The University of Oklahoma College of Medicine. She completed her fellowship training in hospice and palliative medicine, as well as her residency in internal medicine and pediatrics, at The University of Arkansas for Medical Sciences in Little Rock. Dr. Davis is board certified in internal medicine, pediatrics, and hospice and palliative medicine. She joins Dr. Juan Lombeida, Supportive Care physician at Highlands, as they expand their supportive care program in both Rogers and Fayetteville. Dr. Davis will see patients at both locations.
\section{Introduction}
The Blue Whale Challenge is organised in such a way as to ultimately brainwash the minds of the players and drive them to cause self-harm. The tasks include waking up at odd hours, listening to psychedelic music, watching scary videos and inflicting cuts and wounds on their bodies~\cite{9}. This leaves the players with disturbed minds, making them more susceptible to influence. \textquotedblleft At some point, it is necessary to push the teenager not to sleep at night. [In this way, their] psyche becomes more susceptible to influence\textquotedblright, Philipp Budeikin said, explaining his tactics of manipulation~\cite{2}.
\begin{figure}
\centering
\frame{\includegraphics[width=0.5\textwidth,keepaspectratio]{50-tasks.jpg}}
\caption{The list of 50 Tasks as found on Reddit. We find some minor modifications in the list at different sources.}
\label{fig:task}
\end{figure}
The curators and potential players use online social networks to contact each other. Curators seek out young people on social media who want to take part in the 50-day challenge and subject them to the tasks~\cite{26}. Also, one task of the game - Task 8 in Fig. \ref{fig:task} - asks the player to post to VKontakte (VK), a Russian social networking website. Posts about the challenge have now spread to Twitter, Instagram, Facebook, Reddit and other social networks.
The creator of the game, Philipp Budeikin, 21 years, was arrested for coaxing 16 schoolgirls to kill themselves~\cite{1}. Another Russian, Ilya Sidorov, 26 years, confessed to being the administrator of a so-called suicide group that had 32 under-age members~\cite{1}. The most recent administrator caught was a 17-year-old Russian girl who initially played the game but did not take her life in the end, instead turned into a curator~\cite{3}. A 17-year-old Chinese student was also arrested and charged with extremism over a Blue Whale chat group~\cite{19}. But even with the original curators in jail, how is the game still thriving?
There are a lot of misconceptions about the game. Some people think that it exists as an APK file on the Play Store. Though there have been claims that at some point such applications existed on the Play Store, there is no such application as of now. People also think that it is a flash game available on some website. But the game actually thrives as a phenomenon on social media, not as an application. Some people are trying to hunt down public groups on social media websites like VK or Facebook~\cite{7}. A Chinese tech giant, Tencent, has found at least 12 groups on its QQ instant messaging service using keywords related to the Blue Whale game~\cite{18}. Other reports claim that the game is fake and made the news because of the spread of misinformation~\cite{6}. Several cases of suicide and self-harm have not gained as much \textquotedblleft popularity\textquotedblright\ as similar incidents that are said to have been caused by the challenge, and it is difficult to point out suicides that were caused solely by the game~\cite{5}. \textquotedblleft It is said that reckless journalism actually created a sense of panic about the Blue Whale challenge when the real concern should actually be addressing mental illness.\textquotedblright~\cite{6}
There are also claims that accounts of people are hacked and used as \textquotedblleft curator\textquotedblright\ accounts to incite others, and that at some point in time there were links on Facebook leading to the game~\cite{8}. Players were previously sought out through \textquotedblleft death groups\textquotedblright\ on social networks like VK. It is believed that members of these groups contacted curators via direct message and primarily conversed in Russian~\cite{7}. \textquotedblleft So, if the game's curators are in prison, how are teens still playing it? Well, the problem is, with all these teens out there searching for the game, all they need to do is stumble across one individual who is looking to exploit them and very soon they'll be playing Blue Whale, or at least a copy cat's version of the game\textquotedblright~\cite{7}. Any person on the internet can claim to be a curator and continue the game, inciting more and more people to commit self-harm. There can also be bots that act as curators and send messages to people. Since the primary link between victims and curator(s) is online social media, we can try to find out users who might be vulnerable and spot common properties these accounts might share. Governments and authorities across the world are trying to take steps to curb the spread of the Blue Whale Challenge.
We wish to (1) understand the social media spread of the challenge, (2) spot the behaviour of the people taking interest in Blue Whale challenge and, (3) analyse demographics of the users who may be playing the game.
\section{Related Work}
Scientific literature was reviewed to get a deeper understanding of the challenge and how it thrives online. In~\cite{16}, the authors studied the structure of the tasks to understand how the challenge brainwashes the players who take it. They further explain how the instructions of the game exploit fear psychology to prepare the victims for self-infliction of pain and suicide: Tasks 1 to 9 serve as the induction tasks, followed by the habituation tasks (10 to 25), followed by the final preparation tasks (26 to 50). Also, the authors found that teenagers with a complicated upbringing and negative life experiences are more likely to get involved in the game. Another paper~\cite{17}, by Jouni Smed et al., assesses the negative effects of gamification, such as game addiction and ethical issues. The Blue Whale challenge shows how gamification can be used for harmful purposes, that is, to engage the users and drive them towards suicide.
\section{Methodology}
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth,keepaspectratio]{architecture.jpg}
\caption{Architecture Diagram for Methodology}
\label{arch}
\end{figure}
Data was collected from three social media websites: VK, Instagram, and Twitter. These social networks were chosen for the analysis as data was readily available on them. Table \ref{Table:1} shows the duration for which posts from the social networks were collected. Fig. \ref{arch} shows the architecture diagram for the methodology followed in the study.
\begin{savenotes}
\begin{table}
\centering
\caption{Duration for which data from different social networks was collected(GMT)}
\label{Table:1}
\begin{tabular}{@{}lllll@{}}
\toprule
Social Media && First Post Date (dd-mm-yyyy) && Last Date of Data Collection (dd-mm-yyyy) \\ \midrule
VK && 01-03-2017 && 01-10-2017 \\
Instagram && 03-07-2013 && 01-10-2017 \\
Twitter && 18-08-2017 && 01-10-2017 \\ \bottomrule
\end{tabular}
\end{table}
\end{savenotes}
We collected data for the following hashtags: \#i\_am\_whale, \#curatorfindme, \#f57, \#wakemeupat420, \#iamawhale, \#I\_am\_whale, \#iamwhale, \#imwhale. Initially, the collected posts contained either of the two hashtags \#curatorfindme or \#i\_am\_whale. Query expansion was used to include the following hashtags in our analysis as well: \#f57, \#wakemeupat420, \#iamawhale, \#I\_am\_whale, \#iamwhale, \#imwhale. Hashtags like '\#bluewhalechallenge' and '\#bluewhale' were avoided as they contained a lot of noise (posts which were news related or spreading awareness about the challenge). We performed various analyses on the collected data, categorised into Temporal Analysis, Content Analysis, and Network Analysis.
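The hashtag filtering described above can be sketched as follows. This is an illustrative reconstruction, not the study's actual collection code; the post texts and helper names are hypothetical.

```python
# Illustrative sketch (not the study's actual pipeline) of keeping only
# posts that carry at least one hashtag from the expanded query set.
SEED_TAGS = {"#i_am_whale", "#curatorfindme"}
EXPANDED_TAGS = SEED_TAGS | {
    "#f57", "#wakemeupat420", "#iamawhale",
    "#I_am_whale", "#iamwhale", "#imwhale",
}


def extract_hashtags(text):
    """Return all whitespace-delimited hashtags in a post's text."""
    return {tok for tok in text.split() if tok.startswith("#")}


def is_bluewhale_post(text, tags=EXPANDED_TAGS):
    """Keep a post if it contains at least one query hashtag (case-insensitive)."""
    lowered = {t.lower() for t in tags}
    return any(h.lower() in lowered for h in extract_hashtags(text))


# Hypothetical posts: only the first carries a query hashtag.
posts = [
    "found a curator #curatorfindme #f57",
    "just whale watching photos #whales",
]
kept = [p for p in posts if is_bluewhale_post(p)]
```

Noisier tags such as \#bluewhalechallenge are simply absent from the set, mirroring the exclusion described above.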
\section{Statistics}
Table \ref{Table:2} shows the number of cases associated with the Blue Whale Challenge, including those who died, those who were saved and those who showed signs of playing the game. We referred to various articles from news sources; except for one news source from Chile with the highest global Alexa ranking of 130,040, the rankings of all other news sources fell under 45,000. The news source from Chile was a regional newspaper with a country-wise Alexa score of 721. Finding the exact number of deaths due to the challenge is extremely difficult. According to news sources, around 130 deaths are associated with the Blue Whale Challenge in Russia, and all these teenagers were known to be part of the same internet group~\cite{24,23}. According to the Russian investigation though, only 8 deaths were actually due to the Blue Whale challenge~\cite{25}. India ranks highest according to Google Trends in terms of searches related to the Blue Whale game over 12 months~\cite{13,11}. Table \ref{Table:3} captures the various demographics of the data we collected. Fig. \ref{fig:2} shows the devices used to post about the challenge on VK and Twitter. We can see that a lot of posts on VK have been made using the mobile website; this can be because the Russian social network was recently banned in India due to the challenge~\cite{20}. When we cross-checked the collected posts on 13th October 2017, a number of them had already been deleted. Fig. \ref{susp} shows the image contained in one such deleted post on VK. Table \ref{Table:3} shows these figures for the different social networks. For Twitter, comments stand for replies.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth,keepaspectratio]{carto.jpg}
\caption{Distribution of Blue Whale cases across the world}
\label{carto}
\end{figure}
\begin{savenotes}
\begin{table}
\centering
\caption{Number of cases related to the Blue Whale Challenge in different countries}
\label{Table:2}
\begin{tabular}{@{}llll@{}}
\toprule
Country & \#Cases & Country & \#Cases \\ \midrule
Argentina & 3 & Pakistan & 2 \\
Bangladesh & 2 & Portugal & 2 \\
Brazil & 4 & Russia & 130 \footnote{Around 130 suicides in Russia are linked to the Blue Whale Challenge~\cite{24,23}} \\
Chile & 3 & Saudi Arabia & 1 \\
China & 1 & Serbia & 1 \\
India & 10 & Spain & 1 \\
Ireland & 1 & Turkey & 1 \\
Italy & 2 & United States & 4 \\
Kenya & 1 & Uruguay & 1 \\ \midrule
& & TOTAL & 170 \\
\bottomrule
\end{tabular}
\end{table}
\end{savenotes}
\begin{table}
\centering
\caption{Data Description}
\label{Table:3}
\begin{tabular}{@{}lllll@{}}
\toprule
Social Media & \#Posts & \#Unique users & \#Comments & \#Deleted posts \\ \midrule
VK & 862 & 705 & 894 & 76 \\
Instagram & 1,137 & 736 & 751 & 386 \\
Twitter & 677 & 548 & 27 & 83 \\ \bottomrule
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth,keepaspectratio]{Device.png}
\caption{Platforms used to access various social media sites and post about the Blue Whale Challenge.}
\label{fig:2}
\end{figure}
\begin{figure}
\centering
\frame{\includegraphics[width=0.3\textwidth,keepaspectratio]{susp_account.png}}
\caption{The image posted on VK - initially collected as a part of our dataset - was deleted. Interestingly, the entire user account has been temporarily suspended. The text content of the post was \#i\_am\_whale.}
\label{susp}
\end{figure}
\section{Analysis}
We divide our analysis into 3 broad categories: Temporal, Content, and Network; with graphs from each of the 3 networks (where available).
\subsection{Temporal Analysis}
\begin{figure}[!htb]
\centering
\subfloat[][VK]{\includegraphics[width=0.55\textwidth,keepaspectratio]{vk_posts.png}}
\centering
\subfloat[][Instagram]{\includegraphics[width=0.55\textwidth,keepaspectratio]{insta_first_last.png}}\\
\centering
\subfloat[][Twitter]{\includegraphics[width=0.55\textwidth,keepaspectratio]{twitter_time.png}}
\centering
\caption{Time difference between first and last posts related to Blue Whale on different social networks}
\label{fig:3}
\end{figure}
\begin{figure}[!htb]
\centering
\subfloat[][Indegree - number of followers - of \\users posting about the Blue Whale \\Challenge]{\includegraphics[width=0.5\linewidth,keepaspectratio]{insta_indegree.png}}
\centering
\subfloat[][Outdegree - number of followings - of users posting about the Blue Whale \\Challenge]{\includegraphics[width=0.5\linewidth,keepaspectratio]{insta_outdegree.png}}
\caption{Instagram}
\label{fig:4}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.6\textwidth,keepaspectratio]{sum_posts.png}
\caption{VK - Total number of Blue Whale posts with respect to the number of friends of the users}
\label{fig:5}
\end{figure}
In general, we observe that people on VK and Twitter continued to post about the Blue Whale Challenge, or followed the challenge, even multiple days after their initial post. We deemed a post Blue Whale related if its text contained any of the relevant hashtags used in the data collection. Most of the content in Blue Whale related posts was hashtags. The statistics from Instagram show a very low follow-up time. Instagram is not removing all the posts; instead, if any of the sensitive hashtags is searched for, it asks the user whether they need support, but still gives an option to see the posts anyway. In Fig. \ref{fig:3}(a), it is seen that the time difference between the first and last Blue Whale related posts on VK by users ranges from 0 hours to 500 hours. Around 85\% of the users continued posting about Blue Whale for less than an hour. 98.6\% of the users have a time difference of less than 200 hours between their first and last Blue Whale related posts. In Fig. \ref{fig:3}(b), it is seen that the time difference between the first and last Blue Whale related posts on Instagram by users ranges from 0 hours to 22.5 hours. 94\% of the users have a time difference of less than 15 hours between their first and last Blue Whale related posts. In Fig. \ref{fig:3}(c), it is seen that the time difference between the first and last Blue Whale related posts on Twitter goes up to 2,000 hours. But similar to VK, 81.25\% of the users continued posting about the challenge for less than an hour. \\
In Fig. \ref{fig:4}(a), it is seen that around 70\% of users talking about Blue Whale challenge on Instagram have less than 200 followers. In Fig. \ref{fig:4}(b), it is seen that around 60\% of users talking about Blue Whale challenge on Instagram have less than 200 followings. In Fig. \ref{fig:5}, it is seen that 60\% of the users talking about the Blue Whale Challenge on VK have up to 50 friends. We observe that most of the users who posted about the Blue Whale Challenge on both VK and Instagram did not have a high number of Followers/Friends. On manual verification, we also found that most of these user IDs were actually new and only contained posts about the challenge. On Twitter, 27.01\% of the user accounts in our dataset were created in 2017.
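The per-user follow-up time underlying Fig. \ref{fig:3} is just the gap between a user's first and last Blue Whale related post. A minimal sketch, with hypothetical records in place of the collected data:

```python
# Sketch (assumed data layout, not the study's code) of the follow-up time:
# hours between each user's first and last Blue Whale related post.
from collections import defaultdict
from datetime import datetime

records = [  # (user_id, UTC timestamp) - hypothetical
    ("u1", "2017-08-18 10:00"), ("u1", "2017-08-18 10:40"),
    ("u2", "2017-08-20 09:00"), ("u2", "2017-08-28 09:00"),
]

times = defaultdict(list)
for user, ts in records:
    times[user].append(datetime.strptime(ts, "%Y-%m-%d %H:%M"))

followup_hours = {
    user: (max(stamps) - min(stamps)).total_seconds() / 3600.0
    for user, stamps in times.items()
}
# Fraction of users whose posting stopped within one hour of their first post
short_lived = sum(h < 1 for h in followup_hours.values()) / len(followup_hours)
```

With the toy records above, user u1 has a 40-minute follow-up and u2 an 8-day one, so half the users fall in the under-one-hour bin.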
\subsection{Network Analysis}
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth,keepaspectratio]{insta-comment-network2.png}
\caption{Instagram Comments Graph: If a user comments on the post by another user, there is a directed edge between them}
\label{fig:6}
\end{figure}
Fig. \ref{fig:6} and Fig. \ref{fig:7} depict the networks of the users we got from Instagram and VK respectively. Fig. \ref{fig:7}(a) shows links between users that are friends on VK and are present in the collected data. Fig. \ref{fig:6} and Fig. \ref{fig:7}(b) show links between users based on their comments on others' posts, that is, there is an edge from A to B if A commented on B's post. As we can see, the comment network for Instagram is much sparser than that for VK. The comment network on VK has an average clustering coefficient of 0.012. From Fig. \ref{fig:7}(a), we also observe that certain users posting about the Blue Whale Challenge are inter-connected on VK, that is, most of the users in this subset tend to be friends on VK and form communities. The average clustering coefficient of the VK friends graph comes out to be 0.262; this excludes the nodes which have no edges. On the other hand, we were not able to find any follower-following links amongst the users present on Instagram.
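The average clustering coefficient quoted above can be computed without any graph library. The following pure-Python sketch (the edge list is a toy example, not the VK data) averages the local clustering coefficient over nodes with at least one edge, matching the exclusion of edgeless nodes noted for the 0.262 figure:

```python
# Pure-Python average clustering coefficient for an undirected graph,
# averaged over non-isolated nodes. Toy data, not the collected VK graph.

def clustering(adj, node):
    """Local clustering coefficient: fraction of neighbour pairs that are linked."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
    return 2.0 * links / (k * (k - 1))


def average_clustering(edges):
    """Mean local clustering over all nodes that appear in the edge list."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return sum(clustering(adj, n) for n in adj) / len(adj)


# Toy graph: a triangle (1-2-3) plus one pendant node (4)
toy = [(1, 2), (2, 3), (1, 3), (3, 4)]
```

For the toy graph, nodes 1 and 2 have coefficient 1, node 3 has 1/3 and the pendant node 0, giving an average of 7/12.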
\begin{figure}[!htb]
\centering
\subfloat[][VK Friends Graph: If 2 users are friends then there \\is an edge between them]{\includegraphics[width=0.55\textwidth,keepaspectratio]{vk-friends-network2.png}}
\centering
\subfloat[][VK Comments Graph: If a user comments on the post by another user, there is a directed edge between them]{\includegraphics[width=0.55\textwidth,keepaspectratio]{vk-comments-network2.png}}
\caption{VK - Network Analysis}
\label{fig:7}
\end{figure}
\subsection{Content Analysis}
\subsubsection{Language Analysis:}
We used a port of Google's language detection library in python - \href{https://pypi.python.org/pypi/langdetect}{langdetect} - to determine the languages of the posts. English is the most commonly used language in the three social networks.
In Fig. \ref{fig:lang}(a), it is seen that Italian, Persian and German are used in an almost equal number of posts on Instagram. In Fig. \ref{fig:lang}(b), it is seen that Tamil and Hindi are used in quite a few posts on Twitter. In Fig. \ref{fig:lang}(c), it is seen that a good number of posts on VK are made in the Welsh, Romanian and Somali languages.
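The per-network distributions in Fig. \ref{fig:lang} reduce to a tally over per-post language labels. In the study each label came from langdetect's detect(); here the labels are hard-coded so the sketch stays self-contained:

```python
# Tally hypothetical per-post language labels (in the study, each label would
# come from langdetect's detect(post_text)) into a per-network distribution.
from collections import Counter

detected = ["en", "en", "it", "fa", "de", "en"]  # assumed labels
dist = Counter(detected)
top_language, top_count = dist.most_common(1)[0]
share = {lang: n / len(detected) for lang, n in dist.items()}
```

With these assumed labels, English dominates with half of the posts, echoing the finding that English is the most common language on all three networks.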
\begin{figure}
\centering
\subfloat[Instagram]{\includegraphics[scale=.12]{insta_lang.png}}
\subfloat[Twitter]{\includegraphics[scale=.12]{twitter_lang.png}} \\
\subfloat[VK]{\includegraphics[scale=.12]{vk_lang.png}}
\caption{Different languages on different social networks in which Blue Whale related posts are made}
\label{fig:lang}
\end{figure}
\subsubsection{Sensitive Information:}
People reveal sensitive information about themselves, such as their email addresses and phone numbers, so that curators can contact them. In Fig. \ref{fig:sensitive}, we see that around 70 phone numbers are revealed by users in posts and comments about the Blue Whale challenge on VK. Some email addresses and phone numbers are also revealed by users on Twitter and Instagram.
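Counting the two kinds of sensitive information in Fig. \ref{fig:sensitive} amounts to pattern matching over post and comment text. The patterns below are deliberately simplified stand-ins, not the study's exact expressions:

```python
# Simplified extraction of e-mail addresses and phone numbers from post text.
# The regexes are illustrative stand-ins for the study's actual patterns.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"(?<!\d)\+?\d[\d\s-]{7,14}\d(?!\d)")


def sensitive_info(text):
    """Return (emails, phone numbers) found in one post or comment."""
    return EMAIL_RE.findall(text), PHONE_RE.findall(text)


# Hypothetical post text, not a real account or number.
post = "curator contact me at whale57@example.com or +7 912 345 6789 #f57"
emails, phones = sensitive_info(post)
```

The lookarounds in the phone pattern stop a digit run inside a longer number or an e-mail's local part from being counted twice.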
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth,keepaspectratio]{content.png}
\caption{Amount and type of sensitive information shared across various social media sites to play Blue Whale game}
\label{fig:sensitive}
\end{figure}
\subsubsection{User Mentions}
144 unique user accounts were mentioned in the collected Twitter dataset. Most of these user-mentioned accounts are either famous users or users who are trying to stop this game; the list also contains the accounts of Twitter and Twitter security. But there were a few accounts that tweeted only about the Blue Whale Challenge. Some typical behaviours shown by these accounts are: (1) tweets containing little text and many hashtags so as to catch the attention of users, including curators, (2) occasional pictures of the results of tasks, such as carved arms and legs, and (3) low follow-up, that is, most users do not post again after 1-3 posts about the challenge. Fig. \ref{userMention} shows these behaviours in accounts found as user-mentions in the collected tweets. \\
We also came across an interesting account that was pretending to be a curator and asking users to private message or follow him/her if they wanted to join the game. Fig. \ref{fig:curator} shows the profile of this Twitter account.
\begin{figure}
\centering
\subfloat[Account that posted images of cut legs]{\frame{\includegraphics[scale=.4]{userMention1.png}}}
\hfill
\subfloat[Account that posted images of cut wrist]{\frame{\includegraphics[scale=.4]{userMention2.png}}} \\
\centering
\subfloat[Account that used popular Blue Whale related hashtags to catch attention]{\frame{\includegraphics[scale=.4]{userMention3.png}}}
\caption{Examples of some typical behaviours shown by user-mentioned accounts on Twitter}
\label{userMention}
\end{figure}
\section{Different types of Users involved in the game}
\begin{figure}
\centering
\frame{\includegraphics[width=0.5\textwidth,keepaspectratio]{curator.png}}
\caption{A user-mentioned account on Twitter pretending to be a curator}
\label{fig:curator}
\end{figure}
\subsection{Potential Victims}
\begin{figure}
\centering
\frame{\includegraphics[width=0.6\textwidth,keepaspectratio]{Twitter1.png}}
\caption{Twitter - A post containing contact information revealed by a user in pursuit of joining the game }
\label{twitter_post}
\end{figure}
Users who are depressed and ready to go to any extent to become a part of the game fall under this category. Such users often tend to reveal personal information like phone numbers, email addresses etc. so that curators can contact them. Fig. \ref{twitter_post} shows a tweet where a user revealed his/her contact information.
\begin{figure}
\centering
\subfloat[A WhatsApp Group specifically made for people willing to be a part of the Blue Whale Challenge]{\label{main:a}\frame{\includegraphics[scale=.3]{whatsapp.png}}}
\hfill\subfloat[A possible propagator.]{\label{main:b}\frame{\includegraphics[scale=.3]{vk2.png}}} \\
\centering
\subfloat[Pictures of cut arms of users who are taking the Blue Whale Challenge. One or more tasks of the challenge involve conducting self mutilation and then taking pictures of the same.]{\label{main:c}\frame{\includegraphics[scale=.3]{vk1.png}}}
\caption{VK - User Posts}
\label{vk_post}
\end{figure}
\subsection{Propagators and/or Pretentious Curator}
Users who post about the challenge with the intention of promoting it fall under this category. In extremely rare cases these propagators might themselves be curators, but this is unlikely, as actual curators would not risk revealing their identity. Fig. \ref{fig:curator} shows a Twitter account that claims to be a curator and asks users to private message or follow him/her to join the game. There have been cases where propagators or pretentious curators share images of the 50 tasks (Fig. \ref{fig:task}) along with links to APK files - which are not related to the game - misleading users into believing that an application for the game exists. Some propagators also tend to share images of victims, as shown in Fig. \ref{vk_post}(c). Fig. \ref{vk_post}(a) shows a propagator who shared a WhatsApp group link. Such posts often entice users into revealing their personal information.
\begin{figure}
\centering
\subfloat[][Carve F57 - A task in the game.]{\frame{\includegraphics[width=0.55\textwidth,keepaspectratio]{insta1.png}}}
\hfill
\subfloat[][Use of Blue Whale game related hashtags for popularity gain.]{\frame{\includegraphics[width=0.2\textwidth,keepaspectratio]{insta2.png}}}
\caption{Instagram - User Posts}
\label{insta_post}
\end{figure}
\subsection{Hashtag Hijackers}
There are users who use Blue Whale Challenge related hashtags in their posts just to seek attention and get reactions to their posts. Fig. \ref{insta_post}(b) shows how irrelevant hashtags are used in order to garner more views and attention for a post. These users contribute to the noise in the collected data.
\section{Conclusion}
The Blue Whale Challenge is a game that spread via online social media. It originated in Russia and is still on the rise, propagated by the people themselves. On social media, people use all kinds of keywords, hashtags, and images to catch the attention of curators and join the game; some might not even know what the game is about but still want to play it. Others do not post exclusively about the game but drop hints that they might be disturbed and looking for a way to end their life. A lot of sensitive information, such as phone numbers and email addresses, is revealed by people who want to take part in the challenge or by those propagating it. Only a small fraction of the people posting about the challenge follow up and post regularly.

Users interested in the Blue Whale Challenge are much better connected on VK than on Instagram, and the interaction between users on VK (commenting on each other's posts) is much greater than on Instagram.

The complexity of this game is that it is difficult to pinpoint which deaths were caused solely by it. People may have been depressed or affected by hardships before taking up the challenge, and people who committed suicide may have shown general symptoms that overlap with those of players. Hence, it is difficult to verify deaths that are claimed to have occurred because of the challenge. Moreover, conversations between curators and players are suspected to take place mainly through direct messages, most of which are deleted, like the posts on social media, and many user accounts are deactivated or suspended.
\section{Actions taken by Social Media Services}
Instagram shows a warning when people search for pictures related to the Blue Whale Challenge. It offers help to people who might be going through something difficult but at the same time gives an option to \textquotedblleft see posts anyway\textquotedblright~\cite{12} as shown in Fig. \ref{warn}(a). Along with the warning, Tumblr also lists counselling and anti-suicide resources as shown in Fig. \ref{warn}(b). \\
\begin{figure}[!htb]
\centering
\subfloat[][Instagram]{\frame{\includegraphics[width=0.3\textwidth,keepaspectratio]{insta-actions-taken.jpg}}}
\centering
\subfloat[][Tumblr]{\frame{\includegraphics[width=0.3\textwidth,keepaspectratio]{ev-okay.png}}}
\caption{Warning shown on searching about Blue Whale}
\label{warn}
\end{figure}
In order to stop the spread of the challenge, the government of India has set up a committee of experts. The government has also asked companies such as Google, Facebook, WhatsApp, Instagram, Microsoft and Yahoo to remove all links related to the Blue Whale Challenge~\cite{22}. The Supreme Court has additionally asked major Indian news channels to actively spread awareness about the challenge~\cite{21}. Access to the Russian social network VK has also been temporarily blocked in India~\cite{20}.
\section{Limitations}
Most online social media websites have been instructed to remove posts pertaining to the Blue Whale Challenge. Images of cut hands and blood are often removed and suspicious user accounts are suspended; many users also delete their own posts. This leads to a shortage of data even though the effects of the game might be widespread. In addition, the API limits of social networks restrict the amount of data that can be accessed in the first place. Finally, due to the lack of interaction with the individuals or those close to them, it is very difficult to know whether there are other reasons why a victim is or was involved in the challenge.
\section{Future Work}
We would like to study the characteristics of vulnerable users and divide them into different categories, such as curator (difficult to find), propagator, vulnerable player, and beginner player. Based on this, a confidence score can be developed that indicates the level of danger a user is in and what kind of intervention could get him/her out of the situation. If possible, we would also like to carry out a detailed geographic analysis to find where most of these users are located.
\section{Acknowledgements}
We would like to acknowledge the role of Srishti Gupta in providing her valuable inputs and suggestions throughout the project. We would also like to thank Kushagra Bhargava for getting us on track and evaluating our progress from time to time. Lastly, we would like to thank Vedant Nanda for his creative insights.
{\small
\bibliographystyle{ieee}
These constructors allow a PrintWriter to be created from a File object or by specifying the name of a file. You can specify a character encoding by passing its name in charSet. PrintWriter supports the print( ) and println( ) methods for all types, including Object; for an Object, it calls the object's toString( ) method and then outputs the result. PrintWriter also supports the printf( ) method, which works the same way as the version in the PrintStream class described earlier: it allows you to specify the precise format of the data. The format( ) method comes in two forms: the first uses the default locale, and the second lets you specify a locale. Both return the invoking PrintWriter, and format( ) works exactly like printf( ).
Java SE 6 adds the Console class. It is used to read from and write to the console, if one exists, and implements methods that simplify some types of console interactions, especially when reading strings from the console. A console will not be available in all cases; thus, if null is returned by System.console( ), no console I/O is possible. If an error occurs while accessing the console, it usually means that there has been a catastrophic system failure. The readPassword( ) method lets an application read a password without echoing what is typed. When reading passwords, you should "zero-out" the password array as soon as it is no longer needed, so that malicious code cannot obtain the password by scanning memory.
Domoney
Successful Entrepreneur and Award-Winning Horticulturist
David Domoney is a British entrepreneur and lifelong horticulturist who is encouraging businesses to embrace the power of nature to increase the motivation and productivity of their workforce.
An astute businessman, David has spent half his life heading up global buying teams for some of the UK's largest blue-chip superstore chains with millions of pounds of purchasing power at his fingertips. The other half of his life has been spent immersed in plants and nature.
Not only is David a Fellow of the Chartered Institute of Horticulture, he is also a decorated garden designer with a collection of 30 Royal Horticultural Society medals, including Chelsea Gold and Best in Category trophies, to his name.
In 2018, David was selected by HRH Prince Edward to receive the Award for Excellence in Horticultural Career Development in recognition of his outstanding accomplishments.
More About David
A rare combination of charm and professionalism, David's unique personality has led to a successful 20 year long career on national TV. A current presenter on Britain's most popular gardening TV show, ITV1's Love Your Garden, David has been appearing alongside Alan Titchmarsh before TV audiences of up to 4 million viewers per episode for the last 7 years. His role as the gardening expert on James Martin's Saturday Morning TV show and as the resident gardener on ITV1's This Morning, have made David Domoney a household name.
David's two great passions for business and horticulture are now married together in his position as CEO of his own company, Domoney Ltd, which sees David joining forces with top brands to promote the benefits of horticulture to the nation.
Set up in 1999, his company's clients have included corporate giants like John Lewis, BMI Airlines, Lexus, Bayer, The Ritz Piccadilly, Manchester Airport, Leviev Diamonds Bond Street, ITV, Trinity Mirror, House Beautiful Magazine, Manchester Children's Hospital and The Commonwealth War Graves Commission, to name but a few.
David is an accomplished motivational business speaker and charity auctioneer and has over 20 years' experience as an awards presenter and host. Offering a range of captivating talks to suit everyone's tastes, David is ready to share his energy and expertise with your audience, whoever they may be. With the polish of a business professional combined with the knowledge of a seasoned horticulturist, David represents a dynamite combination that will take any event by storm.
David says: "My keynote speeches furnish organisations with practical techniques for increasing the productivity, engagement and creativity of their workforce by harnessing the power of nature in the face of ever-expanding technology."
Q: Reading image from MS-Access database in Classic ASP

I am trying to read JPG images from an MS-Access database using the following code in Classic ASP:
Response.Expires = 0
Response.Buffer = TRUE
Response.Clear
Response.ContentType = "image/jpg"
Set cn = Server.CreateObject("ADODB.Connection")
cn.Open "Provider=Microsoft.Jet.OLEDB.4.0; Data Source=" & Server.MapPath("/database/database.mdb")
sqlString = "Select * from tblBusinessImages where fldID = " & request.querystring("id")
Set rs = cn.Execute(sqlString)
Response.BinaryWrite rs("fldImageData")
Response.End
But I keep getting an error telling that the browser can't read or display the image.
The database field 'fldImageData' (in the table 'tblBusinessImages') is an OLE field, and the image was saved into it by copy-paste, only for testing purposes at this time (could this be the wrong way?)
Now I know that MS-Access saves extra data in the BLOB object (as MSDN says here:
If any extraneous information is contained in the BLOB data, this will
be passed by this script, and the image will not display properly.
This becomes important when you realize that most methods of placing
images into BLOB fields place extra information in the form of headers
with the image. Examples of this are Microsoft Access and Microsoft
Visual FoxPro. Both of these applications save OLE headers in the BLOB
field along with the actual binary data.
)
My question is how do I read the RAW image data from a BLOB without the extra data/headers that MS-Access saves?
Thanks.
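One common workaround for the extraneous OLE header described above (this is not from the thread's eventual answer, which avoids the header at save time) is to scan the BLOB for the JPEG start-of-image marker and discard everything before it. A sketch in Python, assuming the stored image is a JPEG:

```python
def strip_ole_header(blob: bytes) -> bytes:
    """Return the JPEG payload of an Access OLE field by locating the
    JPEG start-of-image marker; the OLE wrapper length varies, so we
    search for the marker rather than assume a fixed offset."""
    soi = blob.find(b"\xff\xd8\xff")
    if soi == -1:
        raise ValueError("no JPEG start-of-image marker found")
    return blob[soi:]

# Invented example: fake OLE wrapper bytes followed by a JPEG stub.
blob = b"\x15\x1cOLE-WRAPPER" + b"\xff\xd8\xff\xe0...jpeg data..."
print(strip_ole_header(blob)[:4])
```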
A: After a day of work, I realized the problem was in the way the picture was saved to the database (manually, by copy-paste).
In order to save images to the database, the following code should be used:
Dim conn
Dim rsTemp
Dim sSQL
Dim mystream

' Load the image file as raw binary data.
Set mystream = Server.CreateObject("ADODB.Stream")
mystream.Type = 1   ' adTypeBinary
mystream.Open
mystream.LoadFromFile "D:\Desktop\My Downloads\compose1.jpg"

Set conn = Server.CreateObject("ADODB.Connection")
Set rsTemp = Server.CreateObject("ADODB.Recordset")
conn.Open "DRIVER=Microsoft Access Driver (*.mdb);DBQ=" & Server.MapPath("/database/database.mdb")

sSQL = "Select fldImageData from tblBusinessImages where fldID = 1;"
rsTemp.Open sSQL, conn, 3, 3   ' adOpenStatic, adLockOptimistic

' Write the raw bytes into the OLE field; no OLE wrapper is added this way.
rsTemp.Fields("fldImageData").AppendChunk mystream.Read
rsTemp.Update

rsTemp.Close
conn.Close
mystream.Close
Set rsTemp = Nothing
Set conn = Nothing
Set mystream = Nothing
And in order to read an image from an MS-Access database, this code should be used:
Dim conn
Dim rsTemp
Dim sSQL
Dim fldID

fldID = Request.QueryString("id")

' Only proceed for a numeric id; this also guards the SQL string below.
If Not fldID = "" And IsNumeric(fldID) Then
    Set conn = Server.CreateObject("ADODB.Connection")
    Set rsTemp = Server.CreateObject("ADODB.Recordset")
    conn.Open "DRIVER=Microsoft Access Driver (*.mdb);DBQ=" & Server.MapPath("/database/database.mdb")

    sSQL = "Select * from tblBusinessImages where fldID = " & fldID
    rsTemp.Open sSQL, conn, 3, 3

    If Not rsTemp.EOF Then
        Response.ContentType = "image/jpeg"
        Response.BinaryWrite rsTemp("fldImageData")
    Else
        Response.Write("File could not be found")
    End If

    rsTemp.Close
    conn.Close
    Set rsTemp = Nothing
    Set conn = Nothing
Else
    Response.Write("File could not be found")
End If
This way, the image data is saved as Long Binary Data in the OLE field. When read back, it is sent to the browser as valid image data.
GMAT Club forum, question M70-23 (posted by Math Expert Bunuel, 03 Sep 2018; difficulty 45\% (medium); answered correctly by 80\% of users, based on 5 sessions):

A circular pond is drained through a tiny hole in its middle, such that its diameter becomes smaller at a constant rate of $x$ meters per second. If it takes the pond $y$ seconds to fully drain, what was its initial circumference?

A. $\frac{2\pi y}{x}$
B. $\pi xy$
C. $\frac{\pi xy}{2}$
D. $2\pi xy$
E. $4\pi xy$

Official Solution (Bunuel): We'll go for ALTERNATIVE because there are variables in all the answers. If we pick $x = 1$ and $y = 2$, it takes 2 seconds for the pond to drain, and each second represents 1 meter of the diameter, so the diameter is $2 \times 1 = 2$ meters long. Since the circumference is $\pi \times \text{diameter}$, the circle's circumference is $2\pi$. Checking the answers: (A) $\pi$, no; (B) $2\pi$, possible, but we must check the other answer choices; (C) $\pi$, no; (D) $4\pi$, no; (E) $8\pi$, no. We are left with answer choice (B).

rajudantuluri (04 Sep 2018): Hi Bunuel, if B is the right answer, shouldn't it be $\pi xy$? [The solution as originally posted listed choice B as $xy$.]

Bunuel (04 Sep 2018): Yes, $\pi$ was missing. Edited. Thank you.

Intern (31 Oct 2018): Every second the diameter reduces by $x$, hence the radius reduces by $x/2$. So in $y$ seconds the radius reduces by $xy/2$, which means the original radius was $xy/2$. Hence the circumference is $2\pi \cdot xy/2 = \pi xy$.
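The picked-numbers solution can be confirmed algebraically, as the final poster sketches:

```latex
% The diameter shrinks at x meters per second and vanishes after y seconds,
% so the initial diameter and circumference are
\[
  d_0 = x\,y, \qquad C_0 = \pi d_0 = \pi x y \quad \text{(choice B)}.
\]
```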
Q: Implement 16-bit stored Goertzel algorithm in C

I can detect the frequency of 8-bit PCM audio with the Goertzel algorithm in C.
Now I want to detect 16-bit PCM audio, in C as well. But when I changed the code to 16 bits, I found it does not work as well as the 8-bit version.
I'm not very familiar with Goertzel, and I have not touched FFTs or anything like that for many years. Could someone share some code, a link, or other information to help implement the 16-bit version in C? Thank you very much.
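A likely culprit (an assumption, since the original code isn't shown) is how the raw bytes are converted to samples: 16-bit PCM must be decoded as signed two's-complement values with the correct endianness before feeding the filter, whereas 8-bit PCM is typically unsigned. Here is a minimal sketch of the Goertzel power computation on signed 16-bit little-endian samples, in Python rather than C, to show the algorithm and the decoding step:

```python
import math
import struct

def goertzel_power(samples, sample_rate, target_freq):
    """Squared magnitude of target_freq in samples (Goertzel filter)."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def decode_s16le(raw):
    # 16-bit PCM must be read as *signed little-endian* shorts; decoding
    # the bytes as unsigned (as is usual for 8-bit PCM) breaks detection.
    return struct.unpack("<%dh" % (len(raw) // 2), raw)

# Synthetic 941 Hz tone sampled at 8 kHz, 205 samples (DTMF-style sizes).
rate, n = 8000, 205
tone = [int(10000 * math.sin(2 * math.pi * 941 * i / rate)) for i in range(n)]
samples = decode_s16le(struct.pack("<%dh" % n, *tone))
print(goertzel_power(samples, rate, 941) > goertzel_power(samples, rate, 1336))
```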
Manon Brunet (born 2 February 1996, Lyon, France) is a French sabre fencer. She is a bronze medallist in the individual sabre at the Games of the XXXII Olympiad in Tokyo, a world champion, a three-time world championship medallist, and a six-time European championship medallist.
Biography
Manon Brunet was born on 2 February 1996 in Lyon. She began fencing at the age of 8.
In 2014, Manon made a successful debut in senior international competition as part of the French national team. The young fencer won a silver medal in the team event at the European Championships, and a month later became a silver medallist at the World Championships.
Two years later, Manon won another silver medal with the French team at the European Championships, losing to the Russian fencers in the final.
In 2017, the Frenchwoman finished third in the team event at the European Championships, and then won a bronze medal at the World Championships in the same event. A year later, Manon again took third place in the team sabre at the continental championship.
Best results
World Championships
Gold — 2018 World Championships (Wuxi, China) (team)
Silver — 2014 World Championships (Kazan, Russia) (team)
Silver — 2019 World Championships (Budapest, Hungary) (team)
Bronze — 2017 World Championships (Leipzig, Germany) (team)
European Championships
Silver — 2014 European Championships (Strasbourg, France) (team)
Silver — 2016 European Championships (Toruń, Poland) (team)
Silver — 2019 European Championships (Düsseldorf, Germany)
Bronze — 2017 European Championships (Tbilisi, Georgia) (team)
Bronze — 2018 European Championships (Novi Sad, Serbia) (team)
Bronze — 2019 European Championships (Düsseldorf, Germany) (team)
Notes
External links
Profile on the website of the International Fencing Federation (FIE)
French fencers
Bronze medallists at the 2020 Summer Olympics
Silver medallists at the 2020 Summer Olympics
Fencers at the 2016 Summer Olympics
Fencers at the 2020 Summer Olympics
World fencing champions
Why 'Scary Stories to Tell in the Dark' will be a Halloween classic
What's more classic than a motley gang of nerds trying to solve mysteries with an old dusty book?
Ya know, I just wanted to write a short blurb affirming that this movie is great, encouraging you to see it, etc. Your Intrepid Host is verbose to a fault, but I was willing to hold my delight back a bit because I thought I would sound repetitive in praising this little gem that far outshines other 'nostalgia bait'.
But alas, I peeked around, listened to some podcast reviews, and found myself most miffed.
Some of the negative—whiny—feedback has sounded an awful lot like this: "I guess you might like it if you're a kid." "I guess kids might like it."
Oh man.
Do you think.
That maybe.
This PG13 movie with a cast of high school aged protagonists named after a series of children's books MIGHT be aimed at kids? OMG guys, we have been bamboozled!
There's nothing wrong with liking a movie, disliking a movie, having mixed feels about a movie. But I do raise quite the eyebrow at this idea that just because the books the movie was based on are from 30 years ago, somehow the movie should've been about pleasing us rather than the kids stumbling upon those books in their local libraries today.
A PG-13 movie aimed at kids?
How can this be?
So I'm going to plunge head first into some of the stuff I loved about this movie. It's chock full of SPOILERS so read at your own risk.
I was pleasantly surprised by how the film stayed solid throughout in terms of tension, plot, and payoff. The dream team behind the film, Guillermo del Toro and André Øvredal, are known for their excellent visuals in creepy movies. However, their films' plots either hit perfectly or really miss the payoff.*
The movie isn't an anthology à la Creepshow or Cat's Eye, but has its own independent plot that pays homage to the original Alvin Schwartz folktales and Stephen Gammell's illustrations.
*YEP. YEP I SAID IT.
To sum up: our group of teens go into an abandoned, supposedly haunted house on Halloween night. They find a book that belonged to local legend Sarah Bellows. Sarah Bellows (just like every other inventive badass woman before her) was abused, abandoned, accused of killing kids, and probably murdered. Sarah Bellows used to tell scary stories to local children. Protagonist Stella, clearly finding a kindred spirit in Sarah Bellows, takes the book Sarah wrote her stories in. The stories start to come to life. Creepily and violently.
As a solid touch, the usual cliched solutions don't do the trick here. The book can't be given back or destroyed. It keeps coming back. As do its terrors.
I'll come clean. This does result in a true stinker of a line: "You don't read the book. THE BOOK READS YOU." But stick with me….
The movie takes place in a small town on Halloween in the early 70s. I really liked this choice of setting. It leans heavy into the nostalgia and Halloweenie feels that brought many of us to see the movie in the first place. There's old cars, drive-in movies showing vintage horror movies, homemade costumes, nurses in those creepy and demeaning white cap uniforms, etc.
The choice of time along with some other background elements brings a sense of "I am just a kid but the world is too heavy" feels that we can all identify with—and yes, cliché here: especially these days. It's not a coincidence that the movie takes place in October-November of 1972—aka when Nixon was reelected. The (really really ridiculously good-looking) protagonist Ramon is subjected to racist jeers. Vietnam is ongoing, and worse: we see some of the local boys happily signing up to 'shoot some commies'.
At the same time, our kids are Going Through Shit. Ramon lives transiently and his brother died in Vietnam. Stella (our main Weird Girl, whom I love) had her mom leave a few years ago, and her dad has collapsed into himself. Friend Augie has a mom with a new boyfriend and is often left alone for weekends. And then there's buddy Chuck, who… is cursed with an older sister and with never being as funny as the script insists he is.
I appreciate this, because it helps the framing of 'this is a kid's movie, but we're going to meet you at your level rather than talk down to you'. Which is exactly how the movie treats its horror elements. All things considered, the movie makes superb compromises between "we want to grow a garden of nightmares but without killing any kids."*
*However, ¾ of the way through, a sheriff's deputy is murdered which is NEVER ADDRESSED. A monster straight up snaps his neck, his body drops like a sack of potatoes, and no one later is like "oh shit, the missing kids were bad but now we've got some Dateline shit going on here."
"Boy, sure hope me being a racist shit to you doesn't come back to bite me in the ass via a malevolent supernatural monster."
There have been complaints that the movie didn't go with Michael Myers' hack and slash type deaths. But I found the individual not-quite-demises of the victims very inventive workarounds.
I gleefully recount my favorites to you now!
Boy, sure hope me being an all around asshat doesn't come back to bite me in the form of an evil scarecrow!
The town bully, Tommy, is first to go (of course). He is hunted down in the cornfield behind his house by our first cameo, Harold.
Hi Harold!
Tommy comes home drunk after a day of signing up to go to Vietnam and torturing younger peers, just like the budding psychopath he is.
His mom is mad over some quaint farm problems. Tommy didn't take some eggs over to the neighbors. So very inebriated Tommy is sent to deliver the eggs, across a cornfield at midnight on Halloween. Such as it was in the olden days, when horrible, completely avoidable tragedies befell kids. At least it kept life interesting.
Family scarecrow, Harold, was pretty fucked up looking to begin with, but Sarah Bellows' book brings him to life. With a stiff, shuffling gait, Harold stalks Tommy through the cornfield (GET IT).
As I watched this scene, I was enjoying the buildup but was concerned. Would there be payoff? Unlike apparently everyone else, I knew this was a kid's movie. I knew I wasn't going to get the original story's ending where Harold stretches out his victim's flayed skin to dry in the sun.* Would this be a bait and switch? Would Harold do a jumpscare and nothing more?
Oh no. Harold is sick of getting shit on by this punk. In fact, Harold came to play.
Town bully Tommy gets SKEWERED THROUGH THE GUTS with a pitchfork.
Thank god I saw this in a dark theater. If anyone had seen the utter glee on my face when that kid got run through, I suspect I would've been reported to someone.
I was delighted because it meant the movie didn't intend to pull punches any more than the bare minimum needed for that PG-13 rating.
Tommy doesn't get taken out in a slasher movie fashion. Instead, the pitchfork infects him, slowly turning him into a scarecrow. The film draws this out for a few minutes, treating us to full blown body horror. Tommy tries to run away, but he's choking up straw, sputtering and unable to scream as more and more straw consumes him from the inside out.
*You know! For kids!
The next morning, all anyone knows is Tommy is missing. Our heroes stumble upon the eerie inanimate scarecrow that he's become. That's where what's left of Tommy stays, for the whole movie. His parents never find out what happened to him. And because this isn't an episode on Investigation Discovery, no one logically says "well obviously the parents got sick of his shit and murdered him". Some characters even speculate he went off to his volunteer term in 'Nam early, which if true would be even creepier than his parents killing him.
"Wonder whatever happened to Tommy?"
"That little shit could be dead in a ditch for all I care."
Next up is friend Augie. Augie has the misfortune of getting matched to one of the many many many Scary Stories having to do with eating a corpse. Living out many a latchkey kid's evening of scavenging for food, Augie finds a stew sitting in the fridge. Most disturbingly, he just starts eating straight from the pot without heating the stew up. Clearly, Augie has some issues.
What we know, and what his friends try to warn him about, is that of course: a corpse's toe is in that stew and he is fated to take a big ol' slurp of worm chow.*
Again, this is a great scene where the movie just lets the inevitable draw out through two sequences.
First, you know that rotting toe is going in this kid's mouth. And you're just gonna have to sit there and squirm until it finally happens. We've all had that moment where something winds up in your mouth while eating that is not allowed in your mouth ever. All of you that have nightmares about a finger winding up in your fastfood burger? This is for you.
Once Augie inevitably takes that awful bite of offal, the next sequence of drawn out dread kicks off. Because as per tradition in such tales, the corpse wants that toe back.
*Where was the afterschool special about this danger?
Augie is hunted through his house by the horrid wraith that's one toe short. He eventually winds up hiding under his bed. But…. the corpse is under the bed with him. Augie is dragged into some hellscape by the corpse, never to be seen again.
Keith Morrison grows ever more suspicious of this town.
Buddy Chuck's older sister falls next. She is subjected to a nightmare I've actually dreamt before: thousands of tiny spiders popping out of your body. She doesn't get supernaturally disappeared, but does get whisked off to a mental asylum after understandably losing her shit from the experience.
No No No! No thank you! No!
The most inventive scene in the film is the one in which Chuck is subjected to The Red Room. The Red Room is adapted from the original Scary Story 'The Dream'. The scene takes place in a hospital, where the red of the Red Room is implemented through emergency lights. I found that really smart, and a great way to add atmosphere to a scene that really needed it. And we need that atmosphere because this story's particular antagonist needs a little extra help to appear frightening.
She's kinda adorable to be honest.
The Pale Lady looks almost like a living muppet. And hey, a living muppet sounds horrifying. Credit where credit is due: on the big screen, I found her to look very real. It's difficult to make a creature like this that is less immediately supernatural, but still has to look otherworldly enough to creep you out. She does not have an immediately threatening presence—she has a mellow, almost lukewarm friendly expression.
The Pale Lady meanders closer and closer to Chuck, who sees her every time he turns down a new hallway to try to escape.
Like a Bumble match that won't take the hint.
Finally, he's cornered. Face to face with the creature.
And I'm thinking "Uh…well okay, now what? She's just a big pale lady. What could she possibly–"
The film suddenly cut to a full length shot.
Chuck was now HALF ABSORBED INTO the Pale Lady. His head had vanished into her, his torso slowly following. No squirming or flailing. The kid was suddenly just half—and then fully—swallowed whole.
And will this quietly spawn many a Vore fetishist in our young audience? Most definitely, but at least they have the privilege of getting started with excellent Guillermo del Toro special effects.
There's more to enjoy about the film, there's a whole other monster I haven't even touched on yet, etc. But I've largely touched on the stuff that really sold me on the film.
One more thing: aside from all the ooky spooky stuff, the standout element was our main character, Stella. Stella is the weird young girl I wish I'd been together enough to embrace at that age. Of course, I'm always happy to see a girl getting to lead a movie for kids. But actress Zoe Colletti brings it. When she cries and gathers bravery through those tears, you believe her. Her delivery complements the creatures and horror, making the film work as a whole.
The movie is not perfect. There were things I distinctly did not care for. But I'm very glad this movie exists. This movie assured me that the tradition still holds of Halloweenie movies that treat kids with respect, like Hocus Pocus and The Halloween Tree. As an adult, you can absolutely enjoy and appreciate this movie—but you have to keep in mind that it isn't necessarily for you any more than Trick or Treating is.
Darn kids, ruining childhood things by enjoying them.
# Is the significance of an interaction more important than the fit of a model?

I am new to lme4 and I am not sure if I understand correctly. If I want to know whether there is an interaction between A and B, I have to write two models and then compare them with anova; the one with the lowest AIC is the one that fits the data better.

So I wrote these two models:

model1 <- glmer(accuracy ~ var1 + var2 + (1 | participant),
data = xdat, family = binomial())
model2 <- glmer(accuracy ~ var1 * var2 + (1 | participant),
data = xdat, family = binomial())

Model1 has a lower AIC value. Model2 did not converge. In the output, model2 tells me there is an interaction at one of the levels of var1 with one of the levels of var2.

Can I say there is no interaction because the best model does not include it, or should I say there is one because the model designed to test it includes it and says it is significant at one of the levels of the variables, even though the model did not converge?

• If the model has not converged, it may not be wise to trust results from it (depending also on the warning message you received for non-convergence). You should try to refit the model with a different optimization algorithm. In addition, you should specify more quadrature points, i.e., check the nAGQ argument of glmer(); you should set it to at least 11 or 13. You could also give the GLMMadaptive package a try.
• You do not need to look at the AIC to see which model is better. If the p-value from the likelihood ratio test is significant, it means that the more complex model (here, model2) provides a better fit to the data.
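The likelihood ratio test the answer refers to is what `anova(model1, model2)` computes in R for nested fits. As a rough illustration only (not lme4's implementation, and with made-up log-likelihood values), the sketch below computes the statistic and p-value for the one-extra-parameter case, where the chi-square survival function with 1 df reduces to erfc(sqrt(x / 2)):

```python
import math

def lr_test_1df(loglik_simple, loglik_complex):
    """Likelihood-ratio test for nested models differing by one parameter.

    Statistic: 2 * (logLik of the complex model - logLik of the simpler one).
    Under the null it is chi-square with 1 df, whose survival function
    P(X > x) equals erfc(sqrt(x / 2)).
    """
    # Clamp at 0: a non-converged complex fit can report a worse logLik.
    stat = max(2.0 * (loglik_complex - loglik_simple), 0.0)
    p_value = math.erfc(math.sqrt(stat / 2.0))
    return stat, p_value

# Hypothetical log-likelihoods for the additive and interaction models.
stat, p = lr_test_1df(-100.0, -97.0)
print(stat, round(p, 4))
```

Note that with factor predictors the interaction term usually adds more than one coefficient, so the test has more than 1 df and needs the general chi-square survival function; R's `anova()` handles that bookkeeping for you.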
Shinfield Players Theatre & Arts Centre is located in Shinfield, near Reading. Shinfield Players have been providing entertainment to the local community for 60 years, producing high-quality theatre since 1956.
THINKING AS A SCIENCE, BY HENRY HAZLITT
THINKING
AS A SCIENCE
BY
HENRY HAZLITT
[Illustration]
NEW YORK
E. P. DUTTON & COMPANY
681 FIFTH AVENUE
Copyright, 1916
BY E. P. DUTTON & COMPANY
CONTENTS
I • The Neglect of Thinking
II • Thinking With Method
III • A Few Cautions
IV • Concentration
V • Prejudice and Uncertainty
VI • Debate and Conversation
VII • Thinking and Reading
VIII • Writing One's Thoughts
IX • Things Worth Thinking About
X • Thinking as an Art
XI • Books on Thinking
THINKING AS A SCIENCE
I
THE NEGLECT OF THINKING
Every man knows there are evils in the world which need setting right.
Every man has pretty definite ideas as to what these evils are. But
to most men one in particular stands out vividly. To some, in fact,
this stands out with such startling vividness that they lose sight of
other evils, or look upon them as the natural consequences of their own
particular evil-in-chief.
To the Socialist this evil is the capitalistic system; to the
prohibitionist it is intemperance; to the feminist it is the subjection
of women; to the clergyman it is the decline of religion; to Andrew
Carnegie it is war; to the staunch Republican it is the Democratic
Party, and so on, _ad infinitum_.
I, too, have a pet little evil, to which in more passionate moments
I am apt to attribute all the others. This evil is the neglect of
thinking. And when I say thinking I mean real thinking, independent
thinking, hard thinking.
You protest. You say men are thinking more now than they ever were.
You bring out the almanac to prove by statistics that illiteracy is
declining. You point to our magnificent libraries. You point to the
multiplication of books. You show beyond a doubt that people are
reading more now than ever before in all history. . . .
Very well, exactly. That is just the trouble. Most people, when
confronted with a problem, immediately acquire an inordinate desire to
"read-up" on it. When they get stuck mentally, the first thing such
people do is to run to a book. Confess it, have you not often been in
a waiting room or a Pullman, noticed people all about you reading, and
finding yourself without any reading matter, have you not wished that
you had some?—something to "occupy your mind"? And did it ever occur to
you that you had within you the power to occupy your mind, and do it
more profitably than all those assiduous readers? Briefly, did it ever
occur to you to _think_?
Of course you "thought"—in a sense. Thinking means a variety of
things. You may have looked out of your train window while passing a
field, and it may have occurred to you that that field would make an
excellent baseball diamond. Then you "thought" of the time when you
played baseball, "thought" of some particular game perhaps, "thought"
how you had made a grand stand play or a bad muff, and how one day it
began to rain in the middle of the game, and the team took refuge in
the carriage shed. Then you "thought" of other rainy days rendered
particularly vivid for some reason or other, or perhaps your mind came
back to considering the present weather, and how long it was going to
last. . . . And of course, in one sense you were "thinking." But when
I use the word thinking, I mean thinking with a purpose, with an end
in view, thinking to solve a problem. I mean the kind of thinking that
is forced on us when we are deciding on a course to pursue, on a life
work to take up perhaps; the kind of thinking that was forced on us
in our younger days when we had to find a solution to a problem in
mathematics, or when we tackled psychology in college. I do not mean
"thinking" in snatches, or holding petty opinions on this subject and
on that. I mean thought on significant questions which lie outside the
bounds of your narrow personal welfare. This is the kind of thinking
which is now so rare—so sadly needed!
Of course before this can be revived we must arouse a desire for it. We
must arouse a desire for thinking for its own sake; solving problems
for the mere sake of solving problems. But a mere desire for thinking,
praiseworthy as it is, is not enough. We must know _how_ to think, and
to that end we must search for those rules and methods of procedure
which will most help us in thinking creatively, originally, and not
least of all surely, correctly.
When they think at all, the last thing men think about is their
own thoughts. Every sensible man realizes that the perfection of a
mechanical instrument depends to some extent upon the perfection of the
tools with which it is made. No carpenter would expect a perfectly
smooth board after using a dented or chipped plane. No gasolene engine
manufacturer would expect to produce a good motor unless he had the
best lathes obtainable to help him turn out his product. No watchmaker
would expect to construct a perfectly accurate timepiece unless he had
the most delicate and accurate tools to turn out the cogs and screws.
Before any specialist produces an instrument he thinks of the tools
with which he is to produce it. But men reflect continually on the
most complex problems—problems of vital importance to them—and expect
to obtain satisfactory solutions, without once giving a thought to
the manner in which they go about obtaining those solutions; without
a thought to their own mind, the tool which produces those solutions.
Surely this deserves at least some systematic consideration.
Some remarks of Ella Wheeler Wilcox under this head will bear quoting:
"Human thinking is still in as great a state of disorder and jumble
as language was before the alphabet, music before the scale was
discovered, printing before Gutenberg, or mathematics before Pythagoras
formulated its laws." "This systematization of all thought," she tells
us, would be "a more far reaching improvement than all the others, for
it will do for education, health, economics, government, etc., what the
alphabet did for language, movable type for printing and literature,
the scale for music, and the rules of arithmetic for calculation. Being
the exact counterpart of these in its particular field, its mission,
like theirs, will be to bring order out of chaos."
I believe Miss Wilcox exaggerates matters. Incidentally I for one
do not pretend to have discovered anything revolutionary. But the
importance of the subject warrants its formulation into as near
scientific form as we can bring it.
I beg no one to get frightened. Science does not necessarily mean test
tubes and telescopes. I mean science in its broadest sense; and in
this sense it means nothing more than organized knowledge. If we are
to find rules and methods of procedure, these methods must come from
somewhere—must be based on certain principles—and these principles can
come only from close, systematic investigation.
It may indeed be urged that we can think best by disregarding all
"rules," by not paying any attention to method. But the man who
maintains this must give reasons; and once he attempts this he himself
is bordering closely on the science of the matter. In short, the
settlement of even this question is part of the science of thinking.
And what is to be the nature of this science?
For our purposes, all sciences may be divided into two kinds:
_positive_ and _normative_. A positive science investigates the nature
of things as they are. It deals simply with matters of fact. Such a
science is physics, chemistry, psychology. A normative science is one
which studies things as they ought to be. As the name implies, it
seeks to establish a _norm_ or pattern which ought to be adhered to.
It studies means of reaching desired ends. To this class belong such
sciences as ethics, education, agriculture.
Now these normative sciences, with the exception of ethics, are nearly
always referred to either as "arts" or "applied sciences." To both
of these terms I technically but strenuously object. I object to the
term "art" to designate any set of organized rules for doing a thing,
because "art" also means the actual doing of that thing. And this
thing may be done, and often is done, in total ignorance of the rules
governing it. A man may possess the art of swimming—he may be able to
swim—without any previous instruction, without any knowledge of how he
ought to hold his body, arms and legs; just as a dog may do the same
thing.
I object also to the term "applied science," because to me this term
implies that the science it refers to is based on one positive science
only. I can think of no so-called applied science which is so based.
Hygiene, not alone dependent on physiology, must derive some of its
rules from the chemistry of foods, as well as from the sciences of
sanitation and ventilation, themselves normative. Agriculture is based
not only on biology and botany, but on chemistry and meteorology.
The science of thinking, then, if such a science there be, is
normative. Its purpose is to find those methods which will help us to
think constructively and correctly.
One more distinction and our preliminaries are over. There are two
other sciences with which the science of thinking is liable to become
confused; one positive, the other normative.
The positive science is that branch of psychology which deals with
the reasoning process and examines the basis of belief. We shall make
frequent use of this science in trying to find rules for thinking, but
it will not be the only science we shall use, nor will that science be
the subject of this book.
The normative science with which the science of thinking may become
confused is logic. Indeed, logic has sometimes been called the science
of thinking. Now for our purposes logic is a part of the science of
thinking, but it is not the part which we are primarily to consider.
Its function is merely negative; it consists in leading us from
error. The part of the science of thinking in which we are interested
deals with those positive rules which will help to make us creative
thinkers. . . .
Our ship is headed for the port Truth. Our mind is the engine, the
science of thinking the propeller, and logic the rudder. Without our
engine, the mind, the propeller of the science of thinking, which
transforms our mental energy most effectively into motion, would be
useless. Without the propeller, which gives motion, the rudder of logic
would be useless. But all three are needed to reach our goal.
* * * * *
And now I must bespeak a little patience. The next chapter, and the one
following it, are going to deal very largely with method and methods.
They will touch on classification, and a lot of other things to which
the plain man has an aversion; to which, at least, he usually evinces
no very active interest. But it is necessary to consider these things
in order to make our study complete.
II
THINKING WITH METHOD
Most of us, at those rare intervals when we think at all, do so in a
slipshod sort of way. If we come across a mental difficulty we try to
get rid of it in almost any kind of hit or miss manner. Even those few
of us who think occasionally for the mere sake of thinking, generally
do so without regard for method—indeed, are often unconscious that
method could be applied to our thought. But what is meant by method? I
may best explain by an example.
From somewhere or other, a man gets hold of the idea that the proper
subjects are not being taught in our schools and colleges. He asks
himself what the proper subjects would be. He considers how useless
his knowledge of Greek and Latin has been. He decides that these two
subjects should be eliminated. Then he thinks how he would have been
helped in business by a knowledge of bookkeeping, and he concludes
that this subject deserves a place in the curriculum. He has recently
received a letter from a college friend containing some errors in
spelling. He is convinced that this branch of knowledge is being left
in undeserved neglect. Or he is impressed by the spread of unsound
theories of money among the poorer classes, and he believes that
everybody should receive a thorough course in economics and finance.
And so he rambles on, now on this subject, now on that.
Compare this haphazard, aimless thinking with that of the man of
method. This man is confronted with the same general situation as our
first thinker, but he makes his problem a different one. He first asks
himself what end he has in view. He discovers that he is primarily
trying to find out not so much—what subjects should be taught in the
schools? as—what knowledge is of most worth? He puts the problem
definitely before himself in this latter form. He then sees that the
problem—what knowledge is of _most_ worth?, implies that what is
desired is not to find what subjects are of worth and what are not, but
what is the _relative_ value of subjects. His next step, obviously,
is to discover a standard by which the relative value of subjects can
be determined; and this, let us say, he finds in the help a knowledge
of these subjects gives to complete living. Having decided this, he
next classifies in the order of their importance the activities which
constitute human life, and follows this by classifying subjects as they
prepare for these activities.[1]
Needless to say, the results obtained by this thinker will be
infinitely more satisfactory than those arrived at by his unsystematic
brother. Method, then, is essential. But how are we to apply it in all
cases?
Now there are methods without number, and in many cases a problem will
require a method all its own; but we here purpose to take up only those
most general in application.
Before considering these methods of thinking, however, it would
be well to ask ourselves what thinking is. As stated before, the
term is loosely used to cover a wide range of mental processes.
These processes we may roughly divide into memory, imagination and
reasoning. It is the last only with which we have to deal. I admit that
development of the memory is desirable. I admit that development of the
imagination is equally desirable. But they are not the subject of this
book. By "thinking" I mean reasoning. And our present purpose is to
find the nature of this process.
Modern psychologists tell us that all reasoning begins in perplexity,
hesitation, doubt. "The process of reasoning is one of problem solving.
. . . The occasion for the reasoning is always a thwarted purpose."[2]
It is essential we keep this in mind. It differs from the popular
conception even more than may appear at first sight. _If a man were to
know everything he could not think._ Nothing would ever puzzle him, his
purposes would never be thwarted, he would never experience perplexity
or doubt, he would have no problems. If we are to conceive of God as an
All-Knower, we cannot conceive of Him as a Thinking Being. Thinking is
reserved for beings of finite intelligence.
Were we to study the origin and evolution of thinking, we would
doubtless find that thinking arose in just this way—from thwarted
purposes. If our lives and the lives of our animal ancestors had always
run smoothly, if our every desire were immediately satisfied, if we
never met an obstacle in anything we tried to do, thinking would never
have appeared on this planet. But adversity forced us to it.
Tickle a frog's left leg, and his right leg will immediately fly up
and scratch it. The action is merely what psychologists would call a
"reflex." Absolutely no thinking takes place: the frog would do the
same thing if you removed its brain. And if you tickle its right leg
its left leg would fly up to scratch. But if you tickled both legs
at once they could not both fly up and scratch each other. It would
be a physical impossibility. Here, then, is a difficulty. The frog
hesitates; thinking steps upon the scene. After mature deliberation the
frog solves his problem: he holds his left leg still while he scratches
it with his right, then he holds his right leg still and scratches
that with his left.
We cannot, then, think on "general principles." To try this is like
attempting to chew laughing gas. To think at all requires a purpose,
no matter how vague. The best thinking, however, requires a definite
purpose, and the more definite this purpose the more definite will be
our thinking. Therefore in taking up any special line of thought, we
must first find just what our end or purpose is, and thus get clearly
in mind what our problems are.
Advising a man to ask himself what his problems are may seem absurd.
But it is just this confusion as to what they want to know which
has driven men into error time and time again. The history of the
never-ending philosophical controversy between "materialism" and
"idealism" is largely a history of different ways of stating the issue;
the progress made is mainly due to the increasing definiteness with
which it has been stated.
One of the most frequent sources of confusion in stating questions
is in failure to distinguish between what is and what ought to be.
Considering woman suffrage a man will ask himself "What is woman's
sphere?," when he really wants to know not what woman's sphere actually
is, but what it ought to be. Our first step, then, is to get our
problem or problems clearly in mind, and to state them as definitely as
possible. A problem properly stated is a problem partly solved.
What we will do next depends on the nature of the question. In the
example "What knowledge is of most worth?" we proceeded to look for
a criterion of worthiness. And this was really a re-stating of the
question. For instead of asking ourselves "What knowledge is of most
worth?," we began asking "What knowledge best prepares for complete
living?"
Our next move was to classify. This is essential not only to systematic
reasoning but to thinking of any kind. Classification is the process
of grouping objects according to common qualities. But as almost all
objects differ in some qualities and almost all have some qualities
in common, it follows that, contrary to common belief, _there is no
one classification absolutely essential to any group of objects_. An
infinite number of classifications may be made, because every object
has an infinite number of attributes, depending on the aspect we take
of it. Nor is any one aspect of a thing "truer" than any other. The
aspect we take depends entirely on the purpose we have in mind or the
problem we wish to solve. As William James pointed out:
"Now that I am writing it is essential that I conceive my paper as a
surface for inscription. If I failed to do that I should have to stop
my work. But if I wished to light a fire and no other materials were
by, the essential way of conceiving the paper would be as combustible
material; and I need then have no thought of any of its other
destinations. It is really all that it is: a combustible, a writing
surface, a thin thing, a hydrocarbonaceous thing, a thing eight inches
one way and ten another, a thing just one furlong east of a certain
stone in my neighbor's field, an American thing, etc., etc., _ad
infinitum_."[3]
And if the reader insist that these qualities are merely "accidental,"
and that what the thing really is, is just _paper_ and nothing else,
the reply is that the reader is intellectually petrified; that though
"paper" may be our commonest title for it and may suggest our usual
purpose with it, yet that purpose and this title and the properties
which this title suggest have in reality nothing sacramental about them.
So because you have classified something from one aspect do not
imagine that you are necessarily precluded from classifying it from
any other. A man who is studying the theory of money may divide the
medium of exchange into standard money and credit currency. But this
need not keep him from viewing it as coins, government notes, and
bank currency, nor should it prevent him from classifying it into,
say (1) hand-to-hand money, (2) written or printed orders of one
party to pay specified sums to another, and (3) book accounts.[4]
All these classifications will be true; all may be useful for a full
comprehension. Every classification should of course be logical; but it
is far more essential that it be utilizable.
And while we are treating of utility, we might note that this
_pragmatic_ method can be applied with profit to nearly all our
positive problems. Before starting to solve a question—while deciding,
for instance, on the validity of some nice distinction in logic—we
should ask ourselves, "What practical difference will it make if
I hold one opinion or the other? How will my belief influence my
action?"—(using the word "action" in its broadest sense). This may
often lead our line of inquiry into more fruitful channels, keep
us from making fine but needless distinctions, help us to word our
question more relevantly, and lead us to make distinctions where we
really need them.
We are now ready to consider in order a number of constructive methods
in thinking.
One method applicable to almost all problems is what we may call
either the _deductive_ or the _à priori_ method. This method reaches a
conclusion without observation or experiment. It consists in reasoning
from previous experience or from established principles to particular
facts. It may, however, be used to confirm observation and experiment
as well as to take their place. Take the all-important question in
biology of whether or not specific characteristics acquired by an
animal during its lifetime are inherited by its offspring. The a priori
method would examine the structures of the body, the germ plasm from
which the offspring develops, and the relation between them, and would
ask just how a specific change in the body could affect the germ. If it
were found that the tissues that are to continue the race were set off
so completely from the structures of the body as to make inconceivable
any manner by which they could be influenced by changes in these
structures, then this method would decide that acquired characteristics
are not transmitted.
Let us take another example. Both the supporters and opponents of
woman suffrage have often decided the question without consulting at
all the actual results achieved in the States where women vote. They
have settled the question to their own satisfaction merely on a priori
grounds. They have considered woman's supposed mental qualities as
compared with man's, and have decided on her fitness for the ballot
solely from these considerations. It must be remembered, however, that
before women were admitted to suffrage anywhere, deductive or a priori
reasoning was the only kind possible.
It is often helpful to look at a problem from the viewpoint of
different sciences. A problem in political science will very likely
have an economic aspect, whether it concerns taxation, tariff, trusts
or the ownership of land, and so we may look at the question solely
from the viewpoint of economics. But the problem may also have an
ethical aspect. If it is proposed to pass a universal prohibition law,
you may ask, "Has the Government the right to interfere in this way
with personal liberty?" Again, we could take a psychological view: we
would decide from our knowledge of human nature just what the effect of
an alcohol prohibition law would be—whether it would not drive men to
even more dangerous drugs, such as morphine and opium.
And now we come to a whole host of effective methods, all of which may
be classed as comparative. The comparative method is as old as thought
itself, but it is strange that even scientists did not begin to use
it consciously and consistently until almost the present generation.
Nowhere is it better illustrated than in modern psychology. Most of
the so-called branches of psychology are merely different forms of the
comparative method of treatment. "Abnormal psychology" is merely a
comparison of abnormal mental types with normal mental types for the
light they throw on each other. "Child study" is a comparison of the
mind of the child with that of the adult. "Animal psychology" is a
comparison of the actions of animals with each other and with those of
man. And none of these methods is of any value except in so far as it
makes use of comparison.
Often consciously used in the consideration of problems is the
so-called historical method. This method, as its name implies, consists
in obtaining knowledge of a thing by considering its past record. The
word history is popularly used in so narrow a sense, however, being
restricted only to the history of nations, and often merely to the
political history of nations, that we can avoid confusion by calling
this method the evolutionary. In the final analysis the method is
comparative, for it really consists in comparing a thing at one period
of development with itself at another period.
Let us take our example from political science. The historical
method, in its popular sense, has been so much used here, even to the
exclusion of other methods, that it would seem needless to speak of
it. But often the method has been abused and often it has not been
given broad enough treatment. It traces the growth of an institution,
or of an idea—personal liberty, say,—through successive periods.
It notes what the path has been, and judges of the probable future
tendency. But a far broader outlook than we get from this narrowly
conceived "historical" method is furnished by evolutionary sociology.
Here we inquire into the origin of society and of the various trades,
industries, professions and pursuits of all kinds, and to do this we go
far into prehistoric times.
Nowhere is the evolutionary method more strikingly seen than in
biology. Since Darwin's great theory was promulgated the science has
gone forward by leaps and bounds. We have derived untold benefit from
a comparison of man and animals in the light of this hypothesis;
even study of the development of individual man has been aided. The
discovery of the _fact_ of evolution constituted an incalculable
advance, but the method for study which it furnished was of even
greater importance.
I have spoken of the comparison of man and animals "in the light of
this (evolutionary) hypothesis." This brings us to a point which must
be kept in mind in practically all observation. We are often exhorted
to "observe." Presumably we are to do this "on general principles."
Such advice is about as foolish as asking us to think on general
principles. Imagine for the moment what would happen if you started
right now to "observe" as much as you could. You might begin with this
book and notice the size of the type, the amount of margin, the quality
of the paper, the dimensions of the page, the number of pages. But
you have by no means exhausted the number of properties possessed by
this book. You must observe that it is also combustible, that it is
destructible, that it is machine made, that it is American printed,
that it is such and such a price, that it weighs so many ounces,
that it is flat, that it is rectangular, that its thickness is so
much. . . .
The absurdity is obvious. If we started out merely to observe, with no
definite purpose in mind, we could keep it up forever. And get nowhere.
Nine out of every ten observations would never be put to use. We would
be sinfully wasting our time. To observe most profitably, just as to
think most profitably, we must have a definite purpose. This purpose
must be _to test the truth of a supposition_. A concrete example will
make this clear.
A man has been shipwrecked on an island and believes himself to be
alone there. One day, as he is walking along the beach, he discovers
footprints. How did they get there? His first assumption is that they
are his own. It occurs to him, however, that he had not been near this
spot for over a week, and that yesterday's storm would have washed any
footprints away. This objection is confirmed by making a footprint
himself and comparing it with the one observed, and noticing that they
differ markedly. The footprints being those of some one else, how did
the man who made them get there? The first supposition is that he
came in a boat. The idea of a small boat is dismissed because of the
assumed great distance of this island from other land. Therefore the
man must have come in a large vessel. But the footprints lead to a wet
part of the sand and the tide is just going down. In this case they
are very recent—made not more than a half hour ago. This being so the
man who made them could not have had time to get back to any ship and
sail out of sight. If he came in a ship it should be still in view. The
discoverer of the footprints climbs a tree from which he can view the
sea around the entire island. He can sight no vessel. The supposition
or hypothesis that the unknown came in a ship is abandoned. Then the
suggestion comes that the unknown has been on the island during the
entire time that the shipwrecked man thought himself alone. This
suggestion is tested in a manner similar to the others. . . .
The example sums up roughly the general process of all thought, and
brings out the motive and value of observation. Let us analyze it.
The first thing to happen is the arousal of a feeling of perplexity,
the appearance of a problem. The man has been shambling along,
doubtless "thinking" in that loose sense referred to. He has perhaps
kicked several stones loose that would have set a geologist worrying,
and has picked branches from bushes which would have puzzled a
botanist. But this man has not had his curiosity aroused until he has
come to these footprints. His thinking starts with his perplexity.
After this doubt has been aroused the most obvious solution suggests
itself—"my own footprints." But if true, this suggestion involves the
co-existence of other facts, some of which are known and some of which
may be determined. Thus, _if_ they were his own footprints, it must,
among other things, necessarily follow (1) that he had been at that
spot before, (2) that nothing had happened since that time to remove
the prints, (3) that the footprints corresponded to his own. The first
consequence involved—that he had been there before—was a fact, but
the others were not, and so the suggestion was dropped. Then a second
hypothesis occurred—"the man came in a ship"—and this was tried out
in a similar way. Notice that in each case the consequences dependent
on the truth of the suggestion are tried out (1) by memory, (2) by
observation or experiment. Memory came when he thought of the last time
he had walked near the beach and of yesterday's storm. Observation came
when he compared his footprint with the one seen, when he followed the
footprints along the sand and noticed where they led, when he climbed a
tree and looked for a ship. There were a number of other things which
he could have observed. He might have noticed the texture of the sand,
what kind of a tree he was climbing, what sort of clouds were in the
sky. But he did not observe these interesting things simply because
they would throw no light on the truth or falsity of his supposition.
In another problem one of these facts might have been of value.
It is almost possible to sum up the whole process of thinking as the
occurrence of suggestions for the solution of difficulties and the
testing out of those suggestions. The suggestions or suppositions
are tested by observation, memory, experiment. Supposition and
observation alternate. The first facts observed—in the case foregoing,
the footprints—make the problem, they suggest the supposition. A
supposition is that the man came in a boat. _If_ the man came in a boat
such and such would be the case—the boat would still be visible, etc.
If the boat is not visible the supposition is given up and another one
made; if the boat is visible the supposition is confirmed. This is a
case of simple and rudimentary thinking, but it illustrates roughly the
process of thought on even the most complicated problems of science.
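The supposition-and-test cycle just summed up can be put in the form of a short sketch. Everything named below, the hypotheses, their predicted consequences, and the observations alike, is invented to match the story; only the loop itself mirrors the process described:

```python
# A minimal model of the supposition-and-test cycle in the footprint
# story. Each hypothesis entails certain consequences; observation
# either confirms them or rejects the hypothesis outright.

observations = {
    "print matches my own foot": False,  # his print differs markedly
    "ship visible from treetop": False,  # he sighted no vessel
}

hypotheses = {
    "they are my own footprints": ["print matches my own foot"],
    "the man came in a ship":     ["ship visible from treetop"],
    "he has been here all along": [],  # no consequence yet tested
}

def surviving(hypotheses, observations):
    # A supposition survives only while every consequence it entails
    # is confirmed; a single failed observation rejects it.
    return [name for name, consequences in hypotheses.items()
            if all(observations.get(c, False) for c in consequences)]

print(surviving(hypotheses, observations))
# → ['he has been here all along']
```

The surviving supposition is not thereby proved; it simply remains to be tested in its turn, as the text says.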
The methods we have been discussing may all be considered simply as
means for helping good suggestions occur to us.
Let us illustrate by considering a few methods of rather restricted
application. We are often aided in the solution of a problem by asking
its opposite. If we ask ourselves "What constitutes gracefulness?" we
may find ourselves at a loss for suggestions, because gracefulness
always seems "so natural." But if we ask its opposite, "What
constitutes awkwardness?," suggestions are more apt to occur. If we
find, for instance, that awkwardness consists in undue bodily effort
in making a movement, we may assume that gracefulness consists in ease
of movement. In the same way the question of what makes us forget may
be helped by asking ourselves what makes us remember, and light may be
thrown on the causes of success in business and in life by a study of
the causes of failure.
The method of analogy likewise encourages suggestions. Analogy consists
in noting certain likenesses between things, and assuming that they
also possess other common qualities. Striking use of analogy is made
in dealing with the planet Mars. At each pole there are great white
patches. The size of these varies markedly with the seasons, which
suggests that like the earth, Mars has great areas of ice and snow at
its two poles which melt and re-form. The general surface is reddish,
but three-eighths of it is covered by blue-green tracts, and these are
usually inferred to be seas. These again are connected by an intricate
system of blue-green lines, which some scientists believe to be
canals, but on this there is much controversy. In Mars we have at once
an illustration of the possibilities and dangers of analogy.
In the whole discussion of constructive method thus far, I have left
out the two most common and useful methods of all. The first of
these we may designate by a somewhat formidable title: _empirical
observation_. Empirical, at least for our present purposes, means
merely that which comes within experience. But the term is generally
opposed to scientific. Thus Dewey gives an example: "A says, 'It will
probably rain to-morrow.' B asks, 'Why do you think so?' And A replies,
'Because the sky was lowering at sunset.' When B asks, 'What has that
to do with it?' A responds, 'I do not know, but it generally does rain
after such a sunset.' He does not perceive any _connection_ between
the appearance of the sky and the coming rain; he is not aware of any
continuity in the facts themselves—any law or principle, as we usually
say. He simply, from frequently recurring conjunction of the events,
has associated them so that when he sees one he thinks of the other."[5]
This, however, is not what I mean to imply by the term empirical
observation. I mean rather thinking on the basis merely of facts
which occur in the natural course of events, which have not been
systematically produced by ourselves or others for the purpose
of solving a problem. Logicians usually call this method simply
_observation_, and oppose it to experiment. But I object to calling
this simply observation because experiment itself is really
observation, only in one case we observe merely events which happen to
occur, and in the other we observe the results of events _which we have
made occur_. The true way of distinguishing these two methods would
be to call one _empirical observation_, and the other _experimental
observation_.
This empirical method—if indeed I am justified in calling it a
_method_—is the most common in all thinking. To give examples of it
would be to show how men generally think. But the method has real
value, and may even be the most important of all, for if we thought
without it our ideas would doubtless be original, but very dangerous.
Let us apply it to some of the problems considered under other methods.
Empirical observation is used where experiment is impossible—often,
unfortunately, where experiment is merely inconvenient. In political
science the empirical method would consist in noting the effect of
certain laws,—e.g., tariffs of different countries and of the same
country at different periods—and noting economic conditions at the time
the different tariffs were in effect. Allowance would be made for other
factors which could influence the country's economic condition, and the
effect of the tariff could then be determined.
The empirical method of dealing with meteorology, the science of
weather, would consist in making a study of cloud formations, wind
velocity, moisture in the air, temperature, etc., and noting what
conditions usually or perhaps invariably followed certain of these
conditions. From this, conclusions could be drawn as to what weather
to expect following certain conditions.
But valuable as empirical observation is, and often as we must use it,
it should never be employed when we can experiment. When the empirical
method is rightly used allowance always has to be made for certain
irrelevant factors. But "making allowances" is always sheer guess
work. _The experimental method consists not in making allowances for
certain factors, but in eliminating those factors._ In our example
from political science experiment is practically impossible, because
the factors which may influence economic conditions are innumerable,
and even were they few, no country could survive the dangers of being
experimented upon—to say nothing of its permitting it. Experiment is
similarly impossible in dealing with weather conditions directly. It is
impossible in astronomy.
But it could be applied quite easily to most questions. Suppose you
wanted to determine beyond question which of two methods of teaching a
given subject was the better. We shall assume for the moment that you
have unlimited time and money to experiment. It may be thought that
we could settle this simply by teaching one person according to one
method and another person according to the other, and that we could
determine the relative merits of each method from the progress made by
each pupil. This, however, would be practically of no use whatever.
One pupil might be naturally brighter than the other, and so would
naturally learn quicker, even were he taught by an inferior method.
To make the experiment of any use we should first take two _groups_
of pupils—the larger the better. For it is obvious that if we take a
great number of pupils and place them in two groups the differences
between the individuals will tend to offset one another. Let us say the
subject is one in which the progress can be quantitatively measured,
say typewriting, and let us suppose there are fifty pupils in each
group. If after a given time _all_ the pupils in one group had attained
a greater speed with accuracy than _all_ the pupils in the other,
the test would be almost unquestionable. This would be even more
conclusive if the groups were reasonably well balanced. For if all of
one group were men and all of the other were boys, the men might make
more rapid progress than the boys even with a less efficient system.
But it should be easy to divide classes and groups so as to have a
reasonable balance of intelligence between them. The probable result
of any experiment would be that in neither class would all the pupils
make more progress than all the pupils of the other, though you might
find that the preponderating majority in one class improved faster than
those in the other, and this would probably be sufficient to indicate
the superiority of one method, even though one or two pupils in the
second group progressed faster than one or two in the first.
I say "probably" because there are still many irrelevant factors which
might influence the result. For instance, if you had a different
teacher for each group, one group might make greater progress not
because of the method but because of the teacher. This means either
that one teacher should teach both groups, or that we should multiply
the number of groups and the number of teachers, and have half the
teachers teaching half the groups by one method, and the other half
teaching by the other method. Of course here too the more we could
multiply the number the better it would be. Even then there might be
some reasonable question as to the validity of the experiment, for it
might be that one method would tend to encourage faster progress at
the beginning, but that the other would lead to greater progress in
the long run. This could be determined only by carrying our experiment
over a long period. And we might still have irrelevant factors, for the
machines on which one group learnt to typewrite might be superior to
those on which the other group learnt, and this factor would have to be
eliminated in a similar way to the others.
The experimental method has been well summed up by Thomson and Tait in
their _Natural Philosophy_:
"In all cases when a particular agent or cause is to be studied,
experiments should be arranged in such a way as to lead if possible
to results depending on it alone; or, if this cannot be done, they
should be arranged so as to increase the effects due to the cause to be
studied till these so far exceed the unavoidable concomitants, that the
latter may be considered as only disturbing, not essentially modifying
the effects of the principal agent."
In all experiments one must exercise ingenuity in finding other causes
besides the one to be studied which may possibly influence a result,
and in eliminating these. It might benefit the reader considerably if
he were to think out for himself how he would apply experiment in its
most thoroughgoing form to solve a given question, say the inheritance
of acquired characteristics.
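The group-comparison experiment described above can be sketched as a toy simulation. Every figure here (the ability scores, the gains of ten and six words per minute conferred by the two methods, the group sizes) is invented for illustration; the point is only that individual differences tend to offset one another as the groups grow larger:

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

def pupil_speed(ability, method_gain):
    # A pupil's final typing speed: innate ability, plus the gain
    # conferred by the teaching method, plus individual noise.
    return ability + method_gain + random.gauss(0, 3)

def observed_gap(pupils_per_group):
    # Draw one pool of abilities and split it evenly, so that the
    # two groups are reasonably well balanced, as the text advises.
    n = pupils_per_group
    abilities = [random.gauss(40, 8) for _ in range(2 * n)]
    group_a = [pupil_speed(a, 10) for a in abilities[:n]]  # method A
    group_b = [pupil_speed(a, 6) for a in abilities[n:]]   # method B
    return sum(group_a) / n - sum(group_b) / n

# With two pupils per group, individual differences can swamp the
# method; with fifty per group they largely cancel, and the observed
# gap settles near the true difference of four words per minute.
small = observed_gap(2)
large = observed_gap(50)
print(f"gap with  2 pupils per group: {small:+.1f}")
print(f"gap with 50 pupils per group: {large:+.1f}")
```

Multiplying teachers, machines, and trial periods, as the text goes on to recommend, amounts to eliminating in the same fashion each remaining irrelevant factor.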
* * * * *
I have now cited enough methods to at least indicate what "thinking
with method" means. To satisfy a certain human craving all of these
have been named, though sometimes arbitrarily. Of course each may have
to be modified to some extent to adjust it to different problems. I
must repeat: there are methods numberless, and some problems will
require methods all their own.
But what is important is that every problem should be dealt with by
as many methods as possible. Doubtless you have used, at some time or
other in the course of your thinking, nearly every one of the methods
I have so far suggested. But the point is not that you have never used
these methods at all, but that you have not used them often enough. You
were unaware what method you were using. Consequently you used it only
occasionally. You used it only when you stumbled on it accidentally. To
formulate methods is to bring them to your attention, so that you may
use them always, thoroughly, correctly, consistently.
We have treated political science from most angles. We have applied
more than one method to several other problems. To still further
clarify, exemplify and impress this point, I shall show the application
of method to one more subject.
Suppose you wanted to invent a system of shorthand, and wanted to make
it as perfect as possible. How would you go about it?
Your first step should be to restate your question most advantageously.
You want to create certain characters or symbols, which will (1) take
the shortest time to write, (2) be easily recognized by yourself
or others, even if written carelessly, and (3) not be so
numerous or so complex as to be difficult to learn. You may decide that
such symbols would have even further requirements. Next you should
decide on the methods to use in attacking your problem—this in order
not to forget any. Now assume you have decided on these methods and
that the first is the a priori. Your conclusion might be that it would
be impossible to have a different symbol for every word, and that it
is necessary to have some sort of alphabet. Should this alphabet be
based on that used in longhand? That is, should merely a simpler symbol
stand in place of each letter? Or should a different symbol represent
each sound? Or would it be possible to have a different elementary
symbol for each syllable? Having decided the basis for your symbols or
characters, you will know at least approximately the number required.
Your problem will then become that of making the characters as simple
as possible, so that they may be written most quickly; and yet as
different from each other as possible so that if written carelessly
(as they will be when written swiftly), they may be easily recognized.
You might try writing down all the simplest symbols you can think of.
Or you might ask yourself whether there is any fundamental geometrical
figure from which you can derive your symbols. Or you might study the
simplest and easiest movements of the hand, and base your characters on
these.
This a priori method is most apt of all to provoke real thinking. It
should therefore be taken up before any of the others. Not only is it
best for making you think deeply, but it will be more likely than any
of the others to make you think originally. However, whether attended
by great or little success, this method should be followed by others.
Not the least fruitful of these would be the evolutionary. This, of
course, would consist in studying the history of shorthand, finding out
the direction in which it has been tending, and thus anticipating in
some degree its future development. As this method is comparative we
would naturally be led from it to comparing the shorthand systems of
to-day, and assaying the good and bad qualities of each. These could
only be assayed if we knew something of shorthand theory, and thus our
experience with the deductive or a priori method would be of service.
Implied here is a method different in nature from any we have yet
discussed, but one of immense help. In turning from the deductive
method to a study of shorthand systems which others have developed,
you have an opportunity to compare the results of your own thinking
with those obtained by others. If you have failed to solve the question
in as good a manner as these others, you can ask yourself wherein and
why your own reflections and ingenuity fell short. If you follow this
method with all problems—i.e., thinking a thing out for yourself before
looking up what others have thought—you will soon improve your thinking
surprisingly. The method is capable of application in every problem,
from inventing an adding machine to trying to find how the plumber got
that $3.46 on the bill.
But to return to shorthand. We still have the empirical and
experimental methods. In this particular case the difference between
them would be simply one of degree. We could find, for instance, what
systems were used by the fastest shorthand writers; but we could get
nothing conclusive from this, for we would have to make allowance for
the natural ability and length of training of these writers. From
merely looking at two outlines or characters, it is often difficult to
tell which can be written faster. This could only be tested by writing
hundreds in a row and finding the time it took to write the same number
of each. Of course such experiment is capable of indefinite expansion.
In dealing with method heretofore, I have at times come dangerously
near to making a false assumption. I have been talking as if a man
who took up political science, shorthand, or any other subject, were
dealing with only one problem. As a matter of fact he is dealing with
a whole series of problems. Just how many it is difficult to say,
because no problem worthy of the name is an indivisible unit, and
may always be broken into smaller problems. The whole science of
æsthetics is included in the simple question "What is beauty?", the
science of ethics is merely the answer to "What is right conduct?",
and metaphysics may be reduced to the problem "What is reality?" But
when we come to deal with any of these we instinctively break them up
into smaller and more concrete problems, making the treatment easier,
just as a general attempts to split his enemy's forces, so that he can
annihilate one section at a time. Often, indeed, the very division of
the larger problem into smaller problems constitutes its solution, for
we finally come to a problem which practically answers itself, and
which we recognize as being included in, or a particular form of, some
more general problem to which we already know the answer.
A man sets before himself the question, "What is the proper sphere of
Government?" Perhaps he will first of all consider certain different
specific activities which might possibly be supposed to come within
the sphere of governmental interference. He might ask himself, for
instance, "Should the Government interfere with freedom of contract?"
Notice that he has here temporarily made his problem narrower, he has
chosen to break it up in order to deal with it part by part. But even
when he came to cope with this smaller problem he would probably find
it necessary to break this up, and he would therefore take a specific
example. Suppose a man works for so much an hour, and that nine hours'
work a day gives him the minimum amount on which he can live and
support his family. Would it be wise to limit the legal working day of
such a man to eight hours? This problem practically answers itself,
and so further division is unnecessary. Of course the answer to this
does not determine the answer to the original question, for other parts
still remain to be considered.
In fact, much of the success of our thinking will depend upon just how
we divide our big problems into subsidiary problems, and just what our
subsidiary or subordinate problems are. This will depend to some extent
on our own natural sagacity, and to some extent on mere chance. No
rigid rules can be laid down. The only advice which can be offered is
that when a thinker breaks up a problem he should do so with an eye to
utility and definiteness.
John Stuart Mill, in an essay on Jeremy Bentham, pointed out that the
secret of the latter's strength and originality of thought lay in his
method, which "may be shortly described as the method of detail; of
treating wholes by separating them into their parts, abstractions by
resolving them into things,—classes and generalities by distinguishing
them into the individuals of which they are made up; and breaking every
question into pieces before attempting to solve it." The method was not
absolutely original with Bentham, but "whatever originality there was
in the method, in the subjects he applied it to, and in the rigidity
with which he adhered to it, there was the greatest."
The systematic thinker is careful of the manner in which he marshals
his difficulties. He knows that certain problems should properly be
considered before certain others, and he saves himself labor and
sometimes error by considering them in that order. Before asking
himself how Government should cure a given social evil, he first asks
whether it is the duty or even the right of the State to attend to
that particular evil at all. In other words, before asking what the
State should do in any particular case, he considers first what the
proper sphere of government is. It must be admitted that a previous
question often cannot be discovered until one has actually attempted
the solution of a problem. In the foregoing instance, it would be
difficult to determine the proper sphere of government by any other
method than a consideration of particular cases where government
interference suggests itself.
In fact, it is only by deep reflection on a subject that we come to
realize most of the problems involved. You walk along the road with
your friend the botanist and he stops to pick what looks to you to be
a common wild flower. "Hm," he muses, "I wonder how that got in this
part of the country?" Now that is no problem to you, simply because
you do not happen to know why that particular flower should _not_ be
there—and what men do not know about they take for granted. Knowledge
furnishes problems, and the discovery of problems itself constitutes an
intellectual advance.
Whenever you are thrashing out a subject, write down every problem,
difficulty and objection that occurs to you. When you get what you
consider a satisfactory solution, see whether or not it answers all of
them.
I have stated that method is essential to good thinking. I have given
rules and examples of methodic thinking. But I do not want to create
a false impression. If a man has not within him the materials of a
thinker, no amount of method can make him one. Half the thinking
process, as pointed out, depends on the occurrence of suggestions. The
occurrence of suggestions depends on how ideas are associated in a
man's mind. While this depends to some extent on the education and the
whole past life and environment of the individual, it depends far more
on inborn mental qualities. All method can do is to awaken the most
fruitful associations of ideas already in mind. Hence the more methods
we adopt—the greater the number of views we take of any problem—the
more solutions will suggest themselves.
There is one further reason why we should take as many different
viewpoints as possible. In our example of the inheritance of acquired
characteristics in animals, if we had been sure that the results of our
deductive reasoning were correct, it would have been a sinful waste of
time to experiment. But when we attack a problem by several methods we
can compare the results from each. If these results agree we have good
evidence that our solution is correct. But if we have adopted quite a
number of viewpoints, and have not let the results of one influence
those of the next, they are almost certain to be at variance. This
means that we have erred in applying one or several methods. How are
we to find which of the methods it was, and how are we to prevent such
errors?
This is the subject of our next chapter.
III
A FEW CAUTIONS
Thus far we have considered only positive and constructive thinking,
and means for obtaining relevant suggestions. We have had almost
nothing to do with cautions, means for avoiding fallacy and error, and
means for testing the truth and value of suggestions. Most writers who
have discussed thinking have dwelt so much on the negative aspect—so
much on what we should not do—and have so slighted the question of
what we should do, that I have perhaps been led to adopt this order,
more from a feeling of revolt than because it is logically better. But
I believe I have logic on my side. Constructive methods make thinking
"go"; cautions steer it in the right path. An automobile without a
steering gear is almost as useless as one without a motor. But an
automobile can go without being steered, whereas it cannot be steered
unless it is going.
But while with automobiles we can clearly divide moving from steering,
we cannot do this with thinking. The two processes are so inextricably
bound up, that we cannot engage in one without engaging in the other;
we cannot even speak of one without implying the other. I have divided
them for convenience of exposition. But in the last chapter we were
forced to deal slightly with cautions, and here we shall have to
consider constructive methods to some extent.
A case in point is classification. In taking this up from a
constructive standpoint, I remarked that all classifications ought to
be logical. But I did not say what I meant by logical, nor did I tell
how a logical classification could be secured. The two most prominent
errors made in classifying are (1) not making classifications mutually
exclusive, (2) not making them cover all the objects or phenomena
supposed to be classified.
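The two requirements named here, that classes be mutually exclusive and that they cover everything classified, can be restated as a mechanical check. The following sketch is a modern illustration only; the class names and the helper function are invented for the example, not taken from the text:

```python
def check_classification(universe, classes):
    """Test a classification against the two errors named above.

    Returns (mutually_exclusive, exhaustive):
    - mutually_exclusive: no item falls into two classes at once
    - exhaustive: every item of the universe falls into some class
    """
    covered = set()
    mutually_exclusive = True
    for members in classes.values():
        if covered & set(members):      # overlap with an earlier class
            mutually_exclusive = False
        covered |= set(members)
    exhaustive = covered == set(universe)
    return mutually_exclusive, exhaustive

# The Socialist example: "capitalists" and "laborers" overlook
# the small farmer who owns and tills his own land.
people = {"capitalist", "laborer", "small farmer"}
classes = {"capitalists": {"capitalist"}, "laborers": {"laborer"}}
print(check_classification(people, classes))  # (True, False): not exhaustive
```

The second error, the one the text calls harder to detect, is exactly the `False` in the result: the classification is consistent, but it does not cover all the facts.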
The first error is the less common, for though occurring among all
thinkers, it is comparatively infrequent among those who proceed with
caution. It is, moreover, more easily discovered than the second.
Consider the classification of constructive methods into comparison,
observation, and experiment. It is apparent that these methods overlap.
We cannot compare without observing, much of our observation involves
comparison, when we experiment we must of course observe the results
obtained, and the results are almost always compared. All three
methods could be classed under observation. It is well to remember,
however, that the first classification may be useful—even more so than
one strictly logical, and that the nature of a subject will often make
divisions which do not overlap in some degree impracticable.
The second error—that of not making a classification cover all the
objects or phenomena it is supposed to cover—is not so easy to
detect. It is one to which the greatest philosophers have been heir.
Some of our Socialist friends say there are but two kinds of people:
capitalists and laborers, "the people who live on others and the people
who are lived on." They overlook that class of farmers who own a little
piece of land and do their own tilling. Even if they insist that such a
class "is rapidly becoming extinct," the fact remains that it is still
with us and must be taken into account.
All classifications are made with a certain number of facts in mind,
and fortunate is he who happens to have just the right facts. We cannot
hold many facts in mind at once, and we often generalize upon thousands
of things by taking a supposedly representative dozen. To avoid error
all we can do is to keep constantly on the lookout for examples,
especially those which apparently will not fit into our generalization.
If they go in without straining anything, our classification receives
added warrant. But sometimes you will find that where you have three
classes a new fact will necessitate a fourth, and that often it will
overturn your whole beautiful structure.
There is another phase of thinking, which while chiefly cautionary, is
also in part constructive. We have so often been warned to "avoid the
treachery of words" and to "define all our terms" that a repetition
of the advice seems unnecessary. But we cannot overlook the excellent
counsel of Blaise Pascal. He urges that we not only define our terms,
but that whenever we use them we mentally substitute the definition.
However, this needs to be qualified. If every time we used a term we
stopped to substitute its definition, our thought might be exact but
would hardly move forward very rapidly. It will usually be sufficient
simply to substitute the definition a few times, for after doing this
we shall gradually come to know exactly what we mean by a term, and
further substitution would merely waste time. Of course, all this need
be applied only to terms new, technical or equivocal; or those used in
a mooted proposition.
I have spoken of analogy as a constructive method. This, however,
should be used only for suggestion, for it is most dangerous. Often
we use an analogy and are quite unaware of it. Thus many social
and political thinkers have called society an "organism," and have
proceeded to deal with it as if it were a large animal. They have
thought not in terms of the actual phenomena under consideration,
but in terms of the analogy. In so far as the terms of the analogy
were more concrete than those of the phenomena, their thinking has
been made easier. But no analogy will ever hold good throughout, and
consequently these thinkers have often fallen into error.
The quickest way to detect error in analogy is to carry it out as far
as it will go—and further. Every analogy will break down somewhere. Any
analogy if carried out far enough becomes absurd. We are most likely to
err when we carry an analogy too far, but not to the point where the
absurdity is apparent. Take the analogy employed in our first chapter,
comparing thinking and a ship. For the sake of the image I shall make
this a motor-boat. We might carry this out further. We might compare
the effect on the mind of books and experience to the fuel used for the
engine. The brain, transforming outward experience into thought, might
be paralleled with a carburetor transforming fuel into usable form.
An idea may be compared to a spark. All this is very fascinating. It
may even lead to suggestions of real value. But it is bound soon or
late to develop into the ludicrous. The analogy in question, however,
does not need to be developed to be confuted. For unless a boat has a
propeller and a rudder, its engine is useless. A mind is capable of
attaining truth without even being aware of the existence of a science
of thinking or of logic.
Another way to find whether an analogy is fallacious is to see whether
you can discover a counter analogy. Surely this is the most effective
practice in refuting analogy in argument. This suggests the case of
the man who had a ticket from New York to Chicago, and tried to use it
from Chicago to New York. The railroad refused to accept it, whereupon
the man brought suit. The lawyer for the defendant, in the heat of
the debate, said, "Why, a man might just as well pay for a barrel of
potatoes and then demand a barrel of apples!" Whereupon the attorney
for the plaintiff replied, "It would be rather like a grocer selling
a man a barrel of potatoes and then trying to compel him to eat them
from the top down, refusing to allow him to turn the barrel upside
down and begin eating them from the bottom up." It is best to avoid
analogy except for purposes of suggestion, or as a rhetorical device
for explaining an idea already arrived at by other means.
I have been forced to defend my advice to take as many viewpoints
as possible, by pointing out that the conclusions obtained from
these viewpoints might disagree; in fact would be almost sure to
disagree. Of course, this disagreement might be avoided if we allowed
the conclusions reached by one method or viewpoint to influence our
conclusions in another. But if we do this we give our problem more
shallow treatment, and we are not so sure of a result when we get it.
When a mathematician adds a column of figures from the top down, he
confirms by re-adding from the bottom up. He knows that if he added
in the same manner the second time he would be liable to fall into
the same errors. And in thinking, when we leave one method and take
up another, we should try to forget entirely the first conclusion and
begin on the problem as if we had never taken it up before. After we
have taken up all the applicable methods, then, and then only, should
we begin to compare conclusions.
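The mathematician's check described above, re-adding the column in the opposite order rather than repeating the same procedure, is an instance of verifying a result by an independent route. A minimal sketch, with figures invented for illustration:

```python
figures = [317, 48, 1296, 75, 804]

# First method: add from the top down.
total_down = 0
for n in figures:
    total_down += n

# Second method: add from the bottom up. Repeating the first
# procedure might repeat its errors; reversing the order gives
# an independent path to the same result.
total_up = 0
for n in reversed(figures):
    total_up += n

# Only when the two independent methods agree do we trust the sum.
assert total_down == total_up
print(total_down)  # 2540
```

The point carries over to thinking generally: two methods kept independent of each other, agreeing in their conclusion, are worth more than one method repeated twice.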
Time forbids doing this with all problems. Time forbids even attacking
all problems from different points of view. But there are some
problems where this unquestionably ought to be done. The problem
of whether or not characteristics acquired during the life time of
one individual may be inherited by his offspring, if dealt with at
all, is too important to be left to the a priori method alone. This
problem asks whether the children of educated parents will necessarily
be innately superior to the children of uneducated parents; it asks
whether the man of today is superior to the ancient Greek, or even the
present day savage; or, assuming that one race is inferior to
another, it asks whether generations of education will bring it to
the other's level or leave it unchanged; it asks whether the hope
of improving the human race lies in education or eugenics. No question
can be more important than this in its practical bearings. The answer
to it will profoundly influence our opinions in education, psychology,
ethics, economics, political science—even philosophy and metaphysics.
The answer we obtain to this question from deductive reasoning, no
matter how unanswerable or conclusive it may seem, should be checked
up by nothing short of the most thoroughgoing experiment.
Unfortunately the experiments needed for this particular question
cannot be carried on by the layman. It is equally to be regretted that
scientists have been none too thorough in carrying them out themselves.
But we should remember that any result we arrive at should be subject
to revision, and that if we take up this problem at all, we should at
least make it our duty to read about and criticise all the experiments
that come to our notice.
A question has perhaps just occurred to the reader. If the deductive
method is to be checked up by experiment, and the results of the
experiment are always to be taken, why not experiment first, and omit
theory altogether?
Leaving aside the fact that theory is the best guide for
experiment—that were it not for theory and the problems and hypotheses
that come out of it, we would not know the points we wanted to verify,
and hence would experiment aimlessly—a more serious objection is that
experiment is seldom if ever perfect, for it nearly always involves
some unverified assumption. I have referred to empirical observation
and experiment as two different methods. But the difference is
mainly, if not solely, one of degree. If we experimented to find out
whether acquired characteristics were inherited, it is obvious that
our experiments would have to be confined to animals. If we found,
let us say, that no acquired characteristic was ever transmitted
to offspring, we could not say that this would be equally true of
man, but would be justified in concluding only that the acquired
characteristics of _animals_ are not transmitted to descendants. Nay,
we could not go even this far. We would have to confine ourselves
to the statement that certain acquired characteristics of the few
score animals we had experimented upon were not transmissible. But
even this statement would involve assumption. We could say only
that certain acquired characteristics of the few score animals we
had experimented upon had not been transmitted in these particular
instances. We would have to limit ourselves to a bare statement of
fact; we could draw no conclusion whatever. But if we had attacked
this problem from the deductive standpoint, and had concluded that
owing to certain conditions holding alike in all animals and in man,
acquired characteristics _could not possibly_ be transmitted, we would
have sufficient ground for deriving from our experiments a broad
generalization.
Experiment and deduction are not the only methods which can be checked
up against each other. We can do likewise with the comparative and
the experimental, the historical and the theoretical—in fact, all
viewpoints applicable to any one problem.
* * * * *
When you encounter a question about which there is a controversy, and
where the adherents of both sides nearly equal each other in number
and intellectual status, you may be almost certain that each side has
caught sight of some truth, but that neither has seen the whole truth;
and you should endeavor to unite both sides by a broader and deeper
solution. A classic philosophical example of this method is Herbert
Spencer's attempt to reconcile science and religion, and his effort
to unite the "intuitional" and "experiential" schools of thought. The
intuitionists maintained that the mind had from birth intuitions by
which it knew certain truths independently of experience. Such truths
as the axiom that a straight line is the shortest distance between two
points, or that it is morally wrong to do certain acts, were regarded
as among these intuitions. The "empiricists" or "sensationalists," on
the other hand, maintained that all our knowledge—even of such a fact,
for instance, as that two and two are four, where we cannot conceive
otherwise—is learned solely from the individual's experience, taken in
its broadest sense. Herbert Spencer thought he recognized some truth
in both these doctrines, and came forward with the theory that there
are certain truths which are intuitions so far as the individual is
concerned, but that these intuitions have been inherited from our
ancestors, were originally built up through the ages, and represent
the accumulated experience of the race. Whatever may be thought of
Spencer's success in this case, the value of the method itself is
undoubted. It was frequently used by Kant, Hegel, Fichte and other
German philosophers.
I have remarked that it is almost possible to sum up the entire process
of thinking as the occurrence of suggestions for the solution of
difficulties and the testing out of those suggestions. The constructive
methods discussed were called means for making good suggestions occur
to us. From this standpoint the cautions with which we have just been
dealing may be considered as tests of suggestions.
Let us refer back to the analysis of thinking given in the case of the
man who discovered footprints on the beach. Even there, in order to
give any adequate idea of his thought process, I was obliged to show
that for various reasons he rejected certain suggested solutions. But
this negative method could be more fully developed. Because the man
rejected a certain solution, it does not follow that it was necessarily
wrong. Suppose the final suggestion—that the unknown had been on the
island all the time—were to have been tested out, and that certain
further facts were discovered which tended to disprove it; the man
might find it necessary to look for still another solution. But suppose
this were not forthcoming, suppose that all the possibilities had been
exhausted. It would be necessary to return to some of the original
suggestions. He would have to see whether an error had been made in
testing them. In rejecting the suggestion of a small boat he may have
overestimated the distance of this island from other land. He may have
underestimated the difficulties that a man in a small boat is capable
of surmounting. In rejecting the supposition of a ship, he may have
erred in his judgment of the time the footprints had been on the beach,
or of the time it would take a large vessel to get out of sight.
What is essential is that all suggestions be tested out, either by
memory, observation or experiment, in all their implications, and that
the tendency be resisted to accept the first solution that suggests
itself. For the uncritical thinker will always jump at the first
suggestion, unless an objection actually forces itself into view.
Remaining in a state of doubt is unpleasant. The longer the doubt
remains the more unpleasant it becomes. But the man who is willing
to accept this unpleasantness, the man who is willing carefully
to observe, or experiment if need be, to test the validity of his
suggestions, will finally arrive at a solution much deeper, and one
which will give him far more satisfaction, than the superficial answer
obtained by the man of careless habits of thought.
Thomas A. Edison says he always rejects an easy solution of any problem
and looks for something difficult. But the inventor has one great
advantage over any other kind of thinker. He can test his conclusion
in a tangible way. If his device works, his thinking was right; if his
device doesn't work, his thinking was wrong. But the philosopher, the
scientist, the social reformer, has no such satisfactory test. His only
satisfaction is the feeling that his results harmonize with all his
experience. The more critical he has been in arriving at those results,
the more deep and permanent will be that feeling, the more valuable
will be his thoughts to himself and to the world. . . .
Even in the first chapter I intimated that logic would constitute a
part of the science of thinking. I intimated, moreover, that it would
constitute almost the whole of what may be called the negative side
of thinking—those rules which serve to steer thought aright. Though
cautionary, the advice given in this chapter is not usually given in
books on logic. But though I cannot overemphasize the importance of a
knowledge of logic, I cannot deal with it here. The science can receive
justice only in a book devoted entirely to it.
If he has not already done so the would-be thinker should study a work
on logic, for unless the present book is supplemented by some treatise
on that science it cannot be regarded as complete.
In order not to confuse the reader I shall recommend only one book.
In order to encourage him I shall recommend a small book, one not so
deep as to be incomprehensible or repulsive to the beginner, but at the
same time one which is recognized as a standard treatise:—_Elementary
Lessons in Logic_, by Stanley Jevons.
IV
CONCENTRATION
What is the hardest task in the world? To think.—EMERSON.
We have been dealing with the subject of thinking. We have considered
it from both a positive and negative side. But while we have devoted
our attention to thinking, we have neglected the thinker. In more
scientific terms, we have treated thought from the logical side; we are
now to treat it from the psychological.
Few people will admit specific faults in themselves of any kind,
especially if these happen to be intellectual. But almost any man
is willing to confess that he cannot always "concentrate" when he
wants to, in fact, that he is one of the countless victims of "mind
wandering."
Most of us imagine we know just what we mean by both these terms.
But if we are to judge by most of what has been written, no two
terms are more misconceived. Before trying to find the best means of
concentrating, we must first find just what we mean by concentration.
In a previous chapter I said that suggestions for solutions "occurred."
I did not say how or why. To discover this we must refer to the famous
psychological principle of association.
Any train of thought is made possible by previous connections of ideas
in our minds. While a girl sits at her window a parade passes along a
nearby street. The band is playing, and ere the tune is completed the
band has gone so far that the music is no longer audible. But the tune
still goes along in her mind, and she completes it herself. It suggests
a dance she had been to where it was played, and this suggests that
she danced the two-step to it. The two-step suggests the more modern
one-step, and this leads her to compare the familiar dancing of to-day
with the distant and respectful minuet.
This is an example of a random train of ideas. It is that loose
"thinking" referred to in our first chapter. But even this is made
possible only by the connection of ideas in our mind at some previous
period. No thought can enter our minds unless it is associated in
some way with the previous thought. Psychologists have traditionally
classified associations into four kinds: association by succession,
by contiguity, by similarity and by contrast. The example just given
involves all four. Association by succession means that when two ideas
or impressions of objects have entered the mind in succession, the
second is likely to be suggested whenever the first is thought of. A
tune consists in a succession of notes, and when the first notes are
brought to mind, as by a passing band, the rest will follow—sometimes
in spite of ourselves. Association by contiguity means that when two
objects or ideas have been in consciousness together, one is always
likely to suggest the other thereafter. This was the case with the
music and the dance, or the music and the two-step. Association
by similarity occurs when two ideas resemble each other in some
particular. They need not have occurred together at any past time, nor
after each other. The fact that they have a common element suffices
to bring up one idea when the other is in mind: thus the two-step
suggested the one-step. Association by contrast needs no explanation.
It is exemplified when the idea of present-day dancing brings up the
idea of distant dancing.
Any attempt to show _why_ the mind acts in this way, any explanation of
the way in which the different kinds of association are made possible,
would bring us into physiological psychology, would involve a study of
the brain and the nervous system. For our purposes it is sufficient to
keep in mind that such associations do take place. Without them no idea
can occur. Without them thought is impossible.
The bearing of all this on concentration has yet to be made plain.
We must remember that every idea has more than one associate; in
fact that each idea generally has a cluster of possible associates.
Instead of suggesting the minuet, the one-step may have made the fox
trot or the three-step occur to the young lady. It may have made her
think of a young man with whom she danced it, or the trouble she had
in learning it. Each of these suggestions, in turn, would also have
potential connections with a cluster of ideas. When we are thinking at
random—when we are day dreaming, as in the example given—the strongest
association, or the first to be aroused, is the one we dwell upon. But
when we are thinking with a purpose, in a word, when we are reasoning,
we reject all associations which have no bearing on our purpose, and
select only those which serve it.
Concentration does not, as popularly supposed, mean keeping the mind
fastened on one object or idea or in one place. It consists in having a
problem or purpose constantly before one. It means keeping our thought
moving toward one desired end.
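The account just given can be restated as a selection rule: each idea suggests a cluster of associates; the random train of thought follows the strongest suggestion, while concentration admits only associates bearing on a fixed purpose. The sketch below is a toy illustration of that rule; the association strengths and the relevance set are invented, not drawn from psychology:

```python
# A toy associative memory: each idea maps to (associate, strength) pairs.
associations = {
    "tune":     [("dance", 0.9), ("band", 0.6)],
    "dance":    [("two-step", 0.8), ("partner", 0.7)],
    "two-step": [("one-step", 0.9), ("lesson", 0.4)],
    "one-step": [("minuet", 0.5), ("fox trot", 0.8)],
}

def day_dream(start, steps):
    """Random train of thought: always follow the strongest associate."""
    train = [start]
    for _ in range(steps):
        options = associations.get(train[-1], [])
        if not options:
            break
        train.append(max(options, key=lambda a: a[1])[0])
    return train

def concentrate(start, steps, relevant):
    """Purposeful thought: reject associates outside the purpose."""
    train = [start]
    for _ in range(steps):
        options = [a for a in associations.get(train[-1], [])
                   if a[0] in relevant]
        if not options:
            break
        train.append(max(options, key=lambda a: a[1])[0])
    return train

print(day_dream("tune", 4))
print(concentrate("tune", 4, {"dance", "two-step", "one-step", "minuet"}))
```

Both trains begin alike, but the day-dreamer is carried off by the strongest link ("fox trot"), while the concentrator, filtering by purpose, arrives at the minuet: same associations, different selection.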
Concentration is often regarded as intense or focused attention. But
the fact is that all attention is focused attention. Psychologists are
fairly well agreed that we can attend to only one thing at a time. Mind
wandering, and so-called distributed attention, is really attention
directed first to one thing, then to another, then to another; or first
to one thing, then to another, and then back again to the original
object, resting but a few moments on each idea.
Concentration may best be defined as prolonged or sustained attention.
It means keeping the mind on one subject or problem for a relatively
long period, or at least continually reverting to some problem whenever
one's thoughts momentarily leave it.
Having decided just what we mean by concentration, our next step is
to inquire whether concentration is worth while. The reader may smile
at this question or he may be shocked, according to his temperament.
But if most men were really convinced that concentration is an
unquestionable virtue, they would practice it a little more. At least
they would make greater efforts to practice it than they do at present.
The truth is that concentration, _per se_, is of little value.
The value of concentration depends almost entirely on the subject
concentrated on. Almost any one will agree that even were a man to
allow his mind to dwell now on one important problem and now on
another, without stopping a very appreciable time at any, he might
nevertheless be improving his time far more than a man who concentrated
continually on some insignificant and inconsequential question.
But of course this is not really an argument against concentration. It
has no application when you concentrate on the proper subject. For if
you start to concentrate on some question which you have decided is
really important, you should keep at it, allowing no deviation. It may
be that during the course of your thought associations will be aroused
which will suggest or bear upon important problems, problems more
important perhaps than the one you originally started to concentrate
on. But if you immediately abandoned every problem you started to
think of, whenever you came across one which you imagined was just as
important, you would probably never really solve any big question.
Our attention is guided by interest. If a man merely allows his
thoughts to flow at random, thinking only of those things which
spontaneously arouse his interest, he may or may not attend to things
worth thinking about. All will depend upon the path in which his
natural interests run. But the point is that if the subject he thinks
about is valuable, it will be so only by accident; whether or not
his thinking is useful will depend upon mere chance. If however he
consciously chooses a subject—chooses it because he believes it to be
important—then his thinking will be worth while.
But there is another reason why concentration is necessary. Suppose a
man started to put up a barbed wire fence, got as far as driving in
all the posts, then lost interest in the fences and decided to grow
potatoes in his field, plowed up the ground, lost interest in the
field and neglected to plant the seeds; decided to paint his house,
got the porch done, lost interest . . . That man might work as hard as
any other man, but he would never get anything done. So with the mind
wanderer and the concentrator. The mind wanderer thinks of a problem,
loses interest, and abandons it. The concentrator sticks to it until it
is solved.
Much of our mind wandering is due to the fact that we are not fully
convinced of the importance of the problem being attacked, or that
we regard other problems or ideas as more important. Concentration
consists in devoting one's mind to the solution of one problem.
During our train of thought associations bring up new ideas or suggest
problems which do not bear on the question at hand. Now when we wander,
when we follow up these irrelevant ideas or suggested problems, or
when we happen to glance at something or hear something and begin to
think of that, we do so because of a half-conscious belief that the new
idea, problem or fact needs attending to, is important. I have already
pointed out that if this new idea is important it will be so only by
accident. If we were consciously to ask ourselves whether any of these
irrelevant problems were as important as the one we were concentrating
on, or even important at all, we would find, nine times out of ten,
that they were not.
Therefore before beginning to concentrate you should assure yourself
that the problem you are about to attack is one worth solving, or at
least devoting a certain time to. And during that time you should think
only of that problem, and unhesitatingly throw out all irrelevant
suggestions coming either from your course of thought or from external
sights and sounds.
One qualification is necessary. Sometimes an irrelevant suggestion
occurs which is nevertheless really important and worth developing. As
this might be forgotten, and as it might never occur again, it would be
poor counsel indeed to ask that it be thrown aside forever. The best
move in such a case would be to make written note of the suggestion or
problem, so that it could be referred to at some future time. Having
written the idea, you will have it off your mind, and will be able to
continue your line of thought without perturbation.
It has been suggested that a great aid to concentration is writing
one's thoughts. It must be admitted that this certainly helps one to
keep much closer to a subject. Ordinarily we wander without being
aware of it, and bring our minds back to a subject only after sudden
intermittent realizations that we have gone astray. When we write our
thoughts, however, we doubly secure ourselves against mind wandering.
All writing requires a certain effort, and this alone is sufficient
to keep most of us from writing irrelevant thoughts, or anything not
directly bearing upon the subject in hand. When we write, too, we
capture our thoughts in tangible symbols; we make them less elusive
than in their original form. Finally, we keep our entire past train of
thought in view. Like an oarsman, who cannot look ahead, but guides
himself by the objects he is constantly leaving further behind, we keep
to our original course of thought by a survey of the ideas already
written.
In spite of these great advantages, writing has certain serious
handicaps as a practical method for concentrating. First among these
is its slowness. Thoughts flash through our minds much faster than we
can write them. We either lose many ideas by the wayside, or fail to go
as far in our subject as we otherwise would. Another disadvantage is
that we are forced to give part of our attention to the physical act of
writing, and thus cannot concentrate entirely on our subject.
There are two methods of writing comparatively free of at least one of
these handicaps. Both shorthand and typewriting, if mastered to any
degree, are much faster than ordinary writing. This is especially true,
of course, of shorthand. But even with a good stenographer shorthand
has serious defects. Unless one is quite expert it requires even more
attention than longhand, and at that is often unable to keep pace
with thought. Typewriting requires almost no attention from a touch
operator, but it too is open to the charge of slowness, coming in this
respect about midway between shorthand and longhand.
But to those so unfortunate as not to know either shorthand or
typewriting the necessity for still another method is evident. Indeed,
even those acquainted with these two arts cannot always use them.
If every time we were to think we had to have with us a typewriter,
or even a pencil and note-book, we would not engage in any too much
reflection.
Fortunately there is one method superior to any yet named, which
requires no study before its application, and no paraphernalia during
it. It consists in simply talking your thoughts as you think them. One
who has not tried this can have no idea of its effect. It possesses
almost all the advantages of writing. You cannot wander without
realizing the fact immediately. It makes your thinking much less vague
than if you thought silently, increases your vocabulary, always keeps
pace with your ideas, and requires practically no attention.
It may be objected that silent thinking itself is put in unspoken
words. But this is not true. Part of silent thinking consists of
unspoken words, but part of it consists of images, concepts and
attitudes which pass through our minds and which we do not take the
trouble to name. In silent thinking, too, there are also what appear
to be occasional dead stops. All these processes drift into each other
indefinably and are unrecognizable. When we talk we realize whether our
images or concepts are vague or definite by our ability to name them,
and we realize when our thought comes to a "dead stop" by the fact that
we miss the sound of our own voice.
Another practice can be used with talking. The degree of concentration
we give to any subject depends upon the degree of natural interest we
take in it. Mind wandering comes because we are also interested in
other subjects. No matter how slight our interest in a thing, we would
always concentrate on it if we were interested in nothing else. To
secure sustained attention, then, we should (1) stimulate or increase
interest in problems we want to concentrate on, (2) decrease or remove
temporarily any interest in the things we do not want to think about.
Men often complain that noises distract their attention. While not
impossible, it is inconvenient and unpleasant to shut off our ears. But
men are far more distracted by sights than they are by sounds. And they
never think of merely shutting their eyes. The next time you attempt
to concentrate—silently or by talking—try shutting your eyes and see
whether or not you are helped.
Talking has one disadvantage—it cannot always be used. To practice it,
you must either lock yourself up in your room, or sit alone in a forest
or field, or walk along unfrequented streets and by-ways. You can by no
means allow any one to hear or see you talking to yourself. If you are
caught doing this some asinine idiot is sure to mistake you for one.
We are brought back again, then, to the necessity of occasionally
thinking in silence. There is one other reason why we shall sometimes
need to do this. Thoughts of certain kinds are so elusive that to
attempt to articulate them is to scare them away, as a fish is scared
by the slightest ripple. When these thoughts are in embryo, even the
infinitesimal attention required for talking cannot be spared. But
later, as they take more definite and coherent form, they can and
should be put into words, for otherwise they will be incommunicable and
useless.
No definite rule can be laid down, however, as to what should be spoken
and what thought of silently. This depends to a large extent upon the
individual thinker. Some will probably find that talking helps them in
almost all their thinking, others that it is often an actual hindrance.
The same is true of closing one's eyes. If you do not know which is
better for you, find out by experiment.
At those times when you suddenly catch yourself wandering, it would
be a good plan to stop occasionally and trace back your train of
thought to the point where it left its original direction. In this way
you would get some valuable insight into the _how_ and _why_ of mind
wandering; you would be helped in recognizing its appearance sooner
the next time it occurred.
Whenever a person is left alone for a short time, with no one to talk
to and no "reading matter"; when for instance, he is standing at a
station waiting for his train, or sitting at a restaurant table waiting
for his order, or hanging on a subway strap when he has forgotten to
buy a newspaper, his "thoughts" tend to run along the tracks they have
habitually taken. If a young man usually allows a popular tune to float
through his head, that will be most likely to happen; if he usually
thinks of that young lady, he will most likely think of her then; if
he has often imagined himself as some great political orator making a
speech amid the plaudits of the multitude, he is likely to see a mental
picture of himself swinging his arms, waving flags and gulping water.
The only way a man can put a stop to such pleasant but uneducative
roamings, is to snap off his train of day dreaming the first moment he
becomes aware of it, and to address his mind to some useful serious
subject. His thoughts will be almost sure to leak away again. They may
do this as often as fifteen times in half an hour. But the second he
becomes aware of it he should dam up the stream and send his thoughts
along the channel he has laid out for them. If he has never done this
he will find the effort great. But if he merely resolves now that
the next time his mind wanders he will stop it in this manner, his
resolve will tend to make itself felt. If he succeeds in following this
practice once it will be much easier a second time. Every time he does
this it will become increasingly easy, until he will have arrived at
the point where his control over his thoughts will be almost absolute.
Not only will it be increasingly easy for him to turn his mind to
serious subjects. It will become constantly more pleasurable. Frivolous
and petty trains of thought will become more and more intolerable.
This whole idea of forcing our thought has been questioned by no less a
thinker than Herbert Spencer. Let us hear what he has to say regarding
his own practice:
"It has never been my way to set before myself a problem and puzzle out
an answer. The conclusions at which I have from time to time arrived,
have not been arrived at as solutions of questions raised; but have
been arrived at unawares—each as the ultimate outcome of a body of
thoughts which slowly grew from a germ. Some direct observation, or
some fact met with in reading, would dwell with me: apparently because
I had a sense of its significance. It was not that there arose a
distinct consciousness of its general meaning; but rather that there
was a kind of instinctive interest in those facts which have general
meanings. For example, the detailed structure of this or that species
of mammal, though I might willingly read about it, would leave little
impression; but when I met with the statement that, almost without
exception, mammals, even as unlike as the whale and the giraffe,
have seven cervical vertebræ, this would strike me and be remembered
as suggestive. Apt as I thus was to lay hold of cardinal truths, it
would happen occasionally that one, most likely brought to mind by an
illustration, and gaining from the illustration fresh distinctiveness,
would be contemplated by me for a while, and its bearings observed. A
week afterwards, possibly, the matter would be remembered; and with
further thought about it, might occur a recognition of some wider
application than I had before perceived: new instances being aggregated
with those already noted. Again after an interval, perhaps of a month,
perhaps of half a year, something would remind me of that which I
had before remarked; and mentally running over the facts might be
followed by some further extension of the idea. When accumulation of
instances had given body to a generalization, reflexion would reduce
the vague conception at first framed to a more definite conception;
and perhaps difficulties or anomalies passed over for a while, but
eventually forcing themselves on attention, might cause a needful
qualification and a truer shaping of the thought. Eventually the
growing generalization, thus far inductive, might take a deductive
form: being all at once recognized as a necessary consequence of some
physical principle—some established law. And thus, little by little, in
unobtrusive ways, without conscious intention or appreciable effort,
there would grow up a coherent and organized theory. Habitually
the process was one of slow unforced development, often extending
over years; and the thinking done went on in this gradual, almost
spontaneous way, without strain. . . ."[6]
But compare this method with that of John Stuart Mill; who speaks of
"the mental habit to which I attribute all that I have ever done, or
ever shall do, in speculation; that of never abandoning a puzzle, but
again and again returning to it until it was cleared up; never allowing
obscure corners of a subject to remain unexplored because they did not
appear important; never thinking that I perfectly understood any part
of a subject until I understood the whole."[7] Mill's method was, in
short, "that of conscious and vehement effort directed towards the end
he had in view. He solved his problems by laborious application and
study."[8]
William Minto writes of Adam Smith: "His intellectual proceedings
were calm, patient, and regular: he mastered a subject slowly and
circumspectly, and carried his principles with steady tenacity through
multitudes of details that would have checked many men of greater
mental vigor unendowed with the same invincible persistence."
With such thinkers differing so markedly in their methods, the ordinary
man is left bewildered. He may indeed decide that effort or no effort
makes little difference. Let us, however, look to the psychology of the
question, and see whether we can find any guiding principle.
Spencer, defending his method, says: "A solution reached in the way
described, is more likely to be true than one reached in pursuance of
a determined effort to find a solution. The determined effort causes
perversion of thought. When endeavoring to recollect some name or
thing which has been forgotten, it frequently happens that the name or
thing sought will not arise in consciousness; but when attention is
relaxed, the missing name or thing often suggests itself. While thought
continues to be forced down certain wrong turnings which had originally
been taken, the search is vain; but with the cessation of strain the
true association of ideas has an opportunity of asserting itself. And,
similarly, it may be that while an effort to arrive forthwith at some
answer to a problem, acts as a distorting factor in consciousness and
causes error, a quiet contemplation of the problem from time to time,
allows those proclivities of thought which have probably been caused
unawares by experiences, to make themselves felt, and to guide the mind
to the right conclusion."
Spencer's first argument, that an effort to recollect something is
often without results, while the thing is remembered later when we are
not trying to think of it, is true as to fact. But it does not show
that the effort was unfruitful. As pointed out in the discussion of
association, one idea is associated with not only one other idea but
with an entire group. This may give a possible explanation of why it
is so often difficult to recollect anything when we make a determined
effort. The attempt partly arouses a whole cluster of ideas, each
of which tends to return, but is prevented from doing so by all the
others. It is analogous to a crowd of people all struggling to get
through a narrow doorway. They cause such a jam that for a time no
one succeeds. When the pushing and jostling cease one person at a
time is able to pass through. When effort is abandoned, probably all
but one of the associates become dormant, and this one slides into
consciousness at the slightest provocation.
Whether or not this explanation is true, it is a fact that though an
effort may not produce results at the time, still if it had not been
made, the associate which finally comes to mind would probably never
have occurred at all. The reader has possibly found that when learning
some skilled movement, such as bicycle riding, skating or swimming, his
first attempts seemed without result, but after an interval of a week
or a month, when trying again, he suddenly discovered that he could do
what he wanted from the very start. Surely no one would contend that
this could happen without the previous effort!
I must also question Spencer's remark that "with the cessation of
strain the true association of ideas has an opportunity of asserting
itself." The brain has no hidden mechanism by which it can separate the
true from the false. To be sure, if we use no effort the most usual
and strongest associations will be more likely to assert themselves,
and it may be that often these will have more warrant than unusual and
weaker associations. Outside of this, there is no superiority.
But the main reason why we cannot follow the method of Herbert Spencer
is that we are not all Herbert Spencers. His thought naturally tended
to serious and useful channels. Consequently he did not have to force
it there. If the reader is one of those rare and fortunate beings whose
thoughts run only to useful subjects, and who always concentrate from
pure spontaneous interest, I sincerely advise him not to force himself.
And if such a being happens to be reading the present chapter I assure
him he is criminally wasting his time, and that he should drop the
book or turn to the next chapter with all possible haste. But if the
reader numbers himself with the miserable majority whose minds are ever
running away with them, he will find it necessary to use effort in
thinking—at least for a while.
One remark of Spencer is undoubtedly true. This is "that an effort
to arrive forthwith at some answer to a problem, acts as a distorting
factor in consciousness and causes error." And here, strange to say,
his practice is in substantial agreement with the apparently opposite
method of John Stuart Mill. For note that Mill speaks of "again and
again returning to it [a puzzle] until it was cleared up."
Both imply their agreement rather than state it outright; Spencer
by his use of the word "forthwith" and Mill by his words "again and
again." Here the practice of both differs from that of the vast
majority of men. Yet neither thinker seemed to be clearly conscious how
it differed. The average man (that mythical creature!) when he has just
been confronted with a problem, may wrestle with it with all the vigor
of a great thinker. But as he sees difficulties multiplying about him,
he gradually becomes more and more discouraged. Finally he throws up
the problem in disgust, contenting himself with the reflection that it
cannot be solved, or that it will take somebody who knows more than he
to solve it.
A real thinker, however, if confronted with the same problem, will look
for a solution from every possible viewpoint. But failing an answer
he will not give up. Instead he will let the subject drop for a while,
say a couple of weeks or perhaps longer, and then refer to it again.
This time he will find that certain obscurities have become a little
clearer; that certain questions have been answered. He will again
attack his puzzle with energy. And if he does not obtain a complete
solution he will once more put it aside, returning to it after another
interval, until finally a satisfactory solution presents itself.
You may fail to see any difference between thinking for two hours
separated by two weeks, and thinking for two consecutive hours. As
an experiment, then, the next time you come across a puzzle which
you fail to solve at first tilt, write down all the unsatisfactory
solutions suggested, and all the questions, difficulties and objections
met with. You may leave this for a few weeks. When you return to it
a few of the difficulties will look less formidable, and some of the
questions will have practically answered themselves. (Of course some
of the difficulties may look more formidable, and a few new questions
may have arisen.) If a solution is not found at the second attempt,
the problem may again be sent to your mental waiting room. But if it is
only of reasonable difficulty a solution is bound, soon or late, to be
discovered.
It is difficult to say just what effects this change in thought, when
apparently one has engaged in no reflection during the interval. The
attempted solution probably gives a certain "set" to our minds. Without
being aware of it we observe facts relating to our problem. Ideas
which occur to us in other connections are unconsciously seen in their
bearing on the unsolved question. In short, "those proclivities of
thought which have probably been caused unawares by experience" make
themselves felt.
It may be imagined that if we think too much we will be liable
permanently to injure our mighty intellects. This has sometimes
happened. But there is no serious danger of it. Thinking on one useful
subject for a long while will not hurt you any more than thinking on a
thousand different useless subjects for the same period. But of course
you should not try to concentrate when you are sleepy, when you have a
headache, when some other bodily pain distracts your attention, or when
your mind is in any way tired. If you attempt to concentrate at these
times you will endanger your mental and physical health. Not only this,
but the thinking done during such periods will be of such poor quality
that it will be practically useless if not harmful. This applies even
to cases where mental fatigue is almost inappreciable. Thinking done
in the evening seldom approaches in efficacy the thinking done in the
first hours of the morning. But you should always make sure your mind
is actually tired. It may merely be tired of a particular subject.
An objection of a different kind may be raised against concentrating
at every opportunity. It has often been noticed that names have been
recalled and problems solved when we were thinking of something
else. It may be urged that such solutions would not have occurred
when concentrating, because the exact associations which led up to
them would not have been present. This is occasionally true. But
there are still reasons why I must maintain my position. No matter
how well a man may have trained himself to concentrate, there will
always be short periods when his mind will wander, and these will
suffice for any accidental associations. Moreover, the fact that these
mind wandering periods _occasionally_ do good does not excuse their
existence. The most fallacious ideas, the most demoniacal practices,
the most despicable characters of history, have _occasionally_ done
good. The fact is that for every useful association which occurs during
mind wandering, ten associations just as useful will occur during
concentration. The only reason useful mind wandering associations
appear frequent is that they are unexpected, therefore more noticed
when they come.
It has been frequently said that many of the world's greatest
inventions were due to accident. In a sense this is true. But the
accident was prepared for by previous hard thinking. It would never
have occurred had not this thinking taken place. It is said that the
idea of gravitation came to Newton because an apple fell on his head.
Perhaps. But apples had been falling ever since there were apple trees,
and had probably been falling on men's heads ever since men had
acquired the habit of getting their heads in the way. The idea of the
steam engine is supposed to have come to Watt while observing a tea
kettle. But how many thousands before him had not seen steam coming
out of kettles? The idea of the pendulum for regulating time occurred
to Galileo from observing a swinging lantern in a cathedral. Think how
many others must have seen that lantern swinging! It is probable that
in all these cases the invention or idea had been prepared for, had
been all but formed, by downright hard thinking in previous periods of
concentration. All that was needed was the slightest unusual occurrence
to make the idea complete and conscious. The unusual occurrence, the
accident, which has so often received the credit for the invention or
the idea, merely made it come sooner, for with the thinking these men
did, it was bound to come eventually. . . .
Of course I really do not seriously expect anybody to concentrate at
every opportunity. I don't myself. I merely wanted to establish the
fact that it's the best thing. But every man, even the tired business
variety, should set aside at least half an hour a day, or three and
a half hours a week. I realize what a great hardship it is for some
people to devote one-forty-eighth of their time to such a useless
pastime as thinking. But if they will make the sacrifice for seven
consecutive days they will find themselves bearing up nobly at the end.
There is even a possibility that they may be encouraged to extend the
time.
V
PREJUDICE AND UNCERTAINTY
"From time to time there returns upon the cautious thinker, the
conclusion that, considered simply as a question of probabilities,
it is decidedly unlikely that his views upon any debatable topic are
correct. 'Here,' he reflects, 'are thousands around me holding on
this or that point opinions differing from mine—wholly in most cases;
partially in the rest. Each is as confident as I am of the truth of
his convictions. Many of them are possessed of great intelligence;
and, rank myself high as I may, I must admit that some are my
equals—perhaps my superiors. Yet, while every one of us is sure he is
right, unquestionably most of us are wrong. Why should not I be among
the mistaken? True, I cannot realize the likelihood that I am so. But
this proves nothing; for though the majority of us are necessarily
in error, we all labor under the inability to think we are in error.
Is it not then foolish thus to trust myself? When I look back into
the past, I find nations, sects, philosophers, cherishing beliefs in
science, morals, politics, and religion, which we decisively reject.
Yet they held them with a faith quite as strong as ours; nay—stronger,
if their intolerance of dissent is any criterion. Of what little worth,
therefore, seems this strength of my conviction that I am right? A like
warrant has been felt by men all the world through; and, in nine cases
out of ten, has proved a delusive warrant. Is it not then absurd in me
to put so much faith in my judgments?'"[9]
I trust the reader will pardon this second rather extended quotation
from Herbert Spencer, but the thought expressed must be kept in mind if
we are to approach our present subject in the proper spirit. . . .
Our subject is prejudice. Our object is to free ourselves as much as
possible from our own prejudices. But before we can get rid of a thing
it is first necessary to recognize that thing when we see it.
Prejudice is often confused with intolerance. They are not the same. A
man may be prejudiced and not intolerant. You may think that your alma
mater, your city, or your country, is the greatest in the world, for
little other reason than simply that it is _yours_. Your opinion is
prejudiced. But you may not protest if any other man thinks that _his_
alma mater, or _his_ city, or _his_ country, is the best in the world.
In fact you may not have much respect for him if he doesn't think so.
And your opinion is tolerant.
On the other hand, a man may be intolerant and not prejudiced. You may
decide, solely on the evidence and on grounds of pure reason, that
paper money—fiat money—is always a harmful form of currency, and you
may be justly wrathful against the man who advocates it. You may even
wish him suppressed. Yet you may be able to answer all his arguments.
But you fear that if he is allowed to air his views they will take hold
on minds as shallow as his own. You fear that once they have taken
root it will be difficult to dislodge them, and that in the meanwhile
they may do harm by being put into practice. You are intolerant. But
you are not prejudiced. It is well to remember this distinction when
accusations of prejudice are flying through the ozone.
One thing more must be kept in mind. Prejudice has less connection with
truth and falsity than is generally supposed. The fact that a man is
unprejudiced does not make his opinion right. And the fact that a man
is prejudiced does not necessarily make his opinion wrong; though it
must be admitted that if it is right it will be so only by accident.
It is often thought that prejudice can be immediately recognized. Locke
says, "Every one is forward to complain of the prejudices that mislead
other men or parties, as if he were free and had none of his own.
. . . This is the mote which every one sees in his brother's eye, but
never regards the beam in his own."[10] However, slight consideration
will convince us that because one man accuses another of prejudice, it
does not follow that the accused is guilty. The general practice is to
accuse of prejudice any one whose views happen to differ from our own.
Let us consider a formal dictionary definition of prejudice: "Judgment
formed without due examination; opinion adverse to anything, without
just grounds or sufficient knowledge." This is not altogether
satisfactory. A man may form a judgment without sufficient knowledge
and still be unprejudiced. He may be perfectly open minded and willing
to change his opinion if other evidence is adduced. But even if the
formation of a judgment without sufficient knowledge is prejudice,
it is often justified. At all events, every one will agree that the
foregoing definition helps us little in discovering our own prejudices.
All of us, for instance, believe our judgment on any given question
has been formed with due examination, each being his own judge of what
constitutes "due."
It is difficult to find any satisfactory definition. Perhaps the best
I can do is to point out various specific forms of prejudice and their
causes. The first form of prejudice I shall name consists in a love
for, and a desire to hold, some opinion. We may roughly ascribe this
desire to three causes:
(1) We desire an opinion to be right because we would be personally
benefited if it were. Promise a man that if he invests his money in
the Lookgood Gold Mine he will receive dividends of over 40 per cent.
annually, and he is in danger of becoming extremely gullible. He shirks
looking up the previous record of the promoters or directors because
he has a secret and undefined fear that if he does he will find their
pictures in the Rogues' Gallery. Advertise in a magazine that any thin
man can gain seven to fourteen pounds a week by drinking Fattilac and
you will receive hundreds of answers enclosing the fifty cents for a
trial bottle. Not one desperately slim man in ten will stop to ask
himself how the miracle can be performed. In fact, he will do his
worst to argue himself into the matter. He will tell himself that
the advertisement is in a reliable magazine, that the company would
not dare to make an assertion like that unless it could make good,
that . . .
But we may pass over the more obvious benefits, and proceed to those
causes of prejudice less consciously selfish or directly beneficial.
If an economist were to write a book attempting to prove that bankers
were really unnecessary and could be dispensed with, it is a rather
sure guess that a banker would not regard very highly the intellectual
powers of that economist. If he considered his arguments at all, it
would be only with the view of refuting them. In an even less conscious
way, a rich man is likely to oppose socialism or communism, not so much
because he has evidence of intrinsic worth against them, but because
he fears that if such systems of society were put into effect he would
lose most of his wealth. The man who has nothing is likely to look with
favor upon these schemes, because they offer him promise of better
things.
The mere fact that we are ignorant of a certain thing will prejudice us
against it, while knowledge of it will prepossess us in its favor. Ten
chances to one a person who has been taught Esperanto will favor the
adoption of an international language—and the adoption of Esperanto in
particular. Most of the remarks on the uselessness of the classics come
from those ignorant of them; while those who, in order to get a college
degree or for some like reason, have been forced to study Greek and
Latin, will generally exaggerate their importance. Most of the
opposition to simplified spelling is due to the fact that having taken
the time and toil to master our atrociously inconsistent spelling,
people have a vague fear that if a phonetic system were adopted,
children, the ignorant classes and persons of poor memories would be
able to spell just as well as they, without one quarter the trouble of
learning. Not that they are conscious of this childish and unworthy
attitude, for usually they are not, but the motive is operative none
the less.
Of course in all the foregoing cases of prejudice, as in those to
follow, none of the victims ever uses any of his real reasons in
argument, though he will bring forward nearly every other reason on
earth to justify his belief. And to do him justice, it must be admitted
that he is often unaware of the true cause of his inclination to one
side rather than another.
Though it is less directly selfish, the patriotic bias may fairly be
classed with the prejudices we have just been considering. At this
time the most stupendous war of all history is raging. But I know of
no German or Austrian or Turk or Bulgarian who has so far admitted
that the British or the French or the Russians or the Italians or
the Belgians or the Servians or the Montenegrins or the Japanese can
by any possibility have right on their side, nor do I know of any
Japanese or Montenegrin or Servian or Belgian or Italian or Russian
or Frenchman or Englishman who believes that the Bulgarians or the
Turks or the Austrians or the Germans are in the right. Philosophers
and men of science are no exception; Münsterberg, Eucken and Haeckel
write publicly in favor of Germany and fifty of England's foremost
authors unanimously sign a pronunciamento in support of their native
country—yet nobody is surprised.
(2) Another reason why we desire an opinion to be right is because we
already happen to hold it. As one writer expresses it, "We often form
our opinions on the slightest evidence, yet we are inclined to cling to
them with grim tenacity." There are two reasons for this.
When we have formed an opinion on anything, the chances are that we
have communicated it to some one, and have thereby committed ourselves
to that side. Now to reverse an opinion is to confess that we were
previously wrong. To reverse an opinion is to lay ourselves open to the
charge of inconsistency. To be inconsistent—to admit that our judgments
are human and fallible—this is the last thing we can ever think of.
"Consistency," said Emerson, "is the hobgoblin of little minds." And
if by this he meant consistency in the sense of refusing to change
opinions already formed, we must agree with him.
The hypothesis maker has a specific form of this fear of inconsistency.
This type of theorist makes a supposition to account for certain facts.
When he meets with certain allied facts for which the supposition
apparently does not account, he either ignores said facts, or cuts and
trims them, or bullies them into his theory. Hypotheses _per se_ have
never done any harm. In fact they are indispensable in all thought,
especially as an aid to observation. But it is the desire to prove an
hypothesis correct, simply because it is _our_ hypothesis, or because
it is a fascinating hypothesis, which has done harm. Darwin says that
he had made it a habit "whenever a published fact, a new observation
or thought came across me, which was opposed to my general results, to
make a memorandum of it without fail and at once; for I had found by
experience that such facts and thoughts were far more apt to escape
from the memory than favorable ones."
The second reason for desiring to cling to an opinion because we
already hold it is one which could probably best be explained by
physiological psychology and a study of the brain. We feel almost
a physical pain when a tenet we have long cherished is torn up and
exposed. The longer we hold an opinion, the harder it is for us to
get rid of it. In this respect it is similar to habit. Nor is the
comparison an analogy merely. An opinion is a habit of thought. It
has the same basis in the brain, and is subject to the same laws, as
a habit of action. It is well known that the opinions of a man over
forty are pretty well set. The older a man grows, the harder it is for
him to change an opinion—or for others to change it for him.
The side of a controversy we see first is usually the side we see
last. This is because the arguments we meet do not have to shake up
or dislodge anything in our brain (unless we are very critical, and
we generally aren't). But once let an opinion gain entrance, and any
opinion contrary to it will have to dislodge the old one before it can
find a place for itself.
And as Mark Twain has remarked, "When even the brightest mind in our
world has been trained from childhood in a superstition of any kind,
it will never be possible for that mind, in its maturity, to examine
sincerely, dispassionately, and conscientiously any evidence or any
circumstance which shall seem to cast a doubt upon the validity of
that superstition." Of course Mark Twain was wrong. Of course we are
The Reasoning Race, as he cynically intimates we are not. To religion,
for instance, the most important question which can engage our
understanding, each of us always gives independent thought. It is a
mere accident, of course, that almost all of the 400,000,000 Chinamen
are Buddhists. It is a mere accident that the overwhelming mass of East
Indians are Brahmans. It is only by chance that practically all Turks,
Persians and Arabians are Mohammedans. And it merely happened to happen
that England is Protestant and Ireland is Catholic. . . . But it is
unsafe to bring this question of religion too near home.
We now come to our third cause of desire:
(3) We desire an opinion to be wrong because we would be forced to
change other opinions if it were not; or we desire an opinion to be
right because then we would be able to retain our other opinions.
This is a most widespread form of prejudice. But I believe it is,
fortunately, the most defensible. Its defensibility, however, depends
mainly on the opinions we fear to change. These we may divide into two
kinds:
(a) Those which have been formed without thought; borrowed opinions,
etc. The greatest opposition to the theory of evolution came from
those conservative Christians who saw that it undermined any literal
interpretation of Genesis. If these Christians had investigated the
sources of that book, had considered its probable authority, had given
thought to the possibility of inspired writing, and had finally decided
in favor of the Biblical narrative; then—right or not—their opposition
to Darwin's theory would have been free at least from this sort of
prejudice. But most of this opposition had come from persons who had
not thought of Genesis, but had accepted it from the first, because it
had been dogmatically hammered into their heads since childhood. Hence
it was prejudice, pure and simple.
(b) The second kind of opinions we fear to change are those resting
mainly upon evidence. William James gives an example:
"Why do so few 'scientists' even look at the evidence for telepathy,
so-called? Because they think, as a leading biologist, now dead, once
said to me, that even if such a thing were true, scientists ought to
band together to keep it suppressed and concealed. It would undo the
uniformity of nature, and all sorts of other things without which
scientists cannot carry on their pursuits."[11] Darwin writes that when
a youth he told Sedgwick the geologist of how a tropical Volute shell
had been found in a gravel pit near Shrewsbury. Sedgwick replied that
some one must have thrown it there, and added that if it were "really
imbedded there, it would be the greatest misfortune to geology, as it
would overthrow all that we know about the superficial deposits of the
Midland Counties"—which belonged to the glacial period.[12]
Some readers may object to calling the last case prejudice. They may
say that Sedgwick was perfectly justified. That, however, is not the
present question. Prejudice itself may sometimes be justified. But
Sedgwick tacitly admitted that he not only believed the shell had not
been imbedded, he actually _desired_ that it had not been. And our
desires always determine, to a great extent, the trouble we take to get
evidence, and the importance we attach to it after we have it.
Emerson's remark, that inconsistency is the hobgoblin of little minds,
is true in a double sense. For not only is it harmful to fear to
change an opinion which we have entertained, it is even harmful at
times to fear to hold simultaneously two opinions incongruous with
one another. If a thought springs up in your mind, and you come to
see after a time that it is inconsistent with another thought, do not
immediately try to throw out one or the other. Instead, think the new
thought out in all its bearings and implications, just as if you had
never had the first. Perhaps follow the same practice with the first
idea. By and by one will reveal its falsity and the other its truth.
Or more likely you will find that there was some truth in each idea,
and you will reconcile the two in a truth higher, deeper, or more
comprehensive.
* * * * *
I have set down these three cases of prejudice to help the reader in
recognizing the same or similar prejudices in himself. And the mere
recognition of prejudices as prejudices will do much toward their
elimination. But though we all strenuously maintain our anxiety to get
rid of prejudices, the real reason most of us have them is that we do
not want to get rid of them. We are all willing to get rid of prejudice
in the abstract. But when some one troubles himself to point out any
particular concrete prejudice of ours we defend it and cling to it
like a dog to his bone. The only way we can get rid of this desire
to cling to our prejudices, is thoroughly to convince ourselves of
the superiority of the truth; to leave not the slightest doubt in our
own minds as to the value of looking with perfect indifference on all
questions; to see that this is more advantageous than believing in
that opinion which would benefit us most if true, more important than
"being consistent," more to be cherished than the comfortable feeling
of certainty. When we really do desire to get rid of our prejudices we
will put ourselves on the path of getting rid of them. And not before
then.
One more prejudice has yet to be considered. This may be called the
prejudice of imitation. We agree with others, we adopt the same
opinions of the people around us, because we fear to disagree. We fear
to differ with them in thought in the same way that we fear to differ
with them in dress. In fact this parallel between style in thought
and style in clothing seems to hold throughout. Just as we fear to
look different from the people around us because we will be considered
freakish, so we fear to think differently because we know we will be
looked upon as "queer." If we have a number of such dissenting opinions
we will be regarded as anything from a mere crank to a fanatic or one
with a "screw loose." When our backs are turned people will wisely
point their index fingers to their temples and move them around in
little circles.
Our fear of freak opinions is only equalled by our dread of ideas
old-fashioned. A little while ago it was considered popular to laugh
at the suffragettes. And everybody laughed. Now it is getting to be
popular to laugh at the anti-suffragettes. A little while ago it was
considered quite _comme il faut_ to fear socialism. Now it is becoming
proper to remark, "There is really quite a good deal of truth in their
theories." And soon we shall doubtless all be out and out socialists.
Nor is the prejudice of imitation confined to the layman. If anything,
it is even more common among so-called "thinkers." I remember
quoting some remark of Spencer to an acquaintance, and getting this:
"Yes, but isn't Herbert Spencer's philosophy considered dead?" This
same acquaintance also informed me that John Stuart Mill had been
"superseded." He candidly admitted—in fact seemed rather proud of the
fact—that he had read practically nothing of either philosopher. I am
not trying to defend Spencer or John Stuart Mill, nor am I attempting
to bark at the heels of any of our present-day philosophers. But I am
willing to wager that most of these same people now so dithyrambic in
their praise of James, Bergson, Eucken and Russell will twenty-five
years hence be ashamed to mention those names, and will be devoting
themselves solely to Post-neofuturism, or whatever else happens to be
the passing fadosophy of the moment.
If this is the most prevalent form of prejudice it is also the most
difficult to get rid of. This requires moral courage. It requires the
rarest kind of moral courage. It requires just as much courage for a
man to state and defend an idea opposed to the one in fashion as it
would for a city man to dress coolly on a sweltering day, or for a
young society woman to attend a smart affair in one of last year's
gowns. The man who possesses this moral courage is blessed beyond
kings, but he must pay the fearful price of ridicule or contempt.
There is another form of this prejudice of imitation radically opposed
to this. Just as with fashions in clothes there are people who strive
to imitate others, so there are people who devote themselves entirely
to being "different." Their greatest fear is that they will be taken
for "one of the mob." They dress themselves as uniquely as possible
in order to acquire "individuality." We have these same people in
the realm of thought. They are in constant trepidation lest they say
something that everybody else says. They say things not for the sake
of truth but for humor or paradox. Their great delight is to affirm or
defend something "new" regardless of its truth; something deliciously
radical which will shock everybody else and startle even themselves.
The worst part of this is that these people gradually come to regard
their propositions as true, just as a liar finally comes to believe his
own lies.
The only cure for such a mental condition is a constant sincerity in
every opinion we advance. People are often led into the fault by a
motive not incommendable in itself—the desire for originality. But
they choose the wrong path to their goal. If you make originality and
radicalness your aim, you will attain neither truth nor originality.
But if you make truth your aim you will very likely get truth, and
originality will come of itself.
There are hundreds of prejudices, hundreds of forms of prejudice.
There is, for instance, the prejudice of conservatism, which manifests
itself in a vague fear that if the present order were changed in any
particular—if women were given the vote, if socialism were to triumph,
if a new filing system were to be installed at the office—all would be
lost. But I cannot deal adequately with all the forms of bias which
flock to mind.
The distinguishing mark of the great thinkers of the ages was their
comparative freedom from the prejudices of their time and community.
In order to avoid these prejudices one must be constantly and
uncompromisingly sounding his own opinions. Eternal vigilance is the
price of an open mind.
* * * * *
Prejudice is not the only danger which lies in wait for the would-be
thinker. In his very efforts to get rid of prejudice he is liable to
fall into an even greater intellectual sin. This sin is uncertainty.
As uncertainty and doubt are nearly synonymous, the reader will
probably be surprised at this statement because of the praise I have
hitherto accorded to the doubtful attitude. But the doubtful attitude,
necessary and praiseworthy as it is, should not be maintained always.
We think in order to have opinions. We have opinions in order to guide
action; in order to act upon them should occasion require. Herbert Spencer,
even after his remarks quoted at the beginning of this chapter, which
imply the need of extreme caution, adds, ". . . In daily life we are
constantly obliged to act out our inferences, trustless as they may
be— . . . in the house, in the office, in the street, there hourly
arise occasions on which we may not hesitate; seeing that if to act is
dangerous, never to act at all is fatal. . . ."
There are other reasons why we cannot afford to keep the doubtful
attitude. If our lives were interminable, if we had limitless time for
thinking, we could afford to remain in doubt indefinitely. But life is
fleeting. So if you have examined facts obtainable on such a question
as psychic phenomena, have kept your mind open for a certain time, and
have decided that communication with the dead is impossible, you are
justified in discontinuing to look for evidence on that question. Every
hour devoted to examining such evidence would be an hour taken away
from thought on some other subject, and the law of diminishing returns
is just as applicable in thinking as in economics.
Another trouble with the attitude of doubt is that when not properly
utilized it hinders rather than aids the acquisition of truth. This is
especially the case when it takes the form of _fear of prejudice_. If
guided by this fear, in our anxiety not to discriminate in favor of one
side of a question we are apt to discriminate in favor of the other.
In an attempt to give an opposing argument due consideration, we are
liable to give it undue consideration. Instead of removing prejudice
with reason we may be trying to balance one prejudice with a counter
prejudice. When a person disagrees with him, a very conscientious
thinker, fearing that he may be prejudiced, and in order to prove
himself broad-minded, will often say regarding an objection, "Well,
there may be something in that." Now your only excuse for ever saying,
"There may be something in that," will be as an attitude to assume in
experimenting or observing, or looking up material or arguments to find
whether there actually _is_ anything in it. Then, if you do not find
anything in it you are justified in saying so—and you ought to.
It is useless to stimulate doubt unless you intend, on grounds
of reason, to settle the doubt. _The doubtful attitude should be
maintained only so long as you are actively searching for evidence
bearing on a question._ Maintained at any other time or used in any
other way it means merely uncertainty, indefiniteness, vagueness, and
leads nowhere.
It is important that we be unprejudiced. It is even more important that
our views be definite. And if our definite views are wrong? . . . But
the words of Thomas Huxley on this subject cannot be improved:
"A great lawyer-statesman and philosopher of a former age—I mean
Francis Bacon—said that truth came out of error much more rapidly
than it came out of confusion. There is a wonderful truth in that
saying. Next to being right in this world, the best of all things
is to be clearly and definitely wrong, because you will come out
somewhere. If you go buzzing about between right and wrong, vibrating
and fluctuating, you come out nowhere; but if you are absolutely and
thoroughly and persistently wrong, you must, some of these days, have
the extreme good fortune of knocking your head against a fact, and that
sets you all straight again."[13]
When you find yourself fluctuating back and forth between two opinions
you might find it helpful to hold an internal debate. State to yourself
as strongly as possible the case for the affirmative, and then put
as convincingly as possible the case for the negative, adding a
refutation if necessary. You may even elaborate this by writing the
arguments for both sides in parallel columns. Of course you should
never use an argument which you can see on its face to be fallacious,
nor a statement which represents merely a prejudice and nothing more.
You should use only such arguments as you think a sincere debater would
conscientiously employ. By thus making your reasons articulate you will
often find that there is really no tenable case at all for one side,
and you will seldom fail to reach a definite conclusion. This method of
arriving at a decision may be voted childish and even artificial, but
nothing is to be despised which can render intellectual help.
One word more on this. There is a type of individual, most often met
with among writers, who fears to make a statement of his thought
definite, because he has a faint suspicion that it may be wrong.
He wishes to allow himself plenty of loopholes to slip out of an
intellectual position in case any one should attack it. Hence he never
says outright, "Such and such is the case." Instead, his talk or
writing is guarded on all sides by such expressions as "It is probable
that," "it is possible that," "the facts _seem_ to indicate that"; or
"such and such is _perhaps_ the case." Not satisfied with this he makes
his statement less positive by preceding it with an "I believe," or
worse yet, with an "_I am inclined_ to believe."
This is often done under the impression that it is something noble,
that it signifies broadmindedness, lack of dogmatism, and modesty.
It may. If it does, so much the worse for broadmindedness, lack of
dogmatism, and modesty. Never yield to the temptation to word your
thoughts in this manner. If you truly and firmly believe that "such
and such is the case" _say_ "such and such is the case"; not "it is
possible that such and such is the case," or "such and such is perhaps
the case," or "it is my belief that such and such is the case." People
will assume that it is your belief and not somebody else's.
Suppose you have made a positive statement. And suppose you later find
it to be wrong? Well then, acknowledge that it is wrong. Acknowledge
that you have done something human; that you have done something
which every man before you has done; that you have made a mistake. I
realize such a confession is hard. It is the severest blow you can
deal to yourself, and few people will think the better of you for
doing it. Most of them will say, "See, he acknowledges himself that
he was wrong." And with these people, both you and your theory will
be far more discredited than if you had clung to it until the end of
your life, no matter how obviously, how flagrantly, it opposed itself
to facts. But a few people will appreciate your sacrifice. A few
people will admire your bigness. And you will grow. You will grow as a
thinker. What is more, you will grow morally. And the time will come
when you will have fewer and fewer occasions to reverse yourself, for
you will learn to think longer before you advocate an opinion.
* * * * *
The question of the avoidance of prejudice and the necessity of
breaking off doubt, remains still unsettled. There can be no doubt
that the two desiderata conflict; that to cut off doubt, or even to
refrain from stimulating it, is to encourage by so much the dominance
of prejudice.
The answer to this question will depend entirely upon the particular
problem under consideration. No rules can be laid down. Everything will
depend upon the importance of the question, upon the possibility or
frequency of occasions when we may be called to act upon the answer,
and upon the way in which the answer will affect conduct when we do act
upon it. Where the importance of the question is trifling, it would be
foolish to sound our prejudices too deeply, or to go to any elaborate
pains to collect evidence. Where immediate, unhesitating action is
required, remaining in doubt might be fatal. Any decision would be
better than no decision. When the importance of the question is vital,
or when the possibility of having to act on the answer is distant,
we can afford to preserve our doubts, to suspend final judgment, for
years—perhaps during our entire life; and we should spare no pains to
investigate fully all that relates to the question.
Just how much trouble to take, how long to keep alive the attitude
of doubt in any particular question, will have to be decided by the
individual. His own judgment must be the sole criterion.
VI
DEBATE AND CONVERSATION
The mind engages in many activities which have power either for evil or
good. Just what influence they will exert depends on how we use them.
One of the most important of these activities is debate.
Debate brings in that unequaled form of incentive for all action which
psychologists call "social pressure" and which here means nothing more
than the desire to excel a fellow-being in some line of endeavor. When
debating we concentrate, and we do so without conscious effort. We are
too interested in defeating our opponent to wander from the subject.
We are forced to think rapidly. Not least of all, we are compelled to
think articulately.
But with all its advantages, debate is one of the most potent sources
of prejudice. In the heat of controversy, we adopt any and every
argument that comes handy. Every statement of our opponent is
considered only in the light of how it can be refuted. We are willing
to use almost any objection against him, so long as we believe he will
see no flaw in it. It is of utmost importance that we find how to avoid
these pitfalls.
The first thing we must do is to adopt a complete change of attitude
toward an opponent's arguments. Whenever we meet with a fact which
we would not like to cite in a debate, because, to put it mildly, it
would not help our side, we should carefully investigate that fact.
We should consider whether if true it changes the aspect of things.
We should get rid of the idea that in order to vindicate our side we
must answer every contention our opponent advances. For this opponent
of ours will very likely be a man in full possession of his senses; at
least some of his arguments will be rational. When they are, we should
be willing to acknowledge it. Their truth does not necessarily make his
side right. His arguments may be irrelevant; they may be outbalanced by
some other reason or reasons. Attempts to prove too much are liable to
put us into the position of the lawyer whose client is alleged to have
been sued for putting a hole in a borrowed umbrella. The lawyer proved
first, that his client did not borrow the umbrella; second, that there
was a hole in it when he got it; third, that there was nothing the
matter with it when he returned it.
After you have had a friendly argument with an acquaintance, you
take leave either with the satisfaction that you have bested him, or
with a vague consciousness that though you were right, he was just a
trifle more skillful at bringing forward arguments. But having this
satisfaction or dissatisfaction, you seldom think any more of the
matter until the next time you meet him. Now this practice is helpful
neither to your debating nor your thinking. After you have taken
leave of your acquaintance, and are left to the quietude of your own
thoughts, you should mentally run over your controversy. You should
dispassionately consider the bearing and weight of his arguments; and
then, reviewing your own, ask yourself which were valid and relevant
and which were not. If you find you have used a sophism you should
resolve never to use it again, even though your opponent may have
been unable to answer it. The question of morals aside, this is poor
practice if you ever hope to become a thinker. In the end, it will tell
against you even as a debater.
You can use your debates for constructive material as well as for
criticism. After a controversy you can go over the arguments of your
opponent which you could not refute, or refuted but lamely, and think
of the answers you might have given. Of course you should take care
that these answers are not sophistical. The question will very likely
come up again; if not with the same friend, then with another, and when
it does you will find yourself prepared.
But the best debater, or at least he who gets the most from debating,
is the man who looks for evidence and thinks not for debate, but to
obtain a correct conclusion. After he has reached a conclusion in
this manner, he does not advance every possible reason to support it.
He does not even utilize the reasons on which others base a similar
belief, if he does not himself accept these reasons. He states merely
that evidence and those reasons which have led him to accept his
conclusion, nothing more.
While we are considering debate, I may well say a few words about
conversation in general. We do not and cannot always argue with our
friends, even though we scorn the dictums of formal etiquette. But
because we do not argue, it does not follow that we gain nothing. In
fact, ordinary conversation has numerous advantages over debate, not
the least of which is the comparative freedom it gives from prejudice.
But the value of conversation depends both on what we talk about,
and whom we talk with. Too much of our talk is on petty matters, is
uneducative. And even if we converse on worthy topics, it will profit
us little if we do not talk with worthy people. When we commune with a
dull mind, our thoughts are forced, in some degree, down to the level
of that mind. But dull people do not usually talk of weighty matters,
nor do active intellects dwell long on trifles. Therefore if we
rightly choose our companion we can conscientiously leave our path of
conversation to choose itself.
One aspect of conversation remains to be treated—its corrective
power. "There is a sort of mental exposure in talking to a companion;
we drag our thoughts out of their hiding-places, naked as it were,
and occasionally we are not a little startled at the exhibition.
Unexpressed ideas are often carefully cherished until, placed before
other eyes as well as our own, we see them as they really are."[14]
VII
THINKING AND READING
Up to now I have dealt with thinking almost as if it could be carried
on without external aid. As with cautionary and constructive thought, I
have perhaps been led to do this because of a reaction from the usual
insistence upon reading as indispensable to mental improvement, and the
corresponding neglect of the need for independent thinking. Men thought
before there were books, and men can still think without reading, but
they cannot. . . . I was about to remark that they could not read
without thinking, but on second thought I am inclined to doubt it.
However, we have clung to the natural order, for we first considered
unaided thinking, then the help given by conversation and dispute, and
finally we are to examine the aid rendered by reading. There can be no
doubt that this order follows the development of thought both in the
individual and in the human race.
While no complaint can be made of lack of quantity in what has been
written on reading, most of it has not taken up the subject from the
proper standpoint; still less has dealt with it in the right manner.
There has been counsel galore urging people to read; and recently there
has been a great deal of advice on what to read. But comparatively very
little has been said on _how_ to read. At one time reading was regarded
as an untainted virtue; later it was seen that it did us no good unless we
read good books, and now there is a dawning consciousness that even if
we read good books they will benefit us little unless we read them in
the right way.
But even where this consciousness has been felt, little attempt has
been made to solve the problem systematically. Leisurely discourses,
pretty aphorisms, and dogmatic rules have been the forms in which the
question has been dealt with. Such conflicting adages as "A good book
should be read over and over again"; and "The art of reading is the art
of skipping," are not very serviceable. The necessity of some sort of
orderly treatment is evident.
Before we consider how to read, some queer person may ask us to put
the previous question, "Should we read at all?" Now the value of
reading has, in times past, been seriously doubted by thinkers and
non-thinkers. The philosopher Democritus put out his eyes so that,
ceasing to read, he might think. We are not going to follow his
example. But we can readily sympathize with him when we think of the
many learned men who have read themselves into dreamy stupidity; men
who know what everybody else thought, but who never have any thoughts
of their own. We must admit that the arguments of these cranks are at
least good medicine for the prevalent belief that the more a man reads
the more he will know and the better thinker he will become.
Learning to think by reading is like learning to draw by tracing.
In each case we make the work of another man our basis, instead of
observing directly from Nature. The practice has its value, it is true;
but no man ever became a great artist by tracing, and no man will ever
become a great thinker by reading. It can never become a substitute
for thought. At best, as John Locke says, "Reading furnishes the mind
only with materials of knowledge, it is thinking makes what we read
ours."[15]
Our problem may be divided into two parts: (1) What ratio should our
reading bear to independent thinking, and (2) how should we read when
we do read?
It may be thought that we can learn something about the first question
by investigating the practice of great thinkers. But the outcome
of such an investigation is likely to be disappointment. Kant, for
instance, was an omnivorous reader; so were Huxley and Sir William
Hamilton; and outside the circle of philosophers, men as unlike as
Gibbon, Macaulay, Milton and Thomas A. Edison. On the other hand,
Spencer seldom ever read, and Hobbes is famous for his remark that if
he had read as much as other men he would have known as little. Auguste
Comte was unique in that he read copiously until he conceived his
Positive Philosophy, and then hardly at all until the end of his life.
Even were it found that most great thinkers adhered to nearly the same
practice, it would prove little; for how could we tell whether they
were good thinkers on account of, or in spite of it?
We can agree a priori, however, with the remark of Schopenhauer that
"the safest way to have no thoughts of one's own is to take up a book
every moment one has nothing else to do." And we may agree with him
further: "A man should read only when his thoughts stagnate at their
source, which will happen often enough even with the best of minds. On
the other hand, to take up a book for the purpose of scaring away one's
own original thoughts is a sin against the Holy Spirit. It is like
running away from Nature to look at a museum of dried plants, or gaze
at a landscape in copper-plate."[16]
It would be folly to lay down any fixed mathematical ratio between
the time we should devote to reading and the time we should give to
thinking. But one hour given to reading plus one hour given to thinking
would be certainly more beneficial than two hours devoted entirely to
reading.
You can find quite a number of serious-minded men who put by a certain
period each day for reading. But how many of them put by any time at
all for thinking? It would be unjust to say they do not think. But at
best their thinking is merely accidental—and apparently considered so.
Surely it is as important that we lay aside a definite period each day
for thinking as it is that we lay aside some time for reading. But how
much this time should be and whether it should bear any specific ratio
to the time given to reading can best be decided after a consideration
of the problem of how to read.
This problem has unfortunately been much misconceived. Those who have
laid stress on the maxim, "A good book should be read over and over
again," have done so in the belief that this is the best way to get
the most out of a particular book. But the object of reading is not to
get the best out of any one book, but out of reading in general. A
realization of this end will change our problem somewhat.
It will bring us to a consideration, for example, of the law of
diminishing returns. While the more we re-read a book the more we get
out of it, it must be remembered that with a few possible exceptions,
every time we re-read it we add less to our knowledge than we did
the previous time. This means that we can usually make much faster
progress by reading other books, in which case we do not merely read
over what we already know for the most part. Whether re-reading is ever
justified, and when, is a question which will be considered a little
later.
The law of diminishing returns applies to an entire subject as well
as to a single book. That is to say, past a certain point, every book
we read on a particular subject, while it will probably add to our
knowledge, will not yield as much return as a book of equal merit on
another subject, new to us.
The problem of reading asks how we can acquire the greatest number of
ideas, and how we can arrive at truth rather than the verdict of an
author. It assumes a limited time and asks how we can use that time
most profitably. Not least of all, it asks how we can best combine our
reading with original thought.
From the remarks already made, it is evident that we cannot prescribe
any one method for dealing with all books. Even works of similar nature
and merit will be treated in different ways, depending on the order
in which we read them, and like conditions. The mastery of any book
will not be an end in itself. It will be subordinated to the larger
end of obtaining the best from reading as a whole. But for the sake of
clearness, I shall for the present consider our end as the mastery of
some particular subject, and shall indicate a plan of reading to best
serve that end. Needful qualifications will come later.
I shall first outline a typical plan of study, and then review and
explain it in detail.
Assuming you have chosen a subject, your first step should be to do a
little unaided thinking on it. Next I would advise the selection of a
comprehensive text book. This should be read critically and written
note made of the problems taken up which you do not believe have
been adequately treated, or the solutions of which are in any way
unsatisfactory. These you should think out for yourself. A second book
may in some cases be read in the same thorough manner as this first
one, and the problems recorded in the same way. After that all books
on that subject may be read "hop, skip and jump" fashion, for the new
problems or solutions they suggest.
I do not expect the foregoing plan to be strictly adhered to, for the
nature of the subject studied will make certain changes necessary.
However, it demands more detailed explanation and perhaps defense.
Let us take up the first step advised—giving a little unaided thought
to the subject. My only reason for advising "a little" thinking, is
that I know if I asked more the reader would probably do nothing at
all. Indeed many readers will fail to see the necessity of thinking
about a subject before studying it. Many may even question the
possibility of doing so. "How is a man to think about a subject on
which he knows nothing?" you ask. Let us, however, consider.
The very fact that you want to study a subject implies that the
phenomena with which it deals are not clear to you. You desire to
study economics, for instance, because you feel that you do not
understand everything you should about the production, distribution and
consumption of wealth. In other words, something about these phenomena
puzzles you—you have some unsolved problems. Very well. These problems
are your materials. Try to solve them.
"But how can I solve them when I know nothing of economics?"
Kindly consider what a science is. A science is nothing more than the
organized solution of a number of related problems. These problems and
their answers have been changed and added to the ages through. But when
the science first started there was no literature on it. It originated
from the attempts of men to solve those problems which spontaneously
occurred to them. Before they started thinking these men knew nothing
of the science. The men who came after them availed themselves of the
thoughts of those before, and added to these. The whole process has
been one of thought added to thought. Yet, in spite of this, people
still cling to the belief, even if they do not openly avow it, that
we never can make any headway by thinking, but that in order to be
educated, or cultured, or to have any knowledge, we must be reading,
reading, reading.[17]
I almost blush for this elaborate defense. Everybody will admit the
necessity for thinking—in the abstract. But how do we regard it in the
concrete? When we see a man reading a good book, we think of him as
educating himself. When we perceive a man without a book, even though
we may happen to know that he is engaged in reflection, we do not look
upon him as educating himself, though we may regard him as intelligent.
In short, our habitual idea of thought is that it is a process of
reviewing what we already know, but not of adding anything to our
knowledge. Of course no one would openly avow this opinion, but it is
the common acting belief none the less. The objections to thought are
inarticulate and half-conscious. I am trying to make them articulate in
order to answer them.
To return, then, to the remark that we should use as materials for
unaided thinking the problems which occur spontaneously. You will
find when you begin to solve these that other problems will arise,
and that up to a certain point, the deeper you go into a subject—the
more critical you are in your thinking—the more problems will occur.
Perhaps it would be too much to ask you to solve all of these. Yet
even a little of this preliminary thinking will be of immense help
in reading. It will give you a far better sense of the importance of
different problems which a book considers, and you will not judge
their significance merely by the space it devotes to them. An author
may indeed bring before us certain problems which had not hitherto
occurred, and stimulate in us a sense of their importance. But this
artificial stimulation can never take the place of natural and
spontaneous wonder. Once we have obtained a solution of a problem
which has arisen spontaneously and from within, we do not easily forget
it. Our independent thinking, too, will have given us an idea of the
difficulties presented by problems, and will make us more critical in
reading and more appreciative of the solutions of an author. Not least
of all, if we read first we are extremely liable to fall into the
routine and traditional ways of considering a subject, whereas if we
first think, we are more likely in our unsophistication to hit upon an
idea of real originality.
One last objection to thinking before reading remains. Schopenhauer has
answered it in his forcible manner:
"A man may have discovered some portion of truth or wisdom after
spending a great deal of time and trouble in thinking it over for
himself, adding thought to thought; and it may sometimes happen that
he could have found it all ready to hand in a book and spared himself
the trouble. But even so it is a hundred times more valuable, for he
has acquired it by thinking it out for himself. For it is only when
we gain our knowledge in this way that it enters as an integral part,
a living member, into the whole system of our thought; that it stands
in complete and firm relation with what we know, that it is understood
with all that underlies it and follows from it, that it wears the
color, the precise shade, the distinguishing mark, of our own way of
thinking, that it comes exactly at the right time, just as we felt the
need for it; that it stands fast and cannot be forgotten."[18]
Despite the strong case that Schopenhauer makes out, I am satisfied
with my former advice—that a little thinking will suffice. Not only
because, as already said, the reader will probably do nothing if
advised to do more; but because after a certain amount of thinking
has been done, it is more profitable to avail ourselves of the wisdom
of the ages, stored in books, and to do our thinking after we have
acquired the main outlines of this wisdom. For when we think a problem
out, with the feeling that even after we have obtained a solution we
shall probably find it in a book later, we have not the incentive that
we have when we feel we have covered most of the old ground and that
thinking may bring us into new territory.
The practice of Gibbon remains to be considered: "After glancing my
eye over the design and order of a new book, I suspended the perusal
until I had finished the task of self-examination; till I had revolved
in a solitary walk all that I knew or believed, or had thought on the
subject of the whole work, or of some particular chapter. I was then
qualified to discern how much the author added to my original stock,
and I was sometimes satisfied by the agreement, sometimes armed by the
opposition of our ideas."[19]
The trouble with this method is that it is not critical enough;
that is, critical in the proper sense. It almost amounts to making
sure what your prejudices are, and then taking care to use them as
spectacles through which to read. We always do judge a book more or
less by our previous prejudices and opinions. We cannot help it. But
our justification lies in the manner we have obtained these opinions;
whether we have infected them from our environment, or have held
them because we wanted them to be true, or have arrived at them from
substantial evidence and sound reasoning. If Gibbon had taken a
critical attitude toward his former knowledge and opinions to make sure
they were correct, and had then applied them to his reading, his course
would have been more justifiable and profitable.
In certain subjects, however, Gibbon's is the only method which can
with profit be used. In the study of geography, grammar, a foreign
language, or the facts of history, it is well, before reading, simply
to review what we already know. Here we cannot be critical because
there is really nothing to reason about. Whether George Washington
ought to have crossed the Delaware, whether "shall" and "will" ought
to be used as they are in English, whether the verb "avoir" ought to
be parsed as it is, or whether Hoboken ought to be in New Jersey, are
questions which might reasonably be asked, but which would be needless,
because for the purposes we would most likely have in mind in reading
such facts it would be sufficient to know that these things are so.
We might include mathematics among the subjects to be treated in this
fashion. Though it is a rational science, there is such unanimity
regarding its propositions that the critical attitude is almost a waste
of mental energy. In mathematics, to understand is to agree.
* * * * *
We come to the second step outlined in our plan of study—the selection
of a comprehensive text book.
Every large subject has gathered about it a vast literature, more
than one man can ever hope to cover completely. This literature may
be said to consist wholly of two things: information as to facts, and
opinions on those facts. In other words, any book you read on that
subject will probably contain some facts new to you and will contain
also the thoughts and reflections of the author. Of course you should
endeavor to learn as many facts as possible. But it is not necessary
to know all that has been thought about the subject. You are supposed
to have a mind of your own; you are supposed to do some thinking for
yourself. But though it is not necessary that you know all that has
been thought, it is well that you know at least part of what has been
thought, and so far as possible, the best part. For as just pointed
out, if you attempt to think out an entire subject for yourself you
will expend great energy and time in arriving at conclusions which
have probably already been arrived at during the generations that the
subject has had its being. Therefore you should endeavor to get, in as
short a time as possible, the greatest number of important facts and
the main outlines of the best that has been thought.
So if you sincerely intend to master any subject, the best way to begin
is by the selection of the most comprehensive and authoritative work
you can secure.
The man who desires to study any subject is commonly advised to read
first a small "introductory" book, then a larger one, and finally the
largest and most authoritative volumes. The trouble with this practice
is that you will have to study each book in turn. If you take up the
most thorough book first you need merely glance through the smaller
books, for the chances are that they will contain little that is new
to you, unless they happen to be more recent. The only justification
for reading a small book first is that the larger books are apt to be
technical and to assume a certain knowledge of the subject. However,
_the_ authoritative treatise or treatises on a subject usually refer
far less to the smaller books than the smaller books do to them. Any
greater depth of thought which the larger works may possess can be made
up for by increased concentration on the part of the reader. Of course
if a man does not intend to master a subject thoroughly, but only to
get some idea of its broad outlines, the case is different. He would
then be justified in reading a small work.
Another advantage of beginning a subject with the study of a
comprehensive and authoritative volume or main textbook, is that you
avoid confusion. The man who has mastered one foreign language, say
French, will always find his knowledge of great benefit to him for the
study of another language, such as Spanish. But any one who has begun
at about the same time the study of two or more foreign languages must
remember his confusion, and how his vague knowledge of one tongue
hindered him in the acquisition of the other.
So with reading. When we peruse a book in the usual casual way we do
not master it. And when we read a book on the same subject immediately
after it, the different viewpoint is liable to cause bewilderment and
make us worse off than before the second book was started. We do not
like to devote a lot of time to one book, but would rather run through
several books in the same time, believing that we thereby gain more
ideas. We are just as mistaken as a beginner in swimming who would
attempt to learn several strokes before having mastered one well enough
to keep afloat.
A main text being of such importance, its choice involves
responsibility. But how are we to know whether one book is superior to
another until we have read both? And if we are confronted with this
difficulty even when familiar with a subject, how much greater must be
our task when we know nothing of it? These difficulties do not appear
so formidable in practice.
Failing other means, the best method of selecting a main text is by
reputation. If we do not even know what book has the best reputation,
we can easily find out by referring to so acknowledged an authority as
the Encyclopedia Britannica, and consulting the bibliography in the
article on the subject.
But reputation does not furnish the only means of selecting. By merely
glancing through a book, stopping here and there to read entire
paragraphs—a task of ten or fifteen minutes—we can form an estimate
which later reading will usually justify. For an author betrays himself
in every line he writes; every slightest remark reveals in some manner
the breadth and depth of his thought. But just how well we can judge
a book in this way depends both on our own ability and on the time we
devote to glancing through it.
A few general requirements in a main text have been implied in stating
the purpose of having one. The book with the best reputation is not
necessarily the best for you. In economics Adam Smith's _Wealth of
Nations_, though easily the most famous book on the subject, would
hardly be suitable as a main text because it has been superseded.
But though recency is always an asset, this does not mean that the
most recent book is always or even usually the best. The common idea,
though it is usually but vaguely formulated, is that the writer of the
more recent book has had all the previous books to draw upon, and has
therefore been able to extract the best from all of them and add to
this his own thoughts. The fallacy of this has been pointed out in the
trenchant language of Schopenhauer:
"The writer of the new book often does not understand the old books
thoroughly, and yet he is unwilling to take their exact words; so he
bungles them and says in his own bad way that which has been said
very much better and more clearly by the old writers, who wrote from
their own lively knowledge of the subject. The new writer frequently
omits the best things they say, their most striking illustrations,
their happiest remarks, because he does not see their value or feel
how pregnant they are. The only thing that appeals to him is what is
shallow and insipid."
The value of recency will depend on the subject; while it would be
essential in aviation, its importance would be far less in ethics.
It is not well to take as your main text a book presenting a number of
different and conflicting viewpoints. One purpose of a main text is to
avoid confusion. Do not start the study of psychology, for instance,
by reading a history of the subject giving the views of different
thinkers. Begin by taking up one definite system.
Finally, be sure to select a book covering the entire field. Do not,
for instance, take a volume on the tariff to begin the study of
economics.
* * * * *
We pass now to the third step advised—to read critically. By this I
do not mean that we should read skeptically or to confute everything
an author says. I mean simply that we should resist our natural
tendency to have our minds swayed by every opinion he expresses. I mean
that before allowing an idea to slip into our minds we should first
challenge its truth; we should examine its evidence.
Perhaps you have listened to a debate. After the affirmative had
made his impassioned plea you were all for the affirmative. When the
negative came forward and presented his case, you found yourself
favoring _him_. . . . Why do debaters always try to get the last say?
Why is it that in a formal debate, the affirmative, which usually has
the last say, is most often the side that wins? I could state the
reason bluntly. But if I did the honorable judges of such controversies
would not feel that their critical powers had been complimented.
The tendency to absorb the opinions of others manifests itself to just
as great a degree in reading. I have held debating up as an example
merely because it brings out more strongly, more strikingly, the
effects of such a tendency. But how can it be resisted?
If we have thought out a subject thoroughly, if we have acquired a
stock of clear and definite ideas on it, criticism in reading will
largely take care of itself. By dint of our own thinking we will know
what is relevant and what is not; we shall be able to judge the truth
and importance of the various arguments offered. The chances are,
however, that we shall not have given much previous thinking to the
subject, and that even if we have we shall not have gone as far as the
author, who doubtless availed himself of other books. Consequently
certain problems which he takes up will not even have occurred to us,
and hence will not have received our consideration.
But where our thinking has not helped us, and even where it has, we
should look critically upon every statement of an author, instead of
lazily acquiescing in it. The difference between critical and ordinary
reading, is that in the former we look for objections, in the latter
we wait until they happen to occur to us. Even then we do not hold our
objections steadily in mind; we are as likely as not to accept later
arguments based upon one we have previously objected to. In order to
avoid this perhaps the best we can do when we object to any statement
or believe we have found a fallacy, is to make written note of it
in the margin. To some extent this will prevent forgetting it. Too
few or too many marginal notes are both extremes to be shunned. If
we make too many we shall be apt to lose a true sense of proportion
and fail to distinguish essential criticisms from nonessentials. The
only way we can keep clear of this extreme is to avoid quibbling and
hair-splitting, making only such written criticisms as we feel we could
unblushingly defend before the author himself. Often however we may
feel that a statement is untrue, or that an argument is fallacious, and
yet be unable to point out just where or how it is so. In this case
perhaps the best plan would be merely to put a question mark in the
margin in order to remind ourselves that the statement has not been
fully accepted.
We ought to make sure what we object to because it is a peculiarity of
the human mind that it does not require evidence for a statement before
accepting it; it generally accepts any statement which has no evidence
against it. Unless we reject a statement and know why we have done so,
it is liable to insinuate itself in our reasoning, and the longer it
remains the more difficult it is to get rid of it. This is why it is so
important to avoid as many pitfalls as possible at the beginning of a
subject.
The reader may find that even when he reads critically he will accept
a certain statement at the time; and then perhaps much later, say a
month, an objection to that statement will occur to him, or he will see
that it at least ought to be qualified. For an explanation of this we
must go back to an analysis of the thinking process. Every idea which
enters the mind, either from independent thinking or from reading, is
accepted as true if it is in full conformity with our past experience
_as we remember it_. In all thinking or reading, the new idea arouses
associates on its entrance. An hypothesis or principle, for instance,
arouses in our minds past experiences of particular instances. If all
these conform it is accepted. But in ordinary uncritical reading or
thinking, only a few associates are aroused. In critical reading, we
look for as many associates as possible, especially those which do not
conform. It is this purpose kept in mind which helps to recall and
awaken these associates. No matter how critical our attitude, however,
we cannot at any given time recall every relevant associate, though
later a "non-conforming" associate is likely to occur to us by pure
accident.
While you are criticising a book line by line, and after you have
finished reading it, you should note the importance and relevancy
of the arguments accepted and rejected. While an author may make a
statement with which you disagree, its truth or falsehood may not
affect the rest of what he has to say, or it may affect merely a few
corollaries drawn from it. In other cases the truth of his entire
conclusion may depend upon it. Again, an author may incontrovertibly
prove something—which is entirely without bearing on the subject. This
means that you should keep the precise question constantly before your
mind.
Often you will find an author making a statement which really amounts
to nothing more than a mere airing of his prejudices, or at best the
bare statement of a conclusion. If he says, "Socialism is the greatest
menace of our civilization," and lets it go at that, not telling how
or why, you should mentally note this as a statement, as a statement
merely; you should not allow it to influence your opinion either way.
Finally, remember that though you may be able to refute every argument
an author brings forward in support of a conclusion, his conclusion may
still be correct. It is possible for a man to be right for the wrong
reasons.
While I believe all the foregoing suggestions are judicious and
necessary, I am willing to admit that their wisdom may reasonably
be doubted. But there is one practice about which there can be no
controversy—that of making sure you thoroughly understand every idea of
an author. While most people will not verbally contradict this advice,
their actual practice may be a continual contradiction of it. They will
be in such haste to finish a book that they will not stop to make sure
they really understand the more difficult or obscure passages. Just
what they hope to gain it is difficult to say. If they think it is
wasting time to try to understand every idea, it is surely a greater
waste of time to read an idea without understanding it. To be sure,
the difficulty of understanding may be the fault of the author. It may
be due to his involved and muddled way of expressing himself. It may
be the vagueness of the idea itself. But if anything this is all the
greater reason why you should attempt to understand it. It is the only
way you can find whether or not the author himself really knew what
he was talking about. To understand thoroughly the thought of another
does not necessarily mean to sympathize with it; it does not mean to
ask how that other came by it. It means merely to substitute as far
as possible concrete mental images for the words he uses, and analyze
those images to discover to what extent they agree with facts.
The better to carry this out, you might follow another practice of
immense value. Whenever you are puzzled as to an author's meaning, or
whenever
you do not care to accept his solution of a problem but are undecided
as to what the solution is, or whenever you want to carry an idea
further than he has, or above all, whenever an original and important
relevant thought is suggested to you, you should take your eyes from
your book—shut it if necessary—and let your thinking flow on; give
it fair play, even if it takes an hour before your vein of suggested
thought exhausts itself. Of course this practice will prevent you from
finishing a book as soon as you otherwise would. And if finishing a
book be your aim, I have nothing to say. But if your end is to attain
true, sound knowledge, knowledge which you will retain; if your object
is to become a thinker, the practice will prove of unspeakable benefit.
It will not interfere with concentration. Remember your object is to
concentrate primarily on the subject, not on the book; you intend to
become a thinker, not an interpreter or a commentator or a disciple of
any author.
And there are two reasons why this thinking should not be put off
until after you have finished a book. The first and more important
is that after you have finished reading, most of the ideas will have
unrecallably dropped out of mind. The second is that when you are
undecided about the solution of a problem, you will often find later
arguments depending upon that solution. Unless its truth or falsity is
decided in your own mind you will not know how to deal with these later
arguments.
I have spoken of feeling that an argument is fallacious, and of being
unable to point out just where it is so. To cease reading for a while,
and to endeavor to make these inarticulate objections articulate,
is excellent practice for training analytic powers and developing
clearness of thought.
Another way of reading a book is what I may call the anticipating
method. Whenever a writer has started to explain something, or
whenever you see that he is about to, stop reading and try to think
out the explanation for yourself. Sometimes such thinking will
anticipate only a paragraph, at other times an entire chapter. School
and college text-books, and in fact formal text-books generally, often
contain lists of questions at the end of the chapters. Where you find
these, read them before you read the chapter, and where possible try
to answer them by your own thinking. This practice will make you
understand an explanation much more easily. If your thinking agrees
with the author's explanation it will give you self-confidence. It will
make you realize whether or not you understand an explanation. If you
were not able to think the thing out for yourself you will appreciate
the author's explanation. If your thinking disagrees with that of the
author you will have an opportunity to correct him—or be corrected. In
either case your opinion will rest on firmer grounds. Not least of all
you will be getting practice in self-thinking.
After reading and criticising a book, it is a good practice to study
one taking a different viewpoint, or written even in direct opposition.
You will doubtless find that it points out many fallacies and
controverts many statements in the first book, which you allowed to
pass unchallenged. Ask yourself what the trouble was. Was your attitude
too receptive? Did you swallow words without substituting clear mental
images? Did you fail to trace out the consequences of a statement? All
these questions will help you do better the next time.
Because of your ignorance of the facts, your failure to refute a
conclusion will sometimes not be your fault. But even here, though you
cannot contradict an author's statement of facts, you can criticise
conclusions drawn from those facts.
Take an instance. In making an inquiry into the causes of fatigue,
Professor Mosso of Turin selected two dogs as nearly alike as possible.
One he kept tied, and the other he exercised until it was thoroughly
tired. He then transfused blood of the tired dog into the veins of the
rested one, and produced in the latter every sign of fatigue. From this
he concluded that fatigue was due to certain poisons in the blood.
Now we cannot contradict the fact of this experiment: that the rested
animal was made to look tired. But we can question the inference
drawn. The truth of the conclusion aside, was the evidence sufficient
to establish it? Might not, for instance, similar results have been
produced upon the rested dog if blood of another rested dog had been
transfused into it? Had Mosso made such an experiment? Other objections
should easily occur to one.
Questions which admit of treatment by studying both sides are
too numerous to mention. The literature of philosophy furnishes
particularly good material. Examples which at present occur to me
are Sir William Hamilton's philosophy versus Mill's _Examination
of Sir William Hamilton's Philosophy_, and Herbert Spencer's
_First Principles_ versus William James' essay, _Herbert Spencer's
Autobiography_ and Henri Bergson's criticism of Spencer in his
_Creative Evolution_.
Uncritical students of the history of philosophy often find themselves
agreeing with each thinker in turn, no matter how much he contradicts
previous thinkers, and end by acquiescing in the last system they read
about. I remember a philosophy class which completed its studies with
Pragmatism. Of course it was merely a coincidence, but at the end
of the course fully nine-tenths of the students declared themselves
Pragmatists!
It is almost needless to remark that an author who pretends to point
out fallacies in another is not necessarily right. There are men who
pride themselves on "reading both sides of a subject"; but unless they
have been critical, their knowledge is not half as clear or as likely
to be true as that of a man who has read only one side, but who has
read it critically.
* * * * *
We have now to consider the next step outlined in the suggested plan of
reading—"written note should be made of the problems taken up which you
do not believe have been adequately treated, or the solutions of which
are in any way unsatisfactory. These you should think out for yourself."
When reading a book you will often come across a statement, perhaps
an entire chapter, with which you disagree. This disagreement should
be recorded in the form of a question; as for instance, "Is such
and such the case?" You may doubt whether an author's explanation
really explains. You may have a vague inarticulate suspicion that he
is sliding over facts, or that his solution is too superficial. This
suspicion should also be recorded in the form of a question. Often
again, while reading, a problem connected with the subject will occur
to you which the author has not even considered. This too should be
recorded.
All these questions should unfailingly be written, either in the margin
or on a piece of paper or notebook kept always at hand. You should
then set aside a definite time for thinking and attempt to solve the
questions for yourself.
And in thinking for yourself you should not make the author's remarks
the basis of your thinking. You should deal with a problem almost
as if it had never occurred to any one else but you. Simply because
somebody else has been satisfied with a certain solution, that is no
reason why you should be. You should deal directly with the facts, data
and phenomena under consideration; not with the opinions of others
about those facts, data and phenomena. You should not ask yourself
whether the pragmatists are right, or whether the nominalists are
right, or the socialists, or the evolutionists, or the Democrats, or
the Presbyterians, or the hedonists, or what not. You should not ask
yourself which "school" of thinking you ought to belong to. You should
think a problem out _for yourself_, in every way that phrase implies.
At the end you may, incidentally, find yourself agreeing in the main
with some school of thought. However, this will be only accidental, and
your thought will be much more likely to be true. But you should never
agree with a school of thought any more than independent thinking leads
you to.
Of problems dealt with in this manner, some will take ten minutes,
others a week. If you encounter a particularly obstinate problem it
may be best to leave it for a while, say a week or two or even longer,
and go on with other problems. When problems are thus recurrently
treated it may take months, even years, before a satisfactory solution
is reached. In such cases you should be willing to give months and
even years to their solution. If a problem is not important enough to
devote so much time to you may be forced to abandon it; but you should
constantly keep in mind the fact that you have not solved it, and you
should be willing to admit to others that you have not solved it. Never
allow mere intellectual laziness to stifle your doubts and make you
think you have solved a problem, when you know in your heart of hearts
that you have worked yourself into the state of belief merely to save
yourself mental discomfort.
When most of your problems have been solved and your views made
definite you may resume your reading. You may proceed to other books on
the subject.
As to the suggestion that another book on the subject might be dealt
with in the same manner as this first one: this will depend largely on
the individual subject. It will depend on just what books have been
written on that subject. If none completely or adequately covers the
field, or if there are two or more good books representing radically
different viewpoints, more than one book probably ought to be studied
in this comprehensive manner. But this must be left to the reader's
discretion.
* * * * *
We come now to the last part of our plan—"after that all books may be
read 'hop, skip and jump' fashion, for the new problems or solutions
they suggest."
I have already implied the necessity for this in formulating the law
of diminishing returns. After we have read several books on a subject
it would be manifestly foolish to continue reading books on that same
subject _in toto_. We would merely be going over again knowledge
already in our possession, instead of using our time more profitably
by entering new territory. But any good book will contain _something_
unique; some facts or principles to be found nowhere else; or perhaps
merely an unusually clear way of explaining some old principle, or a
new light on it. This we should endeavor to get without wasting our
time by plowing through the entire volume.
Theoretically our problem is difficult; on its face it would seem
impossible. We are to read all the important parts of a book; that
is, the parts most important _for us_, and nothing but the important
parts. But until we read it how are we to know whether any given part
of a book is important? In practice, however, our difficulty is not so
formidable.
We can eliminate the greater mass of the relatively useless part of a
book by a glance at its table of contents. If we see there titles which
suggest subjects or aspects of subjects in which we are not interested,
or that we feel we already know enough about, or that are simply
outside the particular purpose we have in consulting that book at all,
we can omit those chapters and confine ourselves to the others. . . .
When we were children first learning to read we had to look at every
letter in a word, then spell it out. Finally its meaning dawned upon
us. As we became more proficient we did not have to look at every
letter; we could read words as wholes with the same rapidity as the
separate letters. Accurate psychological tests have determined that a
man can read such words as "and" and "the" with even greater rapidity
than any single letter composing them. We finally reach the point where
we can read short phrases at the same rate as we formerly could single
words.
But the secret of the scholar who can cover efficiently much more
ground than ordinary men is not so much that he reads _faster_, as
that he reads _less_. In other words, instead of reading every word
he glances down a page and sees certain "cue" words or rather "cue"
phrases, for the eye and mind take in phrases as wholes. If he is
familiar with the subject (and he is not to employ this method unless
and until he is) he knows immediately, by "a sort of instinct" as
Buckle called it, whether any new or valuable thought is on that page.
When he finds that there is he involuntarily slackens his pace and
reads that thought at ordinary reading pace or even slower. Sometimes
indeed he will read whole chapters slowly, word for word, if the
contents are sufficiently novel and important to warrant it.
Read by this "hop, skip and jump" fashion a book the size of the
present volume might take an hour or even less. But it is almost
impossible to give even an approximate estimate of the time such
reading ought to take. Of course the longer you spend the more you
will get out of a book, but the return per time invested will be less
and less. On the other hand if you read the book too fast you may be
wasting your time altogether; you may end by understanding nothing at
all. Much will depend upon the originality and depth of the book, upon
the reader's familiarity with the subject, and upon his native mental
qualities.
Many may object to practicing the foregoing method because they have
a vague feeling that it is their duty to read every word in a book. I
suspect that the real reason for this is simply so that when asked they
can conscientiously say they have _read_ the book. Whereas if they had
followed this skipping method they would be able to say only that they
had "glanced through it" or at best that they had "read parts of it."
To this objection I have nothing to say, for I am confining my remarks
to those in search of truth and knowledge rather than conversation and
the good opinion of those who believe that reading from cover to cover
is the only path to wisdom. I might point out in passing, however, that
if we do follow this method there will be a half dozen books which we
can say we have "glanced through" to one which we would otherwise have
been able to say we had "read."
This way of dealing with a book is constructive and positive as opposed
to the negative method of critical reading. For we read for suggestion
only; we carry forward some line of thought of an author, which is
better for intellectual development than trying to find if he was
wrong and where he was wrong. Not only is this positive method more
interesting; in some respects it is better even for criticism. For in
carrying forward an author's line of thought, noting its consequences
and implications and considering different cases where it applies, we
find whether or not it leads to absurd conclusions; whether or not all
concrete instances conform with it. It should be kept in mind that this
method is not to be followed until the main text-book has been studied.
Consequently when it is followed your mind will have been fortified by
previous reading and thinking; valuable thoughts of an author will tend
to impress you and be remembered, while his trite or erroneous ideas
will tend to be ignored.
But after all, what is important is not your attitude or method at
the time of reading a book, but the thinking done later. The critical
attitude has its shortcomings, for when we are on the lookout for an
author's mistakes we often miss the full significance of his truths.
On the other hand when "reading for suggestion" we may too often allow
an error to pass unquestioned. But both these disadvantages may be
overcome if we do enough thinking afterward.
Only one thing I must insist on: make sure you understand every
sentence of a book. Do not "guess" you understand it. Do not slide
over it in the hope that the author will explain it later. Do not work
yourself into the belief that after all it is not really important.
Rather than this, better by far do not read the book at all. Not only
will you get little or nothing from it but you will be forming the
worst of intellectual habits—that of thinking you understand when
you do not. If you have made every reasonable effort to understand
an author and then have not succeeded, write in the margin "I do not
understand this," or draw a line alongside the sentence or passage. If
you have to do this too often you should put the volume aside for a
time. It is either too advanced for you or it is not worth reading.
As to the thinking you do after reading. Often problems connected with
the subject of a book you have read may arise spontaneously in mind,
or an objection to a statement may suddenly occur to you when thinking
on some other topic. Of course when this happens you should not stifle
your thoughts. But besides this, definite periods should be put aside
for thinking on what you have read and on the problems you have
written. I cannot insist on this too strenuously or too often.
A good task to set before yourself is to take every idea you agree with
in a book and try to treat it as a "germ." Tell yourself that you will
develop it beyond the point where the author left off. Of course this
will not always be possible. You will seldom succeed. But there is
nothing like hitching your wagon to a star, and it will do no harm to
set this up as an ideal.
* * * * *
A few miscellaneous problems remain to be considered.
How should we deal with authors with whom we disagree fundamentally?
Herbert Spencer relates that he twice started Kant's _Critique of
Pure Reason_, but disagreeing fundamentally with the first and main
proposition he ceased reading. Now to do this is to give an author too
much credit for consistency. For even if every other proposition he
sets forth is ostensibly a corollary from his leading one, some of them
will contain much truth. It is impossible to be consistently wrong.
Add to this the possibility that the author may be right on his first
proposition after all. However, no book with a viewpoint radically
different from our own should be used as a main text, for we would
get little benefit from it. If the book is by an obscure author we
may safely lay it aside altogether. But if it is by so famous and so
bepraised a philosopher as Kant we should at least glance through the
entire volume for suggestions.
How many times ought we to read a book? I have already partly answered
this in formulating the law of diminishing returns. Few books are worth
re-reading. Rather than read one book twice on any given subject it
will most often be more profitable to read another book on it. For the
second will not only serve as a review of previous knowledge, but will
furnish you with new ideas, different aspects and new problems.
Certain books, however, can never be replaced by others. They occupy
this position either because they deal with a subject not elsewhere
dealt with or because they take an entirely novel aspect, or solely
because they are the works of supreme genius, for while the main
conclusions reached in works of this last type may be found elsewhere,
the _manner of thinking_ can never be. These books should be read
twice. The main text-book selected on any subject will usually be
chosen because it is the best and most comprehensive work on that
subject. For this reason it should be read a second time even if such
reading is only of the hop, skip and jump variety.
We should not re-read a book immediately upon the first completion
but should always allow a long interval to elapse. There are several
reasons for this. After an interval we acquire perspective; we are
in a position to know whether a book has done us any good and just
about how much. We may find after this interval that a work of which
we thought quite highly at the time of reading has really not helped
us appreciably either in thought or action. We may find that we have
outgrown the need of it. Even if we finally decide to re-read we shall
find the wait of immense help to our memory. If we re-read a book after
an interval of six months, then three years after our second reading we
will remember its contents much better than if we had read it three times
in unbroken succession. Add to this that in the lapse of time we shall
have forgotten most of the work, and shall therefore approach it the
second time with greater interest than if it were still fresh in mind;
that our experience, reading and thinking in the meantime will make us
see every sentence in a different light, enabling us to judge our own
marginal criticisms (if we have made any) as well as the book, and the
advantage of waiting cannot be doubted. I do not believe it will ever
be necessary to read a book more than twice, that is, so far as thought
and knowledge are concerned. With books read for their style or for
mere amusement the case is different.
How long should one read at a sitting? Some men find that their
thought is choked by reading. Some find it stimulated. But results
vary according to the length of time reading is carried on. Reading
for very long periods at a stretch often deadens original thought. The
writer finds that he nearly always derives benefit from reading for
short periods, say ten or fifteen minutes. This is in some measure due
to the increased concentration which short periods allow. On the other
hand, some people find that a certain momentum is acquired during long
reading periods. The reader can only experiment to find how long a
period best suits his individual case.
How about concentration? This has been considered in relation to
independent thinking, but in reading the problem is somewhat different.
In thinking our task is to choose relevant associates. In reading the
associates are chosen for us. Our task is to stick to them, instead of
following the associates which occur to us either from what we read
or from sights and sounds about us. But associates which occur to us
from what we read are of two kinds: relevant and irrelevant, and the
former should of course be followed out. This however should be done
deliberately, in the manner I have previously indicated, and when the
vein of suggested thought has been exhausted we should bring attention
back to our book. The problem of concentration is not a very serious
one in reading. It may sometimes be difficult to concentrate on a book.
But it is infinitely easier than concentrating on a problem by unaided
independent thought.
* * * * *
The plan of reading I have laid out is merely suggestive. What I
chiefly wanted to show was that all books cannot be treated alike, that
we cannot lay down dogmatic inflexible rules to apply to every volume.
Our method of reading will vary with the nature of a book or of the
subject it treats. It will depend upon the books we have already read
and even upon the books we contemplate reading later.
The good you get out of reading will depend entirely on how you allow
it to affect you. If every book you read suggests more problems,
gives you worth-while questions and topics to think about in spare
moments, enriches your intellectual life and stimulates your thought,
it is performing its proper function. But if you read solely to answer
problems you cannot answer for yourself, if every time you are puzzled
about anything you run to a book to have it explained, and accept
without question the explanation there given; in short, if you use your
reading to save yourself from thinking, you had better stop reading
altogether. Smoking is a far less harmful form of dissipation.
I have not yet definitely indicated the ratio which time given to
reading should bear to time devoted to thinking. I have avoided this
because of the many factors to be taken into account. But if the reader
happens to have a spare hour to devote to the improvement of his mind,
he will not go very far wrong if he gives thirty minutes to reading and
thirty minutes to thinking. His thinking may be on the subject he has
read, or part of it may be on other problems. That is not so important.
But the reader must not imagine that his thinking need be restricted
to these thirty minutes or any other thirty minutes. The glorious
advantage of thinking is that it can be fitted in at any odd moment.
The entire apparatus for carrying it on is always with you. You do not
even need a book for it. I remind the reader of this at the risk of
repeating myself.
* * * * *
It was pointed out at the beginning of this chapter that the reading
of any book is not an end in itself, but should be subordinated to the
larger end of obtaining the best from reading in general. But for the
sake of clearness our end was temporarily considered as the mastery of
some particular subject. I indicated a plan of reading to best serve
that end. I also promised that needful qualifications would come later.
In stating the law of diminishing returns it was pointed out that it
applied to whole subjects as well as to books, that "past a certain
point every book we read on a subject, while it will probably add to
our knowledge, will not yield as much return as a book of equal merit
on another subject new to us."
While this is true it applies to but a small extent when subjects are
read by the method just outlined, for while we do not get as much out
of any book as we would out of one of equal merit on another subject,
we read it so much faster that the return per time and energy expended
is practically as great. This fast reading is made possible by our
previous knowledge on the old subject. If the book on the new subject
were read in the same manner, we might get little or nothing from it.
With this objection out of the way I suggest that the reader get a
specialty. Books read in the ordinary unsystematic fashion, now on
this subject and now on that, leave little permanent impression. Even
if they do, we feel that though our range of reading may be wide we
have at best but a smattering of many things. In the final analysis
a smattering of knowledge is in most cases of no more use than total
ignorance. Better by far be ignorant of many things and know one thing
well, than know many things badly.
Besides the utility of having a specialty there is the pleasure we
derive from it.
There is always an intense satisfaction in feeling that one is an
"expert," an "authority" in some subject. When some Congressman makes
an inaccurate remark which trespasses on your specialty you can write
a letter to the _Times_ or the _Sun_ explaining the error of his ways,
and incidentally exhibiting your own limitless erudition. When your
friends get into an argument on some question within your chosen field
they will remark, "Ask John Jones. He ought to know." And even when
you have to confess abysmal ignorance on some question outside of your
domains, you may still have the satisfaction of believing that people
are excusing you within themselves with an "Oh, well, but he knows a
lot about someology."
One writer estimates that "fifteen minutes a day or a half hour three
days a week devoted to one definite study will make one a master in
that field in a dozen years."[20] This statement should interest
those people who "haven't the time" to take up any specialty outside
their own business, but who spend at least half an hour every day in
newspaper or magazine reading—with nothing to show for it at the end
of twenty years.
With just what subject you make your specialty I am not at present
concerned. It may be aeronautics, astronomy, banking, Greek history,
differential calculus, social psychology, electricity, music,
philosophy of law, submarines, soap manufacture, religion, metaphysics,
sun-motors, education, literary style or the moon. But whatever it
is, it ought to be a subject in which you are interested for its
own sake—which most frequently means one which you do not make your
vocation. If you get tired of it, drop it and take up something in
which you are interested. Your thinking and study should be pursued as
a pleasure—not as a duty.
If your subject is a narrow one, if let us say it is merely a branch of
what is generally considered a science, you should first get a clear
idea of the broad outlines of the science before taking the specialty
up. Should you, for instance, select the tariff, begin your study by
using as your main text a book on general economics.
Even if you make your specialty an entire science you will derive
great help by reading in other sciences. In ethics, for instance, a
knowledge of psychology, biology and sociology will prove of surprising
value. This means that for the sake of knowing the specialty itself,
if for nothing else, you should not pursue it exclusively. If ever you
find yourself in danger of doing this it would be well to lay down a
rule that every third or fourth book you read must be one which does
not deal with the subject you have chosen as your own.
VIII
WRITING ONE'S THOUGHTS
Reading maketh a full man, conference a ready man, and writing an
exact man.—BACON.
Any attempt to formulate a science or art of thinking would not be
complete without at least some discussion of writing. Indeed writing is
so closely bound up with thinking that I have been compelled to refer
to it several times in the discussion of thought and reading.
I have already spoken of writing as an aid to concentration. I was wont
to depreciate it on account of its slowness. But this is practically
its only fault. Thoughts come to us when writing which we get in no
other way. One is often surprised, when reading something one has
written at a previous time, at some of the remarks made. We seem to
have temporarily grown wiser than ourselves.
But the great advantage of writing is that it preserves thought. What
printing has done for humanity in preserving the knowledge of the ages,
writing will do for the individual in preserving his own reflections.
When some thought has occurred to us we believe at the time we are
thinking it that it is ours forever. We cannot conceive that it shall
ever be forgotten. Perish that belief! I have sometimes had an idea
occur to me (really!), and have believed it absolutely new, at least
so far as I was concerned. But on looking over things written before,
I have found that I had had almost identically the same thought at
another time. Not only did I forget the idea; I did not even recognize
it at its second appearance. To be sure, in these cases the thoughts
came a second time. But thoughts are seldom so obliging.
Therefore when an idea occurs or when you have solved a problem, even
a problem suggested by a book, you should immediately put the idea
or solution in writing. You may of course wait until the end of the
day. But the safest way of capturing an idea is to write it the minute
after it flashes through your brain, or it may be lost forever.
It was with this in mind that in the chapter on reading I advised
immediately writing not only ideas but problems which occurred to one.
The discovery of a new problem is just as important and necessary for
intellectual advance as the solution of an old one. If we do not write
our problems we are apt to forget they exist; we put ourselves in
danger of assuming without question some proposition which is not true.
To facilitate the writing of your thoughts and meditations I suggest
a notebook kept specially for that purpose. In addition to this you
should always carry about with you some blank paper and a pencil, so
as to be ever ready to jot down anything. To write an idea does not of
course imply that you cannot later reject it, or change it, or develop
it further.
The elusiveness of thoughts is most strikingly brought out when writing
them down. When we are writing a long sentence we have in mind the
exact words with which we are going to finish it. But our attention is
called for the moment to the physical act of writing, and presto!—the
words are gone; we are compelled to end our sentence in a different
way. I have mentioned the advantages of shorthand and typewriting for
keeping pace with thought. I need merely repeat my advice to use these
acquirements if you have them. Thoughts, I must repeat, are fleeting.
No device for trapping them should be despised.
Not least among the advantages of a notebook in which to write thoughts
is the permanent historical record it gives. Every thought we write
should be dated, day, month and year, like a letter. When we come
to read over ideas jotted down from time to time in this manner, we
shall see before us an intellectual autobiography. We shall see how
our recent thoughts compare with those written sometime ago. We shall
see just what our opinions were at certain times, and how they have
changed. And we shall see whether our mental progress has been marked,
or whether we have been standing still.
It may be considered absurd to suggest that every thought you write in
your notebook be put in the best style you can command. We are wont
to differentiate "style" and "matter." It is doubtful whether this
distinction is quite valid. It is doubtful whether we know just what we
mean when we make it. Indeed Arnold Bennett goes so far as to say:
"Style cannot be distinguished from matter. When a writer conceives
an idea he conceives it in the form of words. That form of words
constitutes his style, and it is absolutely governed by the idea. The
idea can only exist in words, it can only exist in one form of words.
You cannot say exactly the same thing in two different ways. Slightly
alter the expression, and you slightly alter the idea. Surely it is
obvious that the expression cannot be altered without altering the
thing expressed! The writer, having conceived and expressed an idea,
may, and probably will, 'polish it up.' But what does he polish up? To
say that he polishes up his style is merely to say that he polishes up
his idea, that he has discovered faults and imperfections in his idea,
and is perfecting it. The idea exists in proportion as it is expressed;
it exists when it is expressed, and not before. It expresses itself. A
clear idea is expressed clearly and a vague idea vaguely."[21]
Mr. Bennett, I suspect, is a victim of exaggeration. But this much is
true: Thought and style are mutually dependent to a far greater degree
than is generally supposed. Not only will an improvement in a thought
improve its wording; an improvement in wording will improve the thought.
Now as to the application of this. I have referred to the occurrence
in reading of "inarticulate" objections. The sole reason these are
inarticulate is that the objection is too vague even to find
expression. In a case like this we should word our objection the best
we can, no matter how ridiculous or indefensible it at first sounds.
But we should word it in as many ways as possible; we should say it
in all different sorts of ways; we should write it in every different
kind of way. Gradually our objection will become definite, clear,
forceful. In short, we shall not only have improved our way of stating
our thought; we shall have improved the thought itself. To study
clearness of statement or acquisition of vocabulary is to study means
of improving thought. Your notebook should not be used solely for the
entry of "thoughts" as such, but any striking way of wording a thought
which occurs to you should likewise be immediately written.
But while there is some truth in Arnold Bennett's statement that the
wording is the thought, from another point of view its very opposite
is true. The wording is _never_ the thought. Strictly speaking,
"thought" is something which can exist only in the mind. It can never
be transferred to paper. What then is it that we write? If words and
sentences are not thought, what are they? If they are not thought how
is it possible to transfer thought through the medium of writing?
The fact is that words, though they are not thought, are the
_associates_ of thought. You hear the word "horse." Very likely the
visual image of a horse arises in mind. This image, idea, notion,
"concept," will depend on your experience of particular horses. It will
never be a logical abstract of these. It will never be a horse without
color, particular size, sex or breed, as is sometimes thought. It may
however have different elements in it from different horses you have
seen. It may be the image of just one particular horse you remember.
But no such thing as a general concept exists in the mind. We have a
particular image which stands for all horses. The name of course is
general. It—or its definition—may be called the logical concept. But
the name itself is not used in thought. It is an arbitrary symbol which
serves merely to arouse a particular image associated with it, and
this image is dealt with as if general. This image we shall call the
concept. It is the working concept: the psychological as opposed to the
logical concept.
As your concept of a horse will depend on your experience of particular
horses, another person's concept will depend on _his_ experience of
that animal. And as his experience can never be exactly the same as
yours, his concept, though it may be similar to yours, will not be the
same. Not only will no one else have the same mental image or concept
as you _but you yourself will never have exactly the same image twice_.
This image will vary with the setting in which it occurs—with the
associates which happen to arouse it. If you are reading about a great
battle and the word "horse" is mentioned, a certain kind of horse will
suggest itself to you. If you are reading about a grocery wagon and see
the word "horse" another kind will suggest itself. This whether the
animal is described by adjectives or not. At one time you may think of
the horse as in motion, at another time as at rest.
Unfortunately many so-called psychologists seem to consider the
concept, even this image-concept, as something fixed in the individual,
or at best as only changing with actual experience of the thing
conceived. The truth is that the image or images aroused on hearing
any word are not the same for two seconds at a time. They are fluid,
dynamic; never static, immobile. They are associates of the words
in a constant state of flux.[22] When the concept of one individual
varies from one moment to the next, how must the concepts of different
individuals differ from each other!
I have instanced the idea of a horse because it is so simple and
concrete. In actual thinking we never meet with a simple separated
concept or with a single word; we deal with at least an entire
sentence. This means that our images vary even more widely at different
times than was the case in the example. It means that the images of
other people are at a correspondingly greater variance from ours.
As to the application of all this to writing. We have an idea; thinking
it important we decide to jot it down. Now we cannot jot down the idea,
but only words associated with it. We cannot even write all the words
associated with it, for there are too many. So we write a comparative
few; and we say we have written the idea. _But all we have really
written is something associated with the idea._ When we read this over
at a later time we shall not have the same ideas aroused as were in
mind originally, but at best only similar ideas. For the associates of
words, like all associates, are constantly changing; and thanks to the
frailties of human memory exactly the same associates are never aroused
twice. So after a long interval they will be much different from what
they were at the time we wrote. The reader will often have the
experience of "writing a
thought" and thinking it very important, but on reading it at another
time he will fail to see why he ever considered it worth putting on
paper. The truth is that at the time he wrote the idea it probably
_was_ important, because he had the right concepts. But when he came
back to the words he had written they failed to re-suggest the former
concepts and associates.
This difference between words and thought is even more strikingly
brought out when the written thought is read by some other person than
the writer. The writer is likely at least to have approximately the
same concepts as at the time of writing. And he is greatly aided by his
memory in recalling the concepts and associated ideas previously in
mind, the words suggesting these. But when a person reads what some one
else has written, he translates the words into the concepts previously
connected with them in his own mind. Thus an author can never literally
transfer an idea. He can merely put down certain arbitrary symbols,
which will serve to arouse a similar thought in his readers. How
greatly the reader's thought differs from the author's it is difficult
if not impossible to determine, for minds can only communicate by
words. It is this difference in associated concept which often makes a
reader fail to appreciate the profoundest thoughts of an author, and
even, on the other hand, occasionally to see depth where it does not
exist.
We come now to the solution of the problem to which this rather
extended discussion has been preparatory. How is an author to convey,
as nearly as possible, his actual idea? And the answer is: _he should
word it in as many different ways as possible_.
If a person had never been to a city and you wanted to give him an idea
of it, you would show him photographs taken from different viewpoints.
One photograph would correct and supplement the other. And the more
photographic viewpoints he saw the more complete and accurate would be
his idea—the more his concept would approximate the actual city. But he
could never more than approximate; he could never obtain the idea of a
man who had visited that city.
An author's language is a photograph of his thought. He can never
actually transfer an idea, but by wording it in different ways he can
show different photographs of it.
If, for example, a second wording does not conform with the first
concept which a reader has formed, the reader will be obliged to modify
that concept. And if the idea is repeated in a number of different ways
he will have to modify his concept so much that he will gradually more
and more approximate the idea of the author.
I remember the story in some educational treatise of an inspector who
entered a school room, asked the teacher what she had been giving her
class, and finally took up a book and asked the following question,
"If you were to dig a hole thousands and thousands of feet deep, would
it be cooler near the bottom or near the top, and why?" Not a child
answered. Finally the teacher said, "I'm sure they know the answer but
I don't think you put the question in the right way." So taking the
book she asked, "In what state is the center of the earth?" Immediately
came the reply from the whole class in chorus, "The center of the earth
is in a state of _igneous fusion_." . . .
There is, and has been for the past generation, a great cry in
educational circles that we should teach things, not words. In some
instances this is inadvisable, even impracticable. But if the teacher
in the foregoing story had taken the trouble to word her idea in at
least more than one way, she might have implanted a real idea in her
pupils. She would at least have found that as it was they had none.
* * * * *
One more question remains. If you are writing a composition, a letter,
an essay, or even a book, what is the best way to get down all your
thoughts, without losing any of value; to get them down in the best
order and in the best style? In other words what is the path of
greatest efficiency in transferring thoughts from your mind to paper?
We have already considered such devices as shorthand. Of course
dictation, where it is possible, is an obvious advantage. But I mean
here to consider the aspects of the problem which apply more especially
to compositions of some length.
It is related of Auguste Comte that he composed his books by thinking
them over down to the minutest details, down to the very phraseology
of the sentences, before penning a single word, but that when he came
to writing he could turn out an astounding amount of work in a given
time. Unless a person have a remarkable memory, however, he will forget
most of what he has thought by the time he comes to writing it. Comte's
method might nevertheless be profitably applied to short sections of
compositions. And where conciseness or perspicuity is desired, it will
often be found useful to think out an entire sentence before writing a
word of it.
Perhaps the best way of ensuring efficiency in writing is by the card
system. This consists in writing on a separate card every valuable idea
that occurs to you, immediately after it occurs. When you finally come
to writing you can arrange these cards in any order desired, throwing
out the ideas you no longer consider important, and adding those which
are necessary to complete or round out the work.
IX
THINGS WORTH THINKING ABOUT
The man who cannot wonder, who does not habitually wonder, is but a
pair of spectacles behind which there is no eye.—CARLYLE.
Up to now I have treated exclusively of _how_ to think, but have made
no mention of _what_ to think. I have treated of the best methods of
dealing with different subjects and questions; I have not considered
what subjects or problems are most worth dealing with.
Of course the important thing is that you do think. It is not
absolutely essential that the results of your thinking are results
which can be directly made use of. Thinking is an end in itself. Most
men imagine that "thinking for the sake of thinking" may appeal to
philosophers, but means nothing to them, as they like to think only
when by so doing they can forward some practical end. These people do
themselves an injustice.
Perhaps you, O reader, are among them. If so, let me appeal to your
personal experience. Have you ever tried to solve a toy puzzle, tried
to take the two wire hooks apart without bending them? Or have you
ever stopped to tackle a problem on the family page of your evening
or Sunday newspaper? "A grocer buys fifteen dozen eggs, he sells—"
you know what I mean. You admit that you have. Exactly. You have been
thinking for the mere sake of thinking.
If you protest that you didn't care about the thinking, that you took
no pleasure in the thinking, which was merely incidental, but that
what really urged you on and gave you pleasure was the solution of
the puzzle, you are again deceiving yourself. The thinking was not
incidental. Thinking and problem solving are identical. The fact is
that you set yourself to solving a problem, to removing a mental
hindrance, for the mere sake of getting the answer, with absolutely no
thought of what you were going to do with the answer when you got it.
But if you can derive so much pleasure from thinking which you cannot
put to use, how much greater should be your pleasure when your
conclusions can be utilized? For when you think of something useful,
you have not only the present pleasure of solving your problem, but
the ulterior pleasure of applying your solution to action, or to the
solution of some further problem. And while I again admit that thinking
is an end in itself, this does not prevent it from being at the same
time a means to some further end. After all is said there is really no
reason why we should be prejudiced against problems or subjects that
are useful.
The mere decision that we should think of useful questions is
insufficient. Very few questions are without _some_ use. Even the
solution of the family page puzzle might some day be useful in solving
a similar problem arising in your own business; and even if this never
came to pass you might spring the puzzle on your friends, and make
yourself socially more interesting. Thought given to a question in a
debating book now before me, "Resolved, that Ferocious Wild Beasts are
more to be dreaded than Venomous Reptiles," might result in knowledge
which would come handy in selecting equipment if one decided to journey
to the wilderness of South America. But there are millions of problems
of as much use as these; and it is not within the power of one lone
mortal, of years three score and ten, to compass even a corner of them.
Our question is not—what problems are of use? but—of _how much_ use
are certain problems? Or, stated in another way—what is the _relative_
utility of problems?
Any adequate consideration of this question would involve the selection
of some criterion for utility, and the testing of individual problems
by that criterion. But to treat such a question with anything like
justice is beyond the scope of this book; it would require almost
a volume in itself. It is almost the same as the problem, What
knowledge is of most worth?, and the most masterly treatise on that
question which has ever been written can be found in Herbert Spencer's
epoch-making little work, _Education_. I sincerely hope that the reader
study this. But I hope even more earnestly that before he does so he
first think the problem out independently, for it is one of the most
important he can put before himself.
But our present question—that of the relative importance of
problems—is slightly different from that of the relative importance
of knowledge. The first deals with thought and the second with
information, or the materials of thought; the first with a process of
getting knowledge and the second with knowledge itself.
I believe for example that a knowledge of his own body and of the
laws of health is the most valuable a man can have, but there are few
problems concerning the body which I would include in the first rank.
There are several reasons for this. In the first place, while it may
be true that such questions _taken as a whole_ are more important than
any other class of questions, taken separately they are relatively
minor; there are no one or two questions of all-encompassing importance
to which all the others are subsidiary. Moreover, such questions,
while they undoubtedly require thought for their solution, depend to a
relatively great extent on observation and experiment. No sane medical
student would sit down and follow out a lengthy course of reasoning
as to where the heart is; he would merely observe or dissect, or
consult the book of a man who had dissected, and save mental fatigue.
Not least of all, questions of physiology require extensive, highly
technical and detailed information—information which requires years of
special study to acquire—before any thinking that is at all safe can be
put upon them. So in estimating the relative value of problems, there
are other considerations besides the value of knowledge.
But it is not my purpose here to discuss the general principles upon
which the selection of worth-while questions should be made. That
task I leave to the reader. I have chosen rather the concrete path of
suggesting a list of questions which I consider of great import. I
believe that no matter how much thought the reader gives to any one of
them he will not be losing his time.
I have elsewhere pointed out that the more knowledge a man has the
more problems he will have. It is equally true that unless a man has
some knowledge on a subject he will not be able to appreciate or even
understand some of its most important problems. It is only when we
begin to think of subjects that we discover problems and realize their
significance. In stating most of the following problems, therefore, I
have often thought it necessary to add a few sentences in explanation,
and have sometimes stated a question in a variety of forms in order to
more clearly convey the thought.
* * * * *
_Are specific characteristics, acquired during the lifetime of an
individual, inherited by his offspring?_ I have referred so often to
this problem and its importance that further explanation is hardly
necessary. "Characteristics" of course refer to intellectual and moral
as well as physical characteristics.
_What is the influence of the individual mind on society and of social
environment on the individual?_
_Does the form of government determine the character of a people, or
does the character of a people determine their form of government?_
Or do government and character react on each other, and how? The same
question may be asked of all other social institutions. Does the
religion of a people determine their character, or does the character
of a people determine their religion? This whole problem is somewhat
similar to that immediately preceding, regarding the interaction of the
individual and the social mind.
_Is society for the benefit of the individual or is the individual for
the benefit of society?_
_Should the jurisdiction of the government be extended or curtailed?_
Or should it be extended in some directions and curtailed in others?
Does the answer to this problem depend on the answer to the previous
one? Another form of the same problem is: What is the proper sphere of
government?
_Should the government grant monopolies?_ Patents, for example?
_What would be the most practicable plan for abolishing or minimizing
war?_ Those who do not wish to beg the previous question may first
ask whether it is always desirable to prevent war, whether war is
always an evil. What is the effect of war on the physical future of
the race? on national and individual character? on government? on
national liberty? on personal liberty? What are the ethics of war?
for aggression? for territorial conquest? for "national honor"? for
defense of a weaker nation? for defense against invasion? What is the
effect of preparedness? of universal preparedness? of preparedness of
an individual nation? In each case what are the principles on which the
extent of preparedness should be determined? What are the fundamental
causes of war? How can they be removed? Is it possible to remove all of
them?
_Which is the rightful owner of land, the community or the individual?_
To state the problem in another form: Should private land ownership be
abolished?
_Who should be entitled to vote?_ This of course is a question similar
to woman suffrage, but it is much broader. It deals not only with the
qualification of sex, but of age. Should any one under twenty-one have
the vote? The validity of property and educational qualifications
should also be considered.
_How should the relations of the sexes be regulated?_ Put in slightly
narrower and perhaps less objectionable form: What would be just laws
governing marriage and divorce?
_What is the effect of attempted State interference with the law of
supply and demand?_ Does the unrestricted working out of this law
forward ultimate justice? Just what is the validity and the meaning
of the expression "The _law_ of supply and demand"? The question
could be taken up in connection with minimum wage laws, railroad rate
regulations, "extra crew" laws, etc.
_Which is the best policy: free trade, revenue tariff, or protective
tariff?_ Or under what conditions is each best? With what classes of
commodities?
_What would be an equitable and sound currency system?_ This question
is somewhat technical, and would have to be considered in the form of a
number of subsidiary problems. Ought money to have an intrinsic value?
What is the effect of "fiat" paper currency on money of intrinsic
value and on prices? The effect of credit? The effect of fluctuations
in the supply of gold? Ought there to be a double standard or a multiple
standard? etc.
_Should conduct be judged by the pleasure or happiness it yields?_
Stated in another form, almost a different problem: Is utility a good
moral guide?
_Should conduct be judged by its tendency to produce individual
well-being, or should it be judged by its tendency to produce the
well-being of all humanity, or of all sentient beings?_ This problem
cannot be lightly dismissed in favor of universal well-being.
This becomes apparent when we attempt to give an undogmatic and
non-question-begging answer to the query: Why should a man act for the
benefit of others?
No science is more provocative of thought than ethics. The question of
whether acts should be declared good or bad as they tend to produce
pleasure or happiness, either individual or in humanity as a whole, or
whether "virtue" or "morality" is an end in itself, is one of the most
subtle and elusive we can attempt to solve; no matter which answer we
give we are brought into logical and psychological dilemmas from which
it seems impossible to escape. This is also true of the problem of
whether our knowledge of what constitutes right and wrong comes from
experience or from intuition.
The broadest form of the ethical problem, which includes the two
preceding italicized problems, is:
_What is the proper criterion for determining right and wrong conduct?_
Or even less dogmatic: Can there be a criterion for determining right
and wrong conduct, and what is it?
Somewhat allied with the ethical problem is that problem of problems:
how to live? By this is meant how to put the most into life and get
the most out of it; what vocation to follow; what hobbies, amusements,
avocations to take up; how to plan time by months, by weeks, by days,
by hours. How much time and energy do certain activities deserve? How
much can we afford to give them? Restated: what activities are of most
worth?
Of course every one does think of problems connected with the art of
living. But he thinks of them as little unconnected questions. Rarely
indeed does any one go about the solution of the general problem of
living in an orderly, systematic manner. To insist upon the broad
practical bearings of the problem would be unnecessary, absurd. By
its very nature it is the most "practical" question we can ask. Any
particular solution or treatment may be impractical, but this does not
affect the question itself.
_What are the respective influences of environment (education,
experience, etc.) and innate tendencies in determining character?_
Which is the greater determinant?
_Does pleasure depend upon the satisfaction of instinctive desires, or
do desires for certain activities depend upon the pleasure accompanying
the previous performance of such activities?_ Does an activity or the
possession of an object give us pleasure because we have previously
desired it, or do we desire an activity or an object because we have
previously obtained pleasure from it? Or do pleasure and desire
interact, and just how? The solution of this psychological problem is
of tremendous importance in ethics.
_Does the mind depend entirely on the brain?_ That is, are all
thoughts, emotions, feelings, due to material changes in the brain? The
answer we give to this problem may determine our answer to the question
of immortality.
_What knowledge is of most worth?_ I have so fully discussed the
importance of this question and the method of proceeding with its
solution that further explanation is needless.
One sphere of thought where the thinker is compelled to be original,
where it is practically impossible for him to fall into beaten tracks,
is invention. But there is useless as well as useful invention. A
man's ambition may range all the way from inventing a machine to
harness directly the limitless power of the sun, down to devising a
tenacious tip for shoelaces. But he should be careful about inventing
something already patented. He should be even more careful to avoid
inventing something for which there is no demand. One of Edison's first
patents was for a machine to register quickly the votes of legislative
assemblies. And it worked. But the legislative assemblies didn't want
it, because they didn't want their votes quickly registered. That
would have ended good old filibuster methods. Another invention of
great uselessness which has been several times attempted is a machine
to write words just like the human hand writes them. There are really
so many useful things which do not exist and for which there is a
demand, that it seems quite a pity nine out of ten patents in the
files at Washington are for things inutile. If the would-be inventor
cannot himself think of something really needed, almost any big patent
attorney house will send him an entire book of suggestions on "What to
Invent."
Invention usually requires highly technical knowledge, not to speak of
facilities for experiment and a well-supplied purse. But nothing gives
more solid satisfaction to its creator than a successful appliance.
While the conscientious philosopher is constantly harassed by doubts as
to whether, after all, he has discovered truth, the inventor need not
worry. His machine either works or it does not work, and he _knows_ the
truth of his thought thereby. On the other hand the philosopher will
always have _some_ thoughts. Be they true or not they may at least be
interesting and worth recording, whereas the inventor may toil on for
years and years with absolutely nothing to show for his exertion at the
end. . . .
There are a number of problems that are not of great "practical"
importance, but whose theoretic value is so transcendent as to compel
attention. Among these are certain problems in psychology, but more
especially in metaphysics, philosophy and even religion, insofar as
religion can be said to have problems.
_Is there a God and is it possible for man to learn anything of His
nature?_ Some readers may object to the first part of this question.
But I state it because I am anxious to avoid dogmatism.
_Is the soul immortal?_ What do we mean by the soul? Does science
disprove the life after death?
_What is the test of truth?_ How shall we know truth when we have it?
What after all is "truth"?
_Are our wills free, or are our actions predetermined?_ Some may object
to this way of stating the question. Much confusion exists as to the
meaning of the problem. A different way of stating it would lead to
different treatment. What is the "will"? What do we mean by "free"?
What do we mean by "predetermined"?
_The problem of existence._ How did the universe come into being? This
is the last problem in which interest can be stimulated from without.
No matter in how many different ways he phrases it, a writer cannot
convey this sense of mystery to another. It must arise from within.
Most of the time we accept, we take for granted, the universe and the
existent order of things, and it requires the greatest effort to keep
alive our mystification and doubt for even short periods.
* * * * *
The list of questions foregoing is of course merely suggestive. It is
impossible to select, say twenty-five questions, and pronounce them the
twenty-five most important that can be asked. I fully realize there are
questions of greater importance than some I have propounded. But I have
not gone so far as to advise that every one of these should be thought
over. The list has been given merely for thought stimulation, and to
indicate what is meant by "worth while" questions.
Unfortunately I have not been able to explain why most of these are so
important. To have done so would have required too much time for each
individual problem. It would have drawn us too far out of our subject.
The reader must find out or sense the importance for himself.
Practically all of the problems given in the list come under one of
the sciences, especially if we count metaphysics or philosophy as a
science, which it is in so far as it is organized knowledge. This may
seem somewhat narrow. Now I admit there are important problems which
are not included in any science. But there are very few. As soon as
deep thought is given to a problem its treatment becomes systematic.
It either falls into one of the sciences or a new science evolves
about it. John Stuart Mill once started a journal in which he promised
himself to put one thought a day, but he did not permit himself to
record there any thought on a problem falling within one of the special
sciences. None of the thoughts he put in the journal is of any great
value. It came to an abrupt end in about two months.
It may be objected that though the questions selected are most
important _in themselves_, there are other things more worth thinking
about, because of the mental discipline they yield. Now putting aside
the fact that questions important in themselves should be dealt with
ultimately—that mental discipline would be useless unless applied to
important problems—I must voice my suspicion that the most useful
questions are also the best for training the mind. It may be true
that punching the bag will help a prizefighter in boxing. But other
things equal, a man who has spent one week in actual boxing is better
prepared to enter the prize ring than one who has devoted a month to
bag punching. The best practice for boxing is boxing. The best practice
for solving important questions is solving important questions.
Nor do I admit the contention is valid that one problem rather than
another should be thought of because it is "deeper." We cannot
truthfully say that psychology is a "deeper" science than ethics,
or that metaphysics is deeper than psychology, or vice versa. Most
subjects and most problems are just as deep as we care to make them.
Their depth depends entirely on how deep we go into them. This applies
especially to the so-called philosophical sciences. We may give them
shallow treatment or we may give them profound treatment. But we
shall usually find that the deepest questions are the most important
questions. For the most important questions have generally attracted
the greatest minds; consequently they have been given the deepest
treatment; and when a man reads the attempted solutions of these great
minds his thoughts tend toward this deeper plane. Of course certain
problems, especially in mathematics, can be dealt with by only one
method. In this case we may properly speak of some problems being
objectively deeper or at least more difficult than others.
Some objections may be offered to several of the questions in my list,
on the ground that they are invalid. Such problems as the immortality
of the soul and the problem of existence may be declared inscrutable,
unsolvable. Such a problem as "Is society for the benefit of the
individual or is the individual for the benefit of society?" may be
said to imply that society is something which has been voluntarily
formed like the State. It may be declared that this is not the case; it
may be objected that this question is meaningless. All these objections
may be justified. But their truth cannot be determined until we
actually attempt a solution. The determination of the validity of a
problem is part of the problem.
* * * * *
We come now to the question of what is most worth reading. The simplest
answer is that that is most worth reading which is most worth thinking
about, and therefore we should read those books which deal with such
problems as I have indicated. But this counsel needs to be supplemented.
A conservative estimate places the number of books in the world at
4,500,000. (This estimate was made before the war broke out, and the
war-books by now have doubtless brought the number to 5,000,000.) This
does not mean books as collections of printed sheets of paper bound
together—books as physical objects—for if it did the number would be
immensely greater. It means 4,500,000 (or more) separate and distinct
treatises. If you were to read one book every two weeks, you would read
about twenty-five a year, and if you read for fifty years you would
cover 1,250. One book in every three thousand six hundred!
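As a modern aside, the figures above can be checked in a few lines of arithmetic, using exactly the round numbers the text gives (twenty-five books a year over fifty years of reading):

```python
# Verify the reading arithmetic with the figures given in the text.
total_books = 4_500_000      # conservative estimate of books in the world
books_per_year = 25          # one book every two weeks, roughly
years = 50                   # a fifty-year reading life

books_read = books_per_year * years
print(books_read)                  # 1250
print(total_books // books_read)   # 3600 -- one book in every 3,600
```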
From this it is apparent that even the most omnivorous reader, even
the reader who can cover a book swiftly by efficient skipping, will at
least have to ask himself before beginning a volume, "Is this a book
in a thousand? Can I afford to read this at the cost of missing nine
hundred and ninety-nine others?" And most men who ask this question
will have to substitute the number five thousand, or even ten thousand.
Nine-tenths of our reading is on mere chance recommendation, passing
whim or by sheer accident. We catch sight of a book on a library table.
Having nothing better to do we pick it up; we start perusing it. Every
book read in this way means a sinful waste of time. To be sure, a book
read in this chance manner _might_ (accidentally) be very good—even
better than some you would have planned for; but this will happen
seldom, and is never a justification of the practice. By going a
roundabout way to a place a man might stumble across a lost pocketbook,
but this would not justify taking roundabout ways.
The first thing needed, then, is that we should plan our reading.
Perhaps the best way to do this would be to make out a list of the
books we intend to read for the coming year, or say a list of from a
dozen to twenty-five volumes, and then read them in the order listed.
Another good plan is to jot down the title of every book we intend to
read, and keep the list about with us. Then when we meet with a book
which we think would be good to read, or which we feel we simply must
read, we can before starting it glance at our list. The formidable
array we find there will probably induce us either to give up entirely
our intention to read the book before us, or at least to put it
somewhere on the list which will allow more important books to be read
first.
Some people cannot endure planning their reading in this manner. It
grates on them to think they are tied down to any sort of program;
it seems to deprive them of the advantages of spontaneous interest.
Well, if you cannot plan your reading prospectively, at least plan it
retrospectively. If you cannot keep a list of books you _intend_ to
read, at least keep a list of books you _have_ read. Refer to this
from time to time. See whether you have been reading uniformly good
literature. See whether you have been reading too much on one topic and
not enough on another, and what topics you have been long neglecting.
But at best this method is a poor substitute for planning your reading
prospectively.
We should plan not only with regard to topics and subjects, but with
regard to authors. Obviously if two men of equal ability both study
the same subject, one will get more out of his study than the other if
he reads authors who treat the subject on a deeper plane—provided of
course he understands them.
Whether consciously or not, we tend to imitate the authors we read. If
we read shallow books we are forced, while reading them, to do shallow
thinking. Our plane of thought tends toward the plane of thought of
the authors we study; we acquire either habits of careful critical
thinking, or of dogmatic lack of thinking.
This emphasizes the importance of reading the best books, and _only_
the best books. Our plane of thinking is determined not alone by the
good books we read, but by all the books we read; it tends toward
the _average_. Most men imagine that when they read a good book they
get a certain amount of good out of it, and that this good will stay
with them undiminished. Provided they read a certain number of serious
books, they see no reason why they should not read any number of
superficial or useless books, or any amount of ephemeral magazine or
newspaper literature. They expect the serious reading to benefit them.
They do not expect the shallow reading to harm them. This is just as if
they were to buy and eat unnutritious and indigestible food, and excuse
themselves on the ground that they ate nourishing and digestible food
along with it.
The analogy may be carried further. As it is the average of the
physical food you digest which ultimately determines the constitution
of your body, so it is the average of the mental food you absorb which
determines the constitution of your mind. One good meal will not offset
a week of bad ones; one good book will never offset any number of poor
books. Further, as no one has a perfect memory, you do not retain all
you read any more than you retain all you eat. Therefore if you do
not want your mind to retrogress, you should not rest satisfied with
books already read, but should continue to read books at least as good
as any previous. As at any given time your bodily health—so far as it
depends on food—is mainly determined by the meals of the last few days
or weeks, so is your mental health dependent on the last few books you
have read.
One of the first things we should look to in selecting books is their
comprehensiveness. To quote Arnold Bennett: "Unless and until a man has
formed a scheme of knowledge, be it but a mere skeleton, his reading
must necessarily be unphilosophical. He must have attained to some
notion of the interrelations of the various branches of knowledge
before he can properly comprehend the branch of knowledge in which
he specializes."[23] As an aid in forming this scheme of knowledge,
Mr. Bennett suggests Herbert Spencer's _First Principles_. I heartily
endorse his choice. I would add to it the essay on _The Classification
of the Sciences_ by the same author.
These works are classics, and one of the most regrettable of
difficulties is that of getting people to read the classics. Mention
to a man Darwin's _Origin of Species_ or _Descent of Man_, and he
will reply, "Oh, yes, that's the theory that says men descended from
monkeys." Satisfied that he knows all there is to know about it, he
never reads any of Darwin's works. Now passing over the fact that
the theory does not assert that man descended from monkeys and never
intended to assert it;—what a compliment to Darwin's thought and
brevity to assume that all his books can be summed up in a phrase!
But Darwin is not the only sufferer. If we come across the title of a
classic often enough, and hear a lot of talk "about it and about" and
a few quotations from it, we gradually come to believe we know all
the contents worth knowing. This is why Shakespeare, and in fact most
of the classics, are so seldom actually read, and why we go for our
serious reading to a book on "How to Read Character from Handwriting"
or to a sensational volume on prostitution by one of our modern
"sociologists." The only way we can keep ourselves from such stuff is
to lay out some definite end, some big objective, to be attained; and
before reading a book we should ask how that helps us to attain it.
I have not given a formal list of books worth reading, nor do I intend
to; one of the reasons being that the work has been done so well by
others. Ever since Sir John Lubbock published his list of one hundred
best books, the number of selections has been legion. Charles Eliot's
selection for his _Five Foot Shelf_ is to be commended, and a little
volume by Frank Parsons, _The World's Best Books_. Of course our purpose
is special:—to find the best books for making thinkers; but the remarks
already made should aid the reader sufficiently in making his own
selection from these lists. As previously pointed out, if the reader
is studying a specialty he can usually find a fairly well selected
bibliography at the end of the article on that specialty in any
standard encyclopedia.
* * * * *
The reader probably sees clearly by now that it is impossible to do
his own thinking in every case; that if he is to have sound knowledge
on important questions he must have the courage to be ignorant of
many things. How much trouble to go to in any particular case it is
difficult to say.
We can lay it down as a general principle that questions of the
highest importance, such as those of which I have given a suggestive
list—questions which deal with facts known or easily ascertainable,
and which depend for their right solution more on thinking than on
anything else—a man should solve for himself, and should take the
greatest caution in so doing. On the other hand, questions of the
highest importance which depend for their solution mainly on full and
detailed knowledge of highly technical facts which lie outside of one's
specialty, should be dealt with by consulting authorities and taking
their word for it.
There still remains the great mass of questions which are relatively
unimportant, but continually coming up in our daily life, the answers
to which greatly influence our conduct. Time forbids us not only
from thinking these out for ourselves, but even from consulting an
authority—for the selection of an authority often involves almost as
much intellectual responsibility as self-thinking. The only thing we
can do is to accept the verdict of popular opinion.
Custom, convention and popular belief, no matter how many times they
have been overthrown, have fairly reliable foundations. Popular ideas,
to be sure, are products of mere unorganized experience. They are
empirical; seldom if ever scientific. But though they are founded on
experience which is _unorganized_, they are founded on so much of it
that they are worthy of respect. Society could not long exist if it
persisted in acting on beliefs altogether wrong, though it is safe to
say that popular ideas are never more than approximately right. But
unless and until you have either thoroughly thought over a question for
yourself or have consulted an acknowledged and trustworthy authority,
it is best tentatively to accept and act on common belief. To think
and act differently, merely for the sake of being different, is
unprofitable and dangerous, all questions of ethics aside.
X
THINKING AS AN ART
I discovered, though unconsciously and insensibly, that the pleasure
of observing and reasoning was a much higher one than that of skill
and sport.—DARWIN'S _Autobiography_.
To know is one thing; to do another. To know the science of thinking
is not to possess the art of thinking. Yet I doubt not that there are
readers who, having finished, would deem it sufficient that they had
the knowledge, and would feel they had gotten all the good or harm out
of this book that there is in it. They would put it aside. They would
think no more of it.
The trouble with these good people (unfortunately I speak of the
overwhelming majority) is that they expect information to apply itself.
They expect that once they have learnt a thing they will act according
to their knowledge. This is the very last thing a normal human being
does.
The only way we can ever get ourselves to apply knowledge is to do so
by what will at first be a conscious effort. We shall have to devote
much attention to it. Old established custom will have to be broken.
We do not act according to knowledge; we act according to habit. Even
after we have decided, for instance, that we ought to give a little
independent thinking to a subject before reading about it, we shall
very likely continue to read books without previous thought.
Some people may imagine that the reason we do not practice what we
learn is that we do not remember what we learn. They are mistaken. When
learning German, I had much difficulty in knowing what prepositions
required the genitive, dative or accusative cases. I finally learnt all
of them alphabetically in their respective groups, and could rattle
them off at a rate which would make most native Germans blush for envy.
The only trouble was that when I came to an actual sentence requiring
one of these prepositions I continually forgot to apply my knowledge.
Some one would have to point an error out to me before it would occur
to me to do so. Even then I would have to think long before the proper
case occurred.
But while it is not true that we fail to practice a thing merely
because we fail to remember it, it is true that if we do not practice
we are not very likely to remember it. The only way we could remember
would be by constant re-reading, for knowledge unused tends to drop out
of mind. Knowledge used does not need to be remembered; practice forms
habits and habits make memory unnecessary. The rule is nothing; the
application is everything.
Practice being the thing needful, it is essential that we put aside a
certain amount of time for it. Unless you lay out a definite program,
unless you put aside, say, one-half hour every day, for pure downright
independent thinking, you will probably neglect to practice at all.
One-half hour out of every twenty-four seems little enough. You may
think you can fit it in with no trouble. But no matter how shamelessly
you have been putting in your time, you have been doing _something_
with it. In order to get in your thirty minutes of thinking, you will
have to put aside something which has been habitually taking up a half
hour of your day. You cannot expect simply to add thinking to your
other activities. Some other activity must be cut down or cut out.[24]
You may think me quite lenient in advising only one-half hour a day.
You may even go so far as to say that one-half hour a day is not
enough. Perhaps it isn't. But I am particularly anxious to have some
of the advice in this book followed. And I greatly fear that if I
advised more than a half hour most readers would serenely neglect my
advice altogether. After you have been able for a month to devote at
least one-half hour a day to thinking, you may then, if you choose,
extend the time. But if you attempt to do too much at once, you may
find it so inconvenient, if not impracticable, that you may give up
attempting altogether. Throughout the book I have constantly kept in
mind that I wish my advice followed. I have therefore laid down rules
which may reasonably be adhered to by an average human, rules which
do not require a hardened asceticism to apply, and rules which have
occasionally been followed by the author himself. In this last respect,
I flatter myself, the present differs from most books of advice.
Above all I urge the reader to avoid falling into that habit so
prevalent and at the same time so detrimental to character:—acquiescing
in advice and not following it. You should view critically every
sentence in this book. Wherever you find any advice which you think
needless, or which requires unnecessary sacrifice to put into practice,
or is wrong, you should so mark it. And you should think out for
yourself what would be the best practice to follow. But when you agree
with any advice you see here, you should make it your business to
follow it. The fact that part of the advice may be wrong is no reason
why you should not follow the part that is right.
Most people honestly intend to follow advice, and actually start to
do it, but . . . They try to practice everything at once. As a result
they end by practicing nothing. The secret of practice is to learn
thoroughly one thing at a time. As already stated, we act according
to habit. The only way to break an old habit or to form a new one
is to give our whole attention to the process. The new action will
soon require less and less attention, until finally we shall do it
automatically, without thought—in short, we shall have formed another
habit. This accomplished we can turn to still others.
As an example let us take the different methods of looking at questions
considered in the second chapter. Most readers will glance over these
methods, and agree that they are very helpful—and the next problem
which perplexes them will probably be solved by no method at all, or
will be looked at from one standpoint only.
About the best, perhaps the only way by which the reader could get
himself to use habitually every valuable method possible, would be to
take one of the methods, say the evolutionary, and consciously apply
it, or attempt to apply it, to a whole list of problems. In this way
he could learn the possibilities and limits of that particular method.
Again, he could take an individual problem and consciously attempt to
apply every possible method to its solution. He could continue such
practice until he had so formed the habit of using method that it
would be employed almost unconsciously. Concentration, method in book
reading, and all the other practices here advocated should be learned
in the same conscious, painstaking way, one thing at a time, until
thoroughly ingrained. It must be left to the reader's own ingenuity to
devise the best methods of acquiring each particular habit.
Of course it is possible to do a thing well—it is possible to follow
the rule for doing it—without knowing the rule. If a man take a live
interest in a subject he will naturally tend to look at it from a
number of different viewpoints. If he be eternally on the lookout for
errors and fallacies in his own thinking he will gradually evolve a
logic of his own. And this logic will be concrete, not abstract; it
will be something built into, an integral part of, concrete thought,
and he will be constantly strengthening the habit of using it. Compared
with the logic of the books it may be crude, but it will not consist of
mere rules, which can be recited but which are seldom applied.
So with grammar. Instance the writer's experience with German. Few
native Germans could recite offhand what prepositions govern the
genitive, dative and accusative, even if they knew what was meant by
these terms. But they would (most of them) use these cases correctly,
and without the least thought. The educated Englishman or American
flatters himself that his correct speech is due to his study of
grammar. This is far from true. His speech is due to unconscious
imitation of the language of the people with whom he comes into
contact, and of the books he reads. And needless to say, the cultivated
man comes into contact with other cultivated men and with good
literature; the ignoramus does not.
Most of our thinking is influenced in this way. The great thinkers
of the past improved their innate powers not by the study of rules
for thinking, but by reading the works of other great thinkers, and
unconsciously imitating their habitual method and caution.
The fact to remember is that a rule is something that has been
formulated after the thing which it rules. It is merely an abstract of
current practice or of good practice. Rules are needful because they
teach in little time what would otherwise require much experience to
learn, or which we might never discover for ourselves at all. They help
us to learn things right in the beginning; they prevent us from falling
into wrong habits. The trouble with unsupplemented imitation, conscious
or unconscious, is that we tend to imitate another's faults along with
his virtues. Rules enable us to distinguish, especially if we have
learned the reason for the rules.
But practice and rules should not be compared as if they were opposed.
The true road is plenty of practice with conscientious regard to rule.
It may be insisted that this has its limits; that there is a point
beyond which a man cannot improve himself. I admit that practice has
its limits. It may be true that there is a point beyond which a man
cannot advance. But nobody knows those limits and no one can say when
that point has come.
No two individuals profit in the same degree by the same practice.
With a given amount one man will always improve faster than another.
But the slower man may keep up with his more speedy brother by more
practice. I shall not repeat here the fable of the hare and the
tortoise. But any one who has discovered a flaw in his mental make-up,
any one who believes that he cannot concentrate, or that his memory is
poor, and that therefore he can never become a thinker, should find
consolation in the words of William James:
"Depend upon it, no one need be too much cast down by the discovery of
his deficiency in any elementary faculty of the mind. . . . The total
mental efficiency of a man is the resultant of all his faculties. He
is too complex a being for any one of them to have the casting vote.
If any one of them do have the casting vote, it is more likely to be
the strength of his desire and passion, the strength of the interest
he takes in what is proposed. Concentration, memory, reasoning power,
inventiveness, excellence of the senses—all are subsidiary to this.
No matter how scatter-brained the type of a man's successive fields
of consciousness may be, if he really _care_ for a subject, he will
return to it incessantly from his incessant wanderings, and first and
last do more with it, and get more results from it, than another person
whose attention may be more continuous during a given interval, but
whose passion for the subject is of a more languid and less permanent
sort."[25]
XI
BOOKS ON THINKING
The reader who desires to study further on the subject of thinking will
find a wide field before him—but he will have to search in cosmopolitan
quarters. While much has been written on thinking, it has been in an
incidental manner, and has found its way into books written mainly
to illuminate other subjects. Among the few books or essays devoted
exclusively or mainly to thinking may be mentioned:—John Locke, _The
Conduct of the Understanding_; Isaac Watts, _The Improvement of the
Mind_; Arnold Bennett, _Mental Efficiency_; T. Sharper Knowlson, _The
Art of Thinking_; Arthur Schopenhauer, _On Thinking for Oneself_, in
his _Essays_. The last is especially recommended. It is only about a
dozen pages long, and is the most stimulating essay written on the
subject. This, together with John Locke's _Conduct_ (which, by the way,
is also fairly short) may be considered the two "classics" in the
meager literature on thinking.
There is an extensive literature on the psychology of reasoning, on the
"positive" science of thinking. The best single work on this subject is
John Dewey's _How We Think_. William James' chapter on _Reasoning_ in
his _Principles of Psychology_ might also be consulted with profit. S.
S. Colvin's, _The Learning Process_ contains some interesting chapters
bearing on thought.
On method, the amount of literature is even more imposing than that
on the psychology of reasoning. Probably the most thorough book is
Stanley Jevons's _The Principles of Science_, though this, consisting
of two volumes, will require quite some ambition to attack. A good
recent short work is J. A. Thomson, _Introduction to Science_.
Herbert Spencer's short essay, _An Element in Method_, in his
_Various Fragments_ might also be mentioned. Of those works treating
method mainly from a corrective standpoint, I have already mentioned
Jevons's _Elementary Lessons in Logic_. _The_ authoritative and most
comprehensive book on logic is still John Stuart Mill's great tome. Of
course this list of books on method, as well as that on the psychology
of reasoning, cannot pretend to be more than merely suggestive. If the
reader desires an extensive bibliography in either of these subjects he
will probably find it in one of the books mentioned.
On doubt and belief, William Clifford, _The Ethics of Belief_, and
William James, _The Will to Believe_, might be read. The viewpoints of
the two essays are in almost direct contradiction.
On reading, Alexander Bain's _The Art of Study_, in his _Practical
Essays_, will be found useful. Bacon's essay _On Studies_, which is not
more than a couple pages long, contains more concentrated wisdom on the
subject than is to be found anywhere.
On subjects most worth thinking about, the reader cannot do better than
read Herbert Spencer's essay _What Knowledge is of Most Worth?_ in his
_Education_. As to books most worth reading, consult the lists of John
Morley, Sir John Lubbock, and Frederic Harrison; Sonnenschein's _Best
Books_ (in two volumes); Baldwin's _The Book Lover_; Dr. Eliot's _Five
Foot Shelf_ and Frank Parson's _The World's Best Books_, previously
referred to.
On the art of living—the art of planning time so as to have room for
thinking, as well as valuable hints as to how that thinking is to be
carried out—consult Arnold Bennett, _How to Live on Twenty-four Hours a
Day_, and E. H. Griggs, _The Use of the Margin_ (both very, very small
books).
Finally, there is much useful material, as well as incalculable
inspiration, to be obtained from the intellectual and literary
biographies of great thinkers. Especially is this true of
autobiography. Among others may be mentioned the autobiographies of
John Stuart Mill and Herbert Spencer, and an autobiographical fragment
by Charles Darwin.
FOOTNOTES:
[ 1] See Herbert Spencer, _Education_.
[ 2] Pillsbury, _Essentials of Psychology_.
[ 3] _Principles of Psychology_, Vol. II, p. 332.
[ 4] See William A. Scott, _Money_.
[ 5] _How We Think._
[ 6] _Autobiography_, Vol. I, p. 463.
[ 7] _Autobiography._
[ 8] Hugh Elliot, _The Letters of John Stuart Mill_.
[ 9] Essay, _Over-Legislation_.
[10] _The Conduct of the Understanding._
[11] _The Will to Believe._
[12] _Autobiography._
[13] _Science and Education._
[14] T. Sharper Knowlson, _The Art of Thinking_.
[15] _The Conduct of the Understanding._
[16] _On Thinking for Oneself._
[17] This may seem unjustified. Witness, however, this remarkable
statement in a prospectus of Charles Eliot's "Five Foot Shelf": " . . .
The man who has not read the 'Wealth of Nations' is hardly qualified to
speak or even think wisely on these vital subjects." If this be true,
Adam Smith himself was hardly qualified because he certainly could not
have read his own book before he had written it!
[18] Essay _On Thinking for Oneself_.
[19] _Autobiography._
[20] Edward Griggs, _The Use of the Margin_.
[21] _Literary Taste._
[22] The most advanced and severe psychologists may object to some
statements in this exposition. I admit that a word may be used as
the concept, _but only provided it is accompanied by a "fringe" of
potential associates_. I also admit that in order to be dealt with as
if general, the visual image must be accompanied by such a "fringe."
But I do insist that this fringe itself is in a constant state of flux.
That is the important point for our present purposes.
[23] _Literary Taste._
[24] And consult Arnold Bennett's _How to Live on 24 Hours a Day_.
[25] _Talks to Teachers._
THE END
TRANSCRIBER'S NOTE:
Original spelling and grammar have been generally retained. Original
small caps are now uppercase. Italics look _like this_. Footnotes have
been relabeled 1–25 and moved to the end of the book. The transcriber
produced the cover image and hereby assigns it to the public domain.
Original page images are available from archive.org—search for
"cu31924031014867".
End of Project Gutenberg's Thinking as a Science, by Henry Hazlitt (1894-1993)
Five Moves of Doom by AJ Devlin
Five Moves of Doom is the kind of novel we like to point to when we talk about independent publishing and how it feeds vibrant new material into the crime fiction scene. I'm not sure any mainstream publishers would dare touch a book where the…
When We Fell Apart by Soon Wiley
By Sonja van der Westhuizen
Where do you truly belong? In your ancestral homeland or the country where you were born? Through the eyes of two distinct characters, Soon Wiley's debut novel examines the search for one's true cultural identity. The driving force behind this search is the mysterious death…
Never Forget by Michel Bussi
Translated by Shaun Whiteside — With its release timed to coincide with the UK's first French Book Week and Bastille Day, Never Forget is French crime author Michel Bussi's latest suspense thriller and the fifth to be translated into English. Jamal Salaoui has a goal…
Q: Error converting jQuery.ajax to XMLHttpRequest

I am struggling to convert the following $.ajax call to use the XMLHttpRequest object:
function doPost(url, queryString, successCallback) {
$.ajax({
type : 'POST',
url : url,
dataType : 'json',
data : queryString,
success : successCallback,
error : function(jqXHR, textStatus, errorThrown) {
alert('ajax:error-' + errorThrown);
},
timeout : function() {
alert('ajax:timeout');
},
abort : function() {
showWaitCursor(false);
alert('ajax:abort');
appendLog('doPost:abort');
},
parsererror : function() {
alert('ajax:parsererror');
}
});
}
What is wrong with the following:
function doPost2(url, queryString, successCallback) {
var kvs = queryString.split("&");
var formData = new FormData();
for(var i = 0; i < kvs.length; i++) {
var kv= kvs[i].split("=");
var key = kv[0];
var value = kv[1];
formData.append(key, value);
}
var request = new XMLHttpRequest();
request.addEventListener('load', successCallback, false);
request.open("POST", url, true);
request.setRequestHeader("Accept","application/json");
request.setRequestHeader("Content-Type", "application/json");
request.send(formData);
}
The callback's event.target.responseText is "Unexpected character ('-' (code 45)) in numeric value: expected digit (0-9) to follow minus sign, for valid numeric value
at [Source: org.apache.catalina.connector.CoyoteInputStream@547b6d1c; line: 1, column: 3]"
What is wrong with the above doPost2?
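The error message itself points at the bug: doPost2 sets the header "Content-Type: application/json" but then calls request.send(formData), which the browser transmits as a multipart/form-data body. A multipart body begins with boundary dashes ("--…"), so a server-side JSON parser trips over the leading '-' characters, which matches the exception quoted above. A minimal sketch of two possible fixes follows (doPost3 and queryStringToJson are illustrative names, not library functions):

```javascript
// Fix 1: send what $.ajax actually sent -- a url-encoded body whose
// format matches its Content-Type header.
function doPost3(url, queryString, successCallback) {
  var request = new XMLHttpRequest();
  request.addEventListener("load", successCallback, false);
  request.open("POST", url, true);
  request.setRequestHeader("Accept", "application/json");
  request.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
  request.send(queryString); // body format now matches the declared header
}

// Fix 2: if the endpoint really does expect JSON, build a JSON body
// from the query string before sending it with the original headers.
function queryStringToJson(queryString) {
  var obj = {};
  queryString.split("&").forEach(function (pair) {
    var kv = pair.split("=");
    obj[decodeURIComponent(kv[0])] = decodeURIComponent(kv[1] || "");
  });
  return JSON.stringify(obj); // e.g. "a=1&b=2" becomes {"a":"1","b":"2"}
}
```

Note also that in the original $.ajax call, dataType: 'json' only describes the expected *response* format; jQuery sent the request body url-encoded, which is why the server parsed it without complaint.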
<?php namespace net\xp_framework\unittest\core\generics;
/**
* Lookup map
*
*/
#[@generic(self= 'K, V')]
interface IDictionary {
/**
* Put a key/value pair
*
* @param K key
* @param V value
*/
#[@generic(params= 'K, V')]
public function put($key, $value);
/**
* Returns a value associated with a given key
*
* @param K key
* @return V value
* @throws util.NoSuchElementException
*/
#[@generic(params= 'K', return= 'V')]
public function get($key);
/**
* Returns all values
*
* @return V[] values
*/
#[@generic(return= 'V[]')]
public function values();
}
Efficient C++ optimized functions for numerical and symbolic calculus.

The R package calculus implements C++ optimized functions for numerical and symbolic calculus, such as the Einstein summing convention, fast computation of the Levi-Civita symbol and generalized Kronecker delta, Taylor series expansion, multivariate Hermite polynomials, high-order derivatives, ordinary differential equations, differential operators and numerical integration in arbitrary orthogonal coordinate systems. The library applies numerical methods when working with functions or symbolic programming when working with characters or expressions. The package handles multivariate numerical calculus in arbitrary dimensions and coordinates and implements the symbolic counterpart of the numerical methods whenever possible, without depending on external computer algebra systems. Except for Rcpp, the package has no strict dependencies in order to provide a stable self-contained toolbox that invites re-use.

## Quickstart

Install the package.

install.packages("calculus")

library(calculus)

Read or browse the documentation and the vignettes.

## Philosophy

The package provides a unified interface to work with mathematical objects in R. The library applies numerical methods when working with functions or symbolic programming when working with characters or expressions. To describe multidimensional objects such as vectors, matrices, and tensors, the package uses the class array regardless of the dimension. This is done to prevent unwanted results due to operations among different classes such as vector for unidimensional objects or matrix for bidimensional objects.

## Dependencies

The package integrates seamlessly with cubature for efficient numerical integration in C. However, except for Rcpp, the package has no strict dependencies in order to provide a stable self-contained toolbox that invites re-use.

## Testing

Several unit tests are implemented via the standard framework offered by testthat and run via continuous integration.

## Contribute

Report a bug and star the repository.

## Cite as

Guidotti E (2022). "calculus: High-Dimensional Numerical and Symbolic Calculus in R." Journal of Statistical Software, 104(5), 1-37. doi:10.18637/jss.v104.i05

A BibTeX entry for LaTeX users is

@Article{calculus,
  title = {{calculus}: High-Dimensional Numerical and Symbolic Calculus in {R}},
  author = {Emanuele Guidotti},
  journal = {Journal of Statistical Software},
  year = {2022},
  volume = {104},
  number = {5},
  pages = {1--37},
  doi = {10.18637/jss.v104.i05},
}
Q: Applying edge detection in image segmentation

I have two questions about snake edge detection.
The first question is: in OpenCV there is a function cvSnakeImage(src, points, ...); what does the points parameter mean?
The second question is: I want to apply this function to the region enclosed by an edge that I have already found; how can I do this?
This is my code for edge detection:
cvCanny(img_bw, img_canny , thresh, thresh * 50, 3);
cvFindContours(img_canny, contour, &contours, sizeof(CvContour), CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
A: The points parameter is the pointer to the snake's array of points.
https://fossies.org/dox/opencv-2.4.8/snakes_8cpp_source.html
Here is an example of how to use snakes
http://download.andol.info/cvsnakeimage.cpp
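To make the idea behind the point array concrete: a snake iteratively moves its points to minimize an energy that is low near image edges. The sketch below is a toy, pure-NumPy greedy step (a hypothetical helper, `greedy_snake_step`, not the OpenCV implementation, which additionally weights continuity and curvature via its alpha/beta/gamma parameters). It only illustrates how the supplied point array gets deformed toward an "edge":

```python
import numpy as np

def greedy_snake_step(energy, points, win=1):
    """One greedy iteration of an active contour ("snake"):
    each point moves to the lowest-energy pixel in a small window
    around it (ties broken by scan order)."""
    h, w = energy.shape
    new_points = []
    for y, x in points:
        best_y, best_x, best_e = y, x, energy[y, x]
        for dy in range(-win, win + 1):
            for dx in range(-win, win + 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and energy[ny, nx] < best_e:
                    best_y, best_x, best_e = ny, nx, energy[ny, nx]
        new_points.append((best_y, best_x))
    return new_points

# Toy external energy: cheap along the column x == 5 (a synthetic "edge"),
# analogous to the negative gradient magnitude a real snake would use.
energy = np.abs(np.arange(10)[None, :] - 5).repeat(10, axis=0).astype(float)
points = [(2, 1), (5, 8)]
for _ in range(5):
    points = greedy_snake_step(energy, points)
print(points)  # every point has migrated onto the x == 5 "edge" column
```

In the same spirit, the points extracted from cvFindContours can serve as the initial contour handed to cvSnakeImage, which then refines them against the image energy.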
March in March - The Final Report
The Pall-Ex team has completed its March in March challenge in aid of Combat Stress.
The team, headed up by members of the Pall-Ex marketing team, took on the challenge that tasks participants with walking 10 miles either in one day or over the course of March.
The team organised a number of marches, local to the Pall-Ex hub in Coalville, Leicestershire, to which they invited other members of Pall-Ex staff to join them.
Marches One and Two were undertaken as planned in early March. However, following Government directives to reduce social gatherings and then to work from home where possible in response to the growing Covid-19 pandemic, the team have been completing their 10 miles independently at home.
Despite the cancellation of the final two formal marches, the team have raised a brilliant £300 thanks to the extremely generous support of many people.
Abby Langley, Pall-Ex's UK & European Marketing Manager, explained why she has enjoyed the March in March Challenge.
"We wanted to take on the challenge to support our charity partner Combat Stress and the great work they do supporting veterans' mental health.
"Knowing our marches were raising money to help such a good cause was amazing motivation.
We have raised a great amount of money for Combat Stress, so thank you to everyone who supported us and donated on our page!"
Abby Langley, UK & European Marketing Manager, Pall-Ex
"It is a shame we couldn't complete all the marches together, but it is understandable given the current climate.
Getting out of the house on our own to make sure we got all the miles done has really benefited all of us, both physically and mentally, especially over the last week."
The funds raised by Team Pall-Ex will help Combat Stress provide outstanding care and support for veterans who are struggling with their mental health.
If you would like to know more about Combat Stress, you can visit their website by clicking here.
To see all the latest news and events from Pall-Ex and our award-winning network, like and follow us on our Facebook, Twitter and LinkedIn channels.
Sunday Post, Showcase Sunday and Stacking the Shelves.....November 30, 2014
The Sunday Post is a weekly meme hosted by The Caffeinated Book Reviewer, Showcase Sunday is hosted by Books, Biscuits and Tea, and Stacking the Shelves is hosted by Tynga's Reviews.
All three are blog roundups giving you a chance to share your weekly book haul!
I'm sitting here finishing this post with a touch of sadness. It's the last day of our four day Thanksgiving break and I have thoroughly relaxed and enjoyed myself. I've got a vacation coming up, and the Christmas Break is right around the corner, so I shouldn't complain, but boo to Monday!
Thank you to Edelweiss....
The Mermaid's Child by Jo Baker....Malin has always felt different. The fact that, according to her father, her absent mother was actually a mermaid only makes matters worse. When Malin's father dies, leaving her alone in the world, her choice is clear: stay, and never feel at home, or leave and go in search of the fantastical inheritance she is certain awaits her. Apprenticed to a series of strange and wonderful characters, Malin embarks on a picaresque journey that crosses oceans and continents—from the high seas to desert plains, from slavery to the circus—and leads to a discovery that is the last thing Malin ever could have expected. Beautifully written and hauntingly strange, The Mermaid's Child is a remarkable piece of storytelling, and an utterly unique work of fantasy.
I am intrigued by this premise. I can't wait to get started.
The Cake House by Latifah Salom....Rosaura Douglas's father committed suicide—or at least that's what they are telling her. Now she is forced to live in a house she calls "the Cake House"—a garish pink edifice in the wealthy part of town. It's the house where her father died, and owned by her mysterious new stepfather, Claude. But when her father's ghost appears and warns Rosie not to trust Claude, Rosie begins to notice cracks in her new family's carefully constructed facade. Her mother, Dahlia, is obviously uncomfortable in her new marriage; her stepbrother, Alex, is friendly one second, distant the next; and Claude's business is drawing scrutiny from the police. As her father's ghost becomes increasingly violent—and the secrets haunting the halls of The Cake House thicken—Rosie wonders who, if anyone, is worth trusting.
I think this looks so good, but it looks like it's going to be an intense read.
Hush, Hush by Laura Lippman....Now the mother of a toddler, Tess Monaghan is short on time, patience, and energy. But with orthodontia and college tuition looming, she takes on a case outside of her comfort zone with her new partner, retired Baltimore P.D. homicide detective Sandy Sanchez. They've been hired to assess the security needs of a very rich, very beautiful, and very imperious woman named Melisandre, who has returned to Baltimore to reunite with her estranged daughters—and wants to capture the reunion on film for posterity.
It's a gutsy and controversial move by a woman who relinquished her custody rights a decade ago. Especially when her youngest daughter died in her care—in what was determined to be an episode of post-partum psychosis. Or was it? Tess tries to ignore the discomfort she feels around Melisandre. But it's difficult, especially after Melisandre becomes a prime suspect in a murder—and Tess realizes she has her own, very judgmental stalker.
I love Laura Lippman, and I can't wait to start this new Tess Monaghan!
I Bought.....
Murder at Honeysuckle Hotel....It's summertime in Honeysuckle, and everyone is lazing in the shade with a tall glass of lemonade. Everyone except Raelynn Pendleton. She's stuck working at the local store to make the rent while her no-good ex-husband lives it up with a floozy.
When she inherits a Victorian house, Raelynn jumps at the chance to turn her life around. How can she afford the upkeep on such a huge place? Simple. She'll run it as a hotel. Problem is, she has no experience and the décor dates back to the Dark Ages. She'll have to use her secret talent for turning junk into treasure or she'll never snag an overnight guest.
But before the new Honeysuckle Hotel even opens for business, Raelynn discovers the body of a young woman in the garden. As a newcomer in town, Raelynn is blamed for the murder. She's fired from her job, which could mean she'll lose the house. The only way to save Honeysuckle Hotel is to find the real killer - with or without the sexy Sheriff Kent Klein.
I've read a Rose Pressey book before, and I enjoyed it, so I thought this was a great chance when it was on sale for 99 cents!
The Last Anniversary by Liane Moriarty.....Sophie Honeywell always wondered if Thomas Gordon was the one she let get away. He was the perfect boyfriend, but on the day he was to propose, she broke his heart. A year later he married his travel agent, while Sophie has been mortifyingly single ever since. Now Thomas is back in her life because Sophie has unexpectedly inherited his aunt Connie's house on Scribbly Gum Island -- home of the famously unsolved Munro Baby mystery.
Sophie moves onto the island and begins a new life as part of an unconventional family where it seems everyone has a secret. Grace, a beautiful young mother, is feverishly planning a shocking escape from her perfect life. Margie, a frumpy housewife, has made a pact with a stranger, while dreamy Aunt Rose wonders if maybe it's about time she started making her own decisions.
I love Liane Moriarty. I have Little Lies packed in my suitcase for my trip to Hawaii this week!
Unbroken by Laura Hillenbrand.... In boyhood, Louis Zamperini was an incorrigible delinquent. As a teenager, he channeled his defiance into running, discovering a prodigious talent that had carried him to the Berlin Olympics.
But when World War II began, the athlete became an airman, embarking on a journey that led to a doomed flight on a May afternoon in 1943. When his Army Air Forces bomber crashed into the Pacific Ocean, against all odds, Zamperini survived, adrift on a foundering life raft. Ahead of Zamperini lay thousands of miles of open ocean, leaping sharks, thirst and starvation, enemy aircraft, and, beyond, a trial even greater.
Driven to the limits of endurance, Zamperini would answer desperation with ingenuity; suffering with hope, resolve, and humor; brutality with rebellion. His fate, whether triumph or tragedy, would be suspended on the fraying wire of his will.
I think I'm the only person on the planet who hasn't read this yet. But I never read or saw Seabiscuit either.
Perfect Girl by Michelle Gorman.....Cinderella meets Falling Down in this wickedly funny tale about having it all
Carol is perfect… at least that's what everyone thinks. In reality she's sinking fast – her family treats her like their personal assistant and her boyfriend is so busy with work that he's got her single-handedly running their relationship. Not that her job is any easier. As the only woman on the bank's trading floor she spends twelve-hour days trying not to get sworn at or felt up by colleagues who put the "W" in banker*.
How long can she go on pleasing everyone else before she snaps and loses it all?
This looks so good. Who amongst us hasn't wondered if we are going to snap some days???
Five Fires by Laura Lippman
What They Say....Everyone in small-town Belleville is talking about a series of mysterious fires disrupting the typically tranquil summer. The authorities attribute them to heat lightning, but some Belleville residents are not so sure…
High-school student Beth, like everyone else in Belleville, has been following the fires – she has plenty of time between her monotonous day job at the deli and solitary nights at home while her mom works late. The fires aren't the only unusual occurrence – Beth's old friend Tara, who left town the year before after a scandal, returns with no real explanation. Circumstances only get stranger when Beth unwittingly discovers clues as to what – or who – is the cause of the fires.
What I Say....I love Laura Lippman, so I was happy to get an ARC of this novella. The thing was I didn't realize it was a novella, so I was pretty disappointed when it came to such a quick end.
I've often wondered about the people you see in People magazine defending their town's accused rapists - what makes them so willing to defend someone in print? In this short story, you get to see the fallout of that defense. What is really sad is that the people who shouldn't have gotten away with their crime are the only ones who escape unscathed.
This was a great short story - I just wish it had been a full length novel.
Up and In by Deborah Disney
What They Say....Maria and Joe have saved every available penny to give their daughters Kate and Sarah the best education possible, which to them means attending the most exclusive girls school in the state. But when Kate befriends the spoilt and moody Mirabella, Maria finds herself thrust into a high society of champagne-swilling mother-istas she hasn't budgeted for. Saturday morning netball is no longer a fun mother-daughter outing, but a minefield of social politics.
While the increasingly neurotic Maria struggles to negotiate the school mum hierarchy, Joe quietly battles a midlife crisis and Kate attempts to grow up as gracefully as possible (without having her life ruined by embarrassing parents).
What I Say....Ugh. I've known women like the bea's. They are the moms that can make a little girl's sporting event feel like you've taken a time machine back to junior high school. Only now they have sharper weapons, and when you feel like your children are being hurt, you become even more sensitive.
Take Maria, for example. She spends the first chapter poring over a group email, scouring every response for an insult, despairing that the other mothers don't sign "x" (the electronic kiss that indicates your social standing) after their names when communicating with her, and trying to read between the lines as to who is friend or foe (hint: they are almost all foes).
Let me preface this by saying that I don't like to get into the Mommy wars, I think stay at home moms have it rough and I think working moms have it rough. They have so much in common, but they also face different challenges.
However, having been in both roles, I can say that this book definitely brought back some bad memories of my time at home. I think when you are at home all day, you just have more time to get emotionally invested in these types of relationships (a strong parallel to office relationships and the politics that happen in the workplace).
Once you are working full time and trying to keep up with the kids' activities and home life, you really don't notice these types of women anymore; their barbs either go right over your head, or you find yourself snapping back because you don't really care what they think, because can't they see how TIRED you are????? I did find it interesting that the only character who stayed neutral and friendly with everyone was Nicole, the hairdresser who worked full time.
Okay, I got off topic, but I think it's the mark of really good book that it made me think through the issues that all mommies face.
I liked Maria and her funny family, but by mid-book, I was getting really irritated with her for being so wishy-washy. She hated the way the bea's treated her daughter, but she kept hanging around, setting herself up for more poor treatment. But she didn't hang around as a sycophant; she wouldn't kiss up to Bea, but she wouldn't stand up for herself either. I wanted to shake her and tell her to either grow a pair and tell them to get lost, or just buy the damn hair ties! Take your daughter out of that snobby school and stop signing up for every after-school activity that these beyotches take their kids to, or plaster a big fake smile on her face and play the game.
But Maria was half in and half out throughout the book. Her husband is telling her that he is miserable in his job, and unhappy with the direction of his future, and her only thought is that if he quits his job, how will they pay for Riverton? At that point, it seemed like Maria was the only one who cared about the school, it certainly wasn't doing anything for the rest of the family.
At the end, you see a few of the bea's get their comeuppance, but even then, Maria continues her pattern of one foot in, one foot out. She wracks her brain to think of the perfect text to send to the woman whose life is falling apart, the same woman who has done nothing but try to make Maria's life miserable.
That was my only real complaint - I wanted Maria to get her head on straight and teach her kids what really matters. But even though she could do that when they were alone, she wasn't able to do it in front of the snobs, where the lesson would have really stuck.
Thank you, Net Galley and Harper Collins, for giving me this ARC in exchange for an honest review. This was a 4/5 star read.
Happy Thanksgiving from Me!
Happy, Happy Thanksgiving! I hope everyone has a day filled with family, food and books!
A Fairy Tale by Shanna Swendson
What They Say....Once upon a time, a girl named Sophie Drake danced with the fairies in the woods behind her grandparents' Louisiana home. But she closed the door to the fairy world and turned her back on the Fae when they tried to steal her little sister Emily. Fourteen years later, Sophie heads to New York City on a desperate mission. Emily, now an up-and-coming Broadway actress, has gone missing. Only Sophie suspects the Fae.
Now Sophie has her work cut out for her. Emily's abduction is part of a larger plot involving the missing Queen of the fairy realm. An upstart fairy is making a bid to assume control of the entire Realm, unite the fairies, and become master over the human world. To free her sister, Sophie must derail this power scheme and find the true Queen of the Realm.
That's a lot for a small-town ballet teacher to tackle, but with the unlikely aid of her sometimes flighty sister, a pair of elderly shopkeepers with a secret, a supremely lazy (but surprisingly knowledgeable) bulldog, and a wounded police detective searching for his own missing person, she just might prevail--if she can force herself to confront her own past and face her true nature.
What I Say.... This was a fun read. I always say that I don't really like fantasy, but then I read books like this and I realize that I actually do enjoy fantasy.
I just need it to be fantasy that feels somewhat rooted in reality. This book was the perfect blend of the two. Emily is an actress in New York, who lives above an injured NYPD officer. When she goes missing after a performance, her sister Sophie comes to New York to find her.
Part of what kept me hooked was the way the story crossed between the Fairy Realm and New York. I enjoyed the way the different characters moved between worlds.
The climax of the book felt like an Indiana Jones type adventure, and the end didn't tie up with a "happily ever after", but it ended with me waiting for the sequel!
Top Ten Books On My Winter TBR List
Top Ten Books on My Winter To-Be-Read List.
Honest to God, I need to change my title. I love participating in this blog roundup hosted by The Broke and the Bookish, but I can never get to ten. It's embarrassing.
Oh well, here are my six......(hangs head in shame)
Revival by Stephen King
In a small New England town, over half a century ago, a shadow falls over a small boy playing with his toy soldiers. Jamie Morton looks up to see a striking man, the new minister. Charles Jacobs, along with his beautiful wife, will transform the local church. The men and boys are all a bit in love with Mrs. Jacobs; the women and girls feel the same about Reverend Jacobs—including Jamie's mother and beloved sister, Claire. With Jamie, the Reverend shares a deeper bond based on a secret obsession. When tragedy strikes the Jacobs family, this charismatic preacher curses God, mocks all religious belief, and is banished from the shocked town.
I'll Give You the Sun by Jandy Nelson
Jude and her twin brother, Noah, are incredibly close. At thirteen, isolated Noah draws constantly and is falling in love with the charismatic boy next door, while daredevil Jude cliff-dives and wears red-red lipstick and does the talking for both of them. But three years later, Jude and Noah are barely speaking. Something has happened to wreck the twins in different and dramatic ways . . . until Jude meets a cocky, broken, beautiful boy, as well as someone else—an even more unpredictable new force in her life. The early years are Noah's story to tell. The later years are Jude's. What the twins don't realize is that they each have only half the story, and if they could just find their way back to one another, they'd have a chance to remake their world.
From the #1 New York Times bestselling author comes Kristin Hannah's next novel. It is an epic love story and family drama set at the dawn of World War II.
Saving Grace by Jane Green
Grace and Ted Chapman are widely regarded as the perfect literary power couple. Ted is a successful novelist and Grace, his wife of twenty years, is beautiful, stylish, carefree, and a wonderful homemaker. But what no one sees, what is churning under the surface, is Ted's rages. His mood swings. And the precarious house of cards that their lifestyle is built upon. When Ted's longtime assistant and mainstay leaves, the house of cards begins to crumble and Grace, with dark secrets in her past, is most vulnerable. She finds herself in need of help but with no one to turn to…until the perfect new assistant shows up out of the blue. To the rescue comes Beth, a competent young woman who can handle Ted and has the calm efficiency to weather the storms that threaten to engulf the Chapman household. Soon, though, it's clear to Grace that Beth might be too good to be true. This new interloper might be the biggest threat of all, one that could cost Grace her marriage, her reputation, and even her sanity. With everything at stake and no one to confide in, Grace must find a way to save herself before it is too late.
The Look of Love by Sarah Jio
Born during a Christmas blizzard, Jane Williams receives a rare gift: the ability to see true love. Jane has emerged from an ailing childhood a lonely, hopeless romantic when, on her twenty-ninth birthday, a mysterious greeting card arrives, specifying that Jane must identify the six types of love before the full moon following her thirtieth birthday, or face grave consequences. When Jane falls for a science writer who doesn't believe in love, she fears that her fate is sealed. Inspired by the classic song, The Look of Love is utterly enchanting.
The Mermaid's Child by Jo Baker
In this fantastical novel, the acclaimed author of Longbourn brings us the magical story of a young girl in search of her mother...who just might be a mermaid. Malin has always been different, and when her father dies, leaving her alone, her choice is clear: stay, and remain an outsider forever, or leave in search of the mythical inheritance she is certain awaits her. Apprenticed to a series of strange and wonderful characters, Malin embarks on a grueling journey that crosses oceans and continents—from the high seas to desert plains—and leads to a discovery that she could never have expected. Beautifully written and hauntingly strange, The Mermaid's Child is a remarkable piece of storytelling, and an utterly unique work of fantasy from literary star Jo Baker.
This was a long week, I was laid up with pneumonia again so I've been working on Fairy Tale by Shanna Swendson for over a week.
This isn't a reflection of the book at all, because I'm really enjoying it, but I keep falling asleep!
I got a well needed facelift on my blog and I'm really pleased with how it came out. I went through Le Charmed Boutique, and she was really patient with me from the time I started (I really had no vision for what I wanted), and when I changed my theme halfway through. I highly recommend Rebekah and her Etsy shop.
I'm still trying to figure out a lot of the blogging things, such as Disqus comments, and what the difference is between Google followers vs. email followers vs. bloglovin followers vs. fb followers - you get the hint. I'm a mess. Bloggerland is confusing.
What are you reading? Any suggestions? Is everyone as excited as I am over the upcoming four day weekend?
The Sunday Post...November 23, 2014
I'm participating in The Sunday Post by The Caffeinated Book Reviewer. It's a chance to share News. A post to recap the past week, showcase books and things we have received and share news about what is coming up for the week on our blog.
This was a busier week than normal. I got some great reads and I'm super excited about them. I hope you see something below that interests you!
In the Mail:
Food: A Love Story by Jim Gaffigan....Bacon. McDonalds. Cinnabon. Hot Pockets. Kale. Fans of the stand-up comedian flocked to his New York Times bestselling book Dad is Fat to hear him riff on fatherhood, but now, in his second book, he will give them what they really crave—hundreds of pages of his thoughts on all things culinary(ish). Insights such as: why he believes coconut water was invented to get people to stop drinking coconut water, why pretzel bread is #3 on his list of the most important inventions of humankind (behind the wheel and the computer), and the answer to the age-old question "which animal is more delicious: the pig, the cow, or the bacon cheeseburger?"
I'm super excited about this book; I loved Dad is Fat, and I hope this will be just as funny.
From Edelweiss:
Hyacinth Girls by Lauren Frankel...."Thirteen-year-old Callie is accused of bullying at school, but Rebecca knows the kind and gentle girl she's raised is innocent. While Callie is eventually exonerated, threatening notes from her alleged victim, Robyn, begin to surface, and as the notes become suicidal, Rebecca is determined to save the unbalanced Robyn. As Rebecca navigates school disapproval and mean moms while trying to comfort Callie and help Robyn, she recalls her own intense betrayals and best friendships at that age. Then, her failure to understand those closest to her led to losing them forever, and she's determined that this story will end differently. But Rebecca has failed to understand what is really happening in Callie's life, and now Callie is in terrible danger.
This raw and beautiful story investigates the intensity of adolescent emotions and the complex identity of a teenage girl. Hyacinth Girls looks unflinchingly at how cruelty exists in all of us, and how our worst impulses can sometimes estrange us from ourselves - or save us."
This has received comparisons to "Reconstructing Amelia", which I loved. The inner working of the teen girls mind can be as chilling as any horror story.
The Precious One by Marisa de los Santos...."In all her life, Eustacia "Taisy" Cleary has given her heart to only three men: her first love, Ben Ransom; her twin brother, Marcus; and Wilson Cleary—professor, inventor, philanderer, self-made millionaire, brilliant man, breathtaking jerk: her father.
Seventeen years ago, Wilson ditched his first family for Caroline, a beautiful young sculptor. In all that time, Taisy's family has seen Wilson, Caroline, and their daughter Willow only once.
Why then, is Wilson calling Taisy now, inviting her for an extended visit, encouraging her to meet her pretty sister—a teenager who views her with jealousy, mistrust, and grudging admiration? Why, now, does Wilson want Taisy to help him write his memoir?
Told in alternating voices—Taisy's strong, unsparing observations and Willow's naive, heartbreakingly earnest yearnings—The Precious One is an unforgettable novel of family secrets, lost love, and dangerous obsession, a captivating tale with the deep characterization, piercing emotional resonance, and heartfelt insight that are the hallmarks of Marisa de los Santos's beloved works."
I couldn't put Love Walked In down, so I'm really looking forward to this.
From Net Galley:
The Half Brother by Holly LeCraw....When Charlie Garrett arrives as a young teacher at the shabby-yet-genteel Abbott School, he finds a world steeped in privilege and tradition. Fresh out of college and barely older than the students he teaches, Charlie longs to leave his complicated southern childhood behind and find his place in the rarefied world of Abbottsford. Before long, he is drawn to May Bankhead, the daughter of the legendary school chaplain; but when he discovers he cannot be with her, he forces himself to break her heart, and she leaves Abbott, he believes forever. He hunkers down in his house in the foothills of Massachusetts, thinking his sacrifice has contained the damage, and controlled their fates.
But nearly a decade later, his peace is shattered when his golden-boy half brother, Nick, comes to Abbott to teach, and May returns as a teacher as well. Students and teachers alike are drawn by Nick's magnetism, and even May falls under his spell; when Charlie pushes his brother and his first love together, with what he believes are the best of intentions, a love triangle ensues that is haunted by desire, regret, and a long-buried mystery.
With wisdom and emotional generosity, LeCraw takes us through a year that transforms both the teachers and students of Abbott forever. Skillfully plotted, lyrical, and ambitious, The Half Brother is a powerful examination of family, loyalty, and love.
I love boarding school books. I don't know why. But I do.
Up and In by Deborah Disney....Maria and Joe have saved every available penny to give their daughters Kate and Sarah the best education possible, which to them means attending the most exclusive girls school in the state. But when Kate befriends the spoilt and moody Mirabella, Maria finds herself thrust into a high society of champagne-swilling mother-istas she hasn't budgeted for. Saturday morning netball is no longer a fun mother-daughter outing, but a minefield of social politics.
For every woman who has ever felt she may be wearing the wrong shoes, this is a book that will remind you - you're not alone.
Can't wait to read this since I frequently feel like I'm wearing the wrong shoes.
Top Ten Tuesday - Sequels I Can't Wait to Get
I love to participate in The Broke and the Bookish's Top Ten Tuesday. It's a blog round up with a new subject every week.
The theme this week is Top Ten Sequels I Can't Wait To Get. I'm sorry to tell you that this Top Ten list is only truly my Top Five. But they are five must reads for me!
1. First Frost by Sarah Addison Allen is coming January 20, 2015. It's the long-awaited sequel to Garden Spells, which was one of my most favoritest (I know it's not a word) books by S.A.A.
2. Shopaholic to the Stars by Sophie Kinsella. This came out in October, but I still haven't managed to pick it up. I love this series, but I've had a lot of great books sent to me for review, so I just haven't gotten to Becky Bloomwood's latest escapades yet.
3. Where She Went by Gayle Forman. This isn't a new sequel, but I really want to read the sequel to If I Stay. I loved the first book and I really, really liked the movie.
4. The Book of Life by Deborah Harkness. This is another sequel I've been looking forward to, but it's been on my iPad since July, when it came out. Too many books, not enough time. Plus, this is the last one, so it makes me sad to think about finishing it.
5. The Rosie Effect by Graeme Simsion. I'm putting this sequel on the list even though I've already read it. If you haven't read The Rosie Project, you need to order it right now. One of my favorite books last year.
Difficult Husbands (a difficult read) by Mary DeLaszlo
What They Say.....Three friends. One surprise inheritance. And the perfect plan to deal with troublesome husbands at Christmas time…
Newly divorced Lorna is struggling to adjust to life on her own. When she discovers that her beloved godfather has left her the grand (and crumbling) Ravenscourt House in the heart of Sussex, she soon has a project on her hands.
Nathan sells delicious goodies at Mulberry Farm. When he meets Lorna at a Christmas market, neither of them can ignore the chemistry. But as they get to know one another, Lorna wants to know one thing – is he after her or the house?
Together with Gloria – whose marriage to alcoholic Adrian has hit rock bottom, and Rosalind – struggling to deal with her womanising husband Ivan, the three friends hatch a plan. They'll ditch their difficult husbands at Ravenscourt House and enjoy stress-free Christmases with their families.
But nothing is ever that simple…
What I Say....I generally try to stay positive in my reviews. But this book was really difficult to get through.
Newsflash, these aren't difficult husbands, they are jerks that should have been left long before the story started. A non-functioning alcoholic, a guy who has affairs and brings his mistresses home for Christmas, and a man who went on "happy pills" (apparently, the author is quite opposed to anti-depressants), which then made him so emotionless (?) that he left his family for a younger woman. Yeah, I'd say they bypassed "difficult" a while back.
So the only way that these women can spend the holiday concentrating on their adult children is to get the husbands to stay at Ravenscourt, the large English estate that Lorna has just inherited. How their physical distance matters is uncertain, since the husbands are all that these women can think or talk about.
Normally, I love chick lit that involves inheriting large English estates (see A Good Year for Roses), but this was pretty slow, painful reading. The women appeared to be hopeless and elderly, although they kept talking about how much younger they were than their husbands.
The daughter's pregnancy with a man who is struggling with infertility with his WIFE was a dead end storyline. I didn't feel any sympathy for her at all, and the boyfriend was another "difficult husband" who had to call his wife when he was overwhelmed by the premature arrival of his daughter. If I was his wife, I would have happily come to the hospital and beaten him over the head.
Depressing stories, unsympathetic characters, even the twist involving Nathan, the (surprise!) rich baker who lives with his mom, didn't do anything to save this book.
Stacking the Shelves, November 16, 2014
Stacking The Shelves is all about sharing the books you are adding to your shelves, may it be physical or virtual. This means you can include books you buy in physical store or online, books you borrow from friends or the library, review books, gifts…this is a weekly blog roundup hosted by Tynga's Reviews.
A Fairy Tale by Shanna Swendson.... Once upon a time, a girl named Sophie Drake danced with the fairies in the woods behind her grandparents' Louisiana home. But she closed the door to the fairy world and turned her back on the Fae when they tried to steal her little sister Emily. Fourteen years later, Sophie heads to New York City on a desperate mission. Emily, now an up-and-coming Broadway actress, has gone missing. Only Sophie suspects the Fae.
Now Sophie has her work cut out for her. Emily's abduction is part of a larger plot involving the missing Queen of the fairy realm. An upstart fairy is making a bid to assume control of the entire Realm, unite the fairies, and become master over the human world. To free her sister, Sophie must derail this power scheme and find the true Queen of the Realm.
That's a lot for a small-town ballet teacher to tackle, but with the unlikely aid of her sometimes flighty sister, a pair of elderly shopkeepers with a secret, a supremely lazy (but surprisingly knowledgeable) bulldog, and a wounded police detective searching for his own missing person, she just might prevail–if she can force herself to confront her own past and face her true nature.
Scary Mommy's Guide to Surviving the Holidays by Jill Smokler.... From New York Times bestselling author and acclaimed "Scary Mommy" blogger Jill Smokler comes a funny and practical guide filled with essays, recipes, and tried-and-true tips sure to get any parent through the holiday season—without losing your marbles.
Ah, the holidays: a time of joy, celebration, serenity, and peace…
Unless, of course, you have whiny, screaming children demanding presents, attention, and a personal appearance by Santa or Judah the Maccabee. Then you're screwed.
But wait, there's hope: Scary Mommy Guide to Surviving the Holidays to the rescue!
Yes, in this handy holiday guide, you'll find everything you need to survive the fall/winter rush of cheer in style, and without having a mental breakdown. From relatable, hilarious essays on everything from the Santa myth to being seated at the dreaded kids' table, to easy-to-follow recipes that might include just a little something special to take the edge off (can anyone say Kahlua?), to fun and accessible gift ideas, this book is your ticket to peace of mind—and a laugh—during the busy, crazy holiday season!
The Moon Sisters by Therese Walsh....A beautiful coming-of-age novel about two sisters on a journey to forgive their troubled mother, with a sheen of almost magical realism that overlays a story about the love of a family, and especially between sisters.
Therese Walsh's poignant and mesmerizing novel is a moving tale of family, love, and the power of stories. After their mother's probable suicide, sisters Olivia and Jazz are figuring out how to move on with their lives. Jazz, logical and forward-thinking, decides to get a new job, but spirited, strong-willed Olivia, who can see sounds, taste words, and smell sights, is determined to travel to the remote setting of their mother's unfinished novel to say her final goodbyes and lay their mother's spirit to rest.
Though they see things very differently, Jazz is forced by her sense of duty to help Olivia reach her goal. Bitter and frustrated by the attention heaped on her sunny sister whose world is so unique, Jazz is even more upset when they run into trouble along the way and Olivia latches to a worldly train-hopper. Though Hobbs warns Olivia that he's a thief who shouldn't be trusted, he agrees to help with their journey. As they near their destination, the tension builds between the two sisters, each hiding something from the other, and they are finally forced to face everything between them and decide what is really important.
Posted by Unknown 1 comment: Labels: stacking the shelves
The Walls Around Us by Nova Ren Suma...the book that left me confused
What They Say.... Ori's dead because of what happened out behind the theater, in the tunnel made out of trees. She's dead because she got sent to that place upstate, locked up with those
monsters. And she got sent there because of me."
The Walls Around Us is a ghostly story of suspense told in two voices—one still living, and one long dead. On the outside, there's Vee, an eighteen-year-old dancer days away from the life of her dreams when something threatens to expose the shocking truth of her achievement. On the inside, within the walls of a girls' juvenile detention center, there's Amber, locked up for so long she can't imagine freedom. Tying these two worlds together is Orianna, who holds the key to unlocking all the girls' darkest mysteries: What really happened when Orianna stepped between Violet and her tormentors?
What really happened on two strange nights at Aurora Hills? Will Amber and Violet and Orianna ever get the justice they deserve – in this life or in another one? We hear Amber's story and Violet's, and through them Orianna's, first from one angle, then from another, until gradually we begin to get the whole picture – which is not necessarily the one that either Amber or Violet wants us to see.
What I Say....This book could have easily been a 5 star read. Could have, but wasn't.
The storyline was good and the characters were interesting. But I got lost in the author's desire to write an important book. The prose was heartfelt (you were aware of the author the whole time), but it tended to go on too long and seemed to ramble at times, rather than building up to the suspense of the story.
There were no clues in the beginning to help you understand that you are reading a ghost story, so when the book made that sudden turn about a third of the way through, it didn't shock me as much as it confused me. I had to go back a few pages and try to make sense of what was happening.
At the end, the exchange of girls (I won't say anymore, no spoilers here) didn't make any sense and didn't seem realistic (I know I'm reviewing a ghost story, so should I really expect realism? But don't the best ones make it seem like it could happen?). It felt like a rather abrupt happy (?) ending.
I still give it 3 stars, it was a good book, but it was frustrating as a reader to feel like it could have been a GREAT book.
I received a free ARC from NetGalley in exchange for an honest review. The only kind I give.
The French for Christmas by Fiona Valpy
What They Say....Evie used to LOVE Christmas, but this year she can't wait for the tinsel and
presents to be a distant memory.
When her best friends offer the use of their cottage in the beautiful French countryside, Evie jumps at the chance. With her soon-to-be-ex-husband, celebrity chef Will Brooke, plastered over the news with his latest 'love interest', leaving the country seems like the perfect plan.
Armed with her French grandmother's tattered notebook of recipes, Evie is determined to ignore Christmas altogether and bake herself back to happiness.
And when Evie meets her next-door neighbour – the très gorgeous doctor Didier – she finds a very willing taste-tester. But is it possible that he could be interested in more than just her Tarte Tatin?
With snow falling, a special Réveillon dinner and a little Christmas magic in the air, could Didier even be the one to thaw Evie's heart? Or will a visit from the ghost of Christmas past change everything?
What I Say.....My second book of Christmas 2014, and this one was a winner.
Evie flees London, choosing to stay in rural France, without Internet, TV or even reliable electricity, in order to avoid Christmas. She has recently suffered a stillbirth and her marriage has fallen apart in the wake of her grief. Being alone feels like the only way to survive the holidays.
Once she arrives in the countryside, she discovers some kindly neighbors, and a doctor who lives next door that looks like Bradley Cooper (I love fantasy chick lit! This book reminded me of one of my favorite Christmas movies, "The Holiday". I want to live this story). As they get to know each other and share their mutual grief, an evolving relationship helps them both to look forward to the future and new love.
However, a kind email written to her estranged husband in a moment of forgiveness causes a surprise to turn up on Evie's doorstep during Christmas lunch.
Sometimes I don't understand why I love this formula of writing so much. It's so unrealistic. Who moves to the country, finds Dr. Bradley Cooper next door and falls in love? I don't care, it's my escape. A girl can dream........
The Divorce Papers by Susan Rieger
What They Say....Twenty-nine-year-old Sophie Diehl is happy toiling away as a criminal law associate at an old-line New England firm, where she very much appreciates that most of her clients are trapped behind bars. Everyone at Traynor, Hand knows she abhors face-to-face contact, but one week, with all the big partners out of town, Sophie is stuck handling the intake interview for the daughter of the firm's most important client.
After eighteen years of marriage, Mayflower descendant Mia Meiklejohn Durkheim has just been served divorce papers in a humiliating scene at the popular local restaurant, Golightly's. Mia is now locked and loaded to fight her eminent and ambitious husband, Dr. Daniel Durkheim, Chief of the Department of Pediatric Oncology at Mather Medical School, for custody of their ten-year-old daughter Jane. Mia also burns to take him down a peg. Sophie warns Mia that she's never handled a divorce case before, but Mia can't be put off. The way she sees it, it's her first divorce, too. For
Sophie, the whole affair will spark a hard look at her own relationships—with her parents, colleagues, friends, lovers, and, most important, herself.
A rich, layered novel told entirely through personal correspondence, office memos, e-mails, articles, handwritten notes, and legal documents, The Divorce Papers offers a direct window into the lives of an entertaining cast of characters never shy about speaking their minds. Original and captivating, Susan Rieger's brilliantly conceived and expertly crafted debut races along with wit, heartache, and exceptional comedic timing, as it explores the complicated family dynamic that results when marriage fails—as well as the ever-present risks and coveted rewards of that thing called love.
What I Say....I wanted to enjoy this book. And I may very well if it had been told in a different format. But all of the bills, contracts, intake forms, law books, and correspondence really took away from the story for me.
I've always enjoyed the way Sophie Kinsella uses pieces of correspondence scattered throughout her Shopaholic books, but the way it was done in The Divorce Papers was just too disconnected. I think it can work in a lighthearted way, but not when you are telling the story of a child who is having a breakdown about the thought of living with her father.
It had promise; I think the story deserved to be told without committing so hard to the writing gimmick.
Christmas on Chestnut Street by Nancy Thayer
What They Say....As Christmas draws near, Felicia returns to her family's home on the island to marry her adventurous, rugged boyfriend, Archie. Every detail is picture-perfect for a dream wedding: the snow-dusted streets, twinkling lights in the windows, a gorgeous red and white satin dress. Except a lavish ceremony is not Felicia's dream at all; it's what her mother, Jilly, wants. Jilly's also worried that her daughter's life with daredevil Archie will be all hiking and skydiving. Wondering if their handsome neighbor Steven Hardy might be a more suitable son-in-law, Jilly embarks on a secret matchmaking campaign for Felicia and the dashing stockbroker.
As the big day approaches and Jilly's older daughter, Lauren, appears with rambunctious kids in tow, tensions in the household are high. With the family careening toward a Yuletide wedding disaster, an unexpected twist in Nancy Thayer's heartwarming tale reminds everyone about the true meaning of the season.
What I Say....This was my Christmas 2014 kickoff book. It was probably not the best choice, because it didn't get me too excited.
Loved the description of Nantucket, but the storyline was uneven. The initial setup made it seem like Jilly was going to interfere with her daughter's upcoming wedding by trying to hook her up with her high school love again. Nothing of the sort ever took place - I kept waiting for something to happen, nothing happened.
The dialogue was simplistic and Jilly was an unsympathetic character to me. I couldn't tell whether she was a total PIA mother, or someone who was going to experience a growth moment. Jilly as a character was uneven. She was plotting to break up her daughter's marriage, then applauding her soon to be son-in-law. She wanted her family to have the perfect dinner, but then shrugged when the grand kids spilled jambalaya on the carpet.
One thing I will say is that this was a quick read. I don't think I'm the author's target audience.
I do think maybe my stepmother might enjoy this book. I imagine her mind works a lot like Jilly's around the holidays. Judging....always judging......
Thank you NetGalley for the ARC! My opinion is given honestly in exchange.
Luckiest Girl Alive by Jessica Knoll - a rare 5 star read
What They Say.....In a riveting debut novel that reads like Prep meets Gone Girl, a young woman is determined to create the perfect life—husband, home, and career—until a violent incident from her past threatens to unravel everything and expose her most shocking secret of all. Twenty-eight-year-old New Yorker Ani FaNelli seems to have it all: she's a rising star at The Women's Magazine, impossibly fit, perfectly groomed, and about to marry Luke Harrison, a handsome blueblood. But behind that veneer of perfection lies a vulnerability that Ani holds close and buries deep—a very violent and public trauma from her past that has left her constantly trying to reinvent herself. And only she knows how far she would go to keep her secrets safe.
When a documentary producer invites Ani to tell her side of the chilling incident that took place when she was a teenager at the prestigious Bradley School, she hopes it will be an opportunity for public vindication. Armed with the trappings of success—expensive clothes, high-powered byline, a massive engagement ring—she is determined to silence the whispers of suspicion and blame from her past, and prove once and for all how far she's come since Bradley. She'll even let them film her lavish wedding on Nantucket, the final step in her transformation.
But perfection doesn't come without cost. As the wedding and filming converge, Ani's meticulously crafted facade begins to buckle and crack—until an explosive revelation offers her a final chance at redemption, even as it rocks her picture-perfect world.
What I Say....Wow. This was such a unique book.
When I finished it, I opened up Goodreads and gave it 4 stars. But I couldn't stop thinking about it that night and into the next morning. After I found myself spending my whole shower thinking about this book, I realized this book was really a 5 star read for me. And I don't give 5 stars often.
I think initially I gave it 4 stars because through the majority of the book, I didn't like any of the characters, including Ani. And by the end, I only felt a slight thaw towards her, but it was better than my original disdain.
The story weaves between Ani's carefully cultivated New York image, and her embarrassingly middle class teen years in Philadelphia, raised by a distant father and a social climber mother.
As you follow her story, you see that the perfect life that she is attempting to build herself isn't making her happy, but she doesn't really seem interested in being happy. I found Ani's emotional void to be one of the strongest themes in the book.
From the beginning, Ani refers to the incident that happened to her in high school. And I thought I knew where it was going, but I was completely taken off guard when it finally happened.
I calmly read this line, "I was thinking, how strange that it's blond, baby thin, when the hair everywhere else was coarse and dark, when Dean went sideways in the air. Why is Dean jumping?"
and then everything changed, even for me, the reader.
I don't want to discuss this book in any more detail because I think that if you know too much, it really would spoil the experience of this book for you.
I would really like to hear what other readers think of this book.
Top Ten Books I Want to Reread
This week's list is the Top Ten Books I Want To Reread (or if you don't reread, would in an ideal world). I don't have a lot of time to reread books, when there are so many new books to read! But these are the ones I would take to a desert island.......
1. Gone With the Wind by Margaret Mitchell. This is one that I do reread every few years. It's just one of the best books ever, ever, ever.
2. Gone, Baby, Gone by Dennis Lehane. Gripping, moving. It will make you consider morality – when can doing something wrong actually mean you are doing something right? No happy endings here.
3. The Shell Seekers by Rosamunde Pilcher. My favorite WWII book. I have gotten lost in this over and over.
4. No Flying in the House by Betty Brock. Haha! This is the first book I remember reading over
and over, I think I was about 8 years old. I still want to be a fairy with a magic, talking dog.
5. Bridget Jones's Diary by Helen Fielding. A winter classic for me. One of the few times that the movie was as good as the book.
6. To Have and To Hold by Jane Green. I love her books and would reread all of them given unlimited time. Thankfully, she's got a new one coming out soon.
7. Such a Pretty Fat by Jen Lancaster. This was one of the first funny non-fiction books I ever
read. It cracked me up. I bought it off a table at Borders. Does Borders still exist?
8 & 9. Something Borrowed and Something Blue by Emily Giffin. I would happily reread, but I don't think you can read Something Borrowed without immediately starting Something Blue. I think I ended up liking Something Blue better.
10. Little Women by Louisa May Alcott. Always a classic, always a book to lose yourself in. I want to be a Beth, but I'm afraid I'm a Jo.
Agil Mammadov may refer to:
Agil Mammadov (footballer, born 1972)
Agil Mammadov (footballer, born 1989)
Agil Mammadov (soldier)
<?php
namespace Chrisguitarguy\StaticMethodLoader\Resolver\Fixtures;
class Loader
{
public static function load()
{
// ...
}
}
Museo Nacional de Bellas Artes may refer to:
the National Museum of Fine Arts in Buenos Aires
the National Museum of Fine Arts in Havana
the National Museum of Fine Arts (Chile) in Santiago de Chile
Enea Mihaj (born 5 July 1998) is an Albanian footballer who is under contract with the Portuguese club FC Famalicão.
Career
Club
Mihaj was born to Albanian parents on the Greek island of Rhodes. He grew up in Thessaloniki. After a few years, he moved back to Rhodes with his family. He began his career in the youth academies of Panetolikos in Agrinio, western Greece.
In 2016, Mihaj was called up to the first team for the first time for the match against Kallithea, but remained an unused substitute on the bench. On 11 January 2017, Mihaj made his debut as a professional footballer against PAOK, coming on for Giorgos Mygas in the 87th minute.
In June 2019, Mihaj signed a four-year contract with PAOK Thessaloniki. He scored his first goal in July 2020 against Olympiakos. In 2022, he moved to FC Famalicão in Portugal.
National team
On 10 September 2018, he was called up to the squad of the Albanian national team for the UEFA Nations League matches against Israel and Scotland. In the match against Scotland, Mihaj came on for Frédéric Veseli in the final minutes.
External links
Enea Mihaj in the database of the Albanian Football Association.
package com.github.dockerjava.core.exec;
import com.fasterxml.jackson.core.type.TypeReference;
import com.github.dockerjava.api.command.PruneCmd;
import com.github.dockerjava.api.model.PruneResponse;
import com.github.dockerjava.core.DockerClientConfig;
import com.github.dockerjava.core.MediaType;
import com.github.dockerjava.core.WebTarget;
import com.github.dockerjava.core.util.FiltersEncoder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class PruneCmdExec extends AbstrSyncDockerCmdExec<PruneCmd, PruneResponse> implements PruneCmd.Exec {
private static final Logger LOGGER = LoggerFactory.getLogger(PruneCmdExec.class);
public PruneCmdExec(WebTarget baseResource, DockerClientConfig dockerClientConfig) {
super(baseResource, dockerClientConfig);
}
@Override
protected PruneResponse execute(PruneCmd command) {
WebTarget webTarget = getBaseResource().path(command.getApiPath());
if (command.getFilters() != null && !command.getFilters().isEmpty()) {
webTarget = webTarget.queryParam("filters", FiltersEncoder.jsonEncode(command.getFilters()));
}
LOGGER.trace("POST: {}", webTarget);
PruneResponse response = webTarget.request().accept(MediaType.APPLICATION_JSON)
.post(null, new TypeReference<PruneResponse>() { });
LOGGER.trace("Response: {}", response);
return response;
}
}
For way too long, students were defined and confined by learning styles. It most likely all began as a positive move toward helping students grow as learners, but it was one of those times educators took some research and went the wrong way with it. We (I include myself in this issue) would tell certain students they were visual learners, and others that they were auditory learners.
It's not that we don't have preferred methods of learning, but too often our students are boxed in by their learning styles as if they didn't have more than one. This issue caused me to write about the Myth of Learning Styles a few years ago. It became a big issue because students, and their parents and teachers, began to believe that students only had one way of preferred learning which prevented them from strengthening other styles of learning.
Howard Gardner dealt with this issue a few years ago as well. In his seminal work around Multiple Intelligences, it became popular to tell students they were linguistic learners or bodily-kinesthetic. Multiple Intelligences has been such an important contribution to education, but it started to get used improperly to the point that Gardner had to address the issue in this Washington Post blog a few years ago as well.
And many of us know the work of Daniel Willingham, who co-authored this article on learning styles with Cedar Riener in the higher education magazine called Change. Willingham has long been at the forefront of distilling the myth of learning styles.
What these cases show us, is that in an effort to help our students, we seem to like to put them in boxes. We package them up and categorize them in an effort to help them, which really just goes on to enable and force them into one or two types of learning. And then we wonder why they don't have a growth mindset (Why the Growth Mindset Doesn't Work).
Perhaps what we need are learning strategies and not be consumed by learning styles.
In Learning strategies: a synthesis and conceptual model written by John Hattie and Gregory Donoghue, they explore 4 different strategies and three necessary components to help lead to successful learning. For full disclosure, I work with John Hattie as a Visible Learning trainer, but Hattie's research has had a profound impact on my learning. Hattie's research is one of those areas that after you learn it, you simply cannot unlearn it.
Boekaerts, for example, argued for three types of learning strategies: (1) cognitive strategies such as elaboration, to deepen the understanding of the domain studied; (2) metacognitive strategies such as planning, to regulate the learning process; and (3) motivational strategies such as self-efficacy, to motivate oneself to engage in learning.
Hattie and Donoghue offer further information when they write, "Dignath, Buettner and Langfeldt added a fourth category--management strategies such as finding, navigating, and evaluating resources." These four strategies together focus on the learning, and the next three areas support how to do it. Equally as important as the student being in the position of the learner, which holds the teacher in the position as the...well...teacher, there are three other components that help increase the status of the learner and put them in the position of something more important...which is the self-regulated learner. Or, as Hattie refers to them...the assessment capable learner.
Too often when we walk into classrooms, or when parents talk to their children about their day at school, we ask, "What did you do today?" A simple way to change the dialogue around from doing stuff all day to learning (both curating and consuming) we need to ask, "What did you learn today?"
In order to inspire students to learn, Hattie and Donoghue suggest that they have to have the skill, will and thrill to dive into surface and deep level learning. The following are examples of all three of those important components.
The Skill - "The first component describes the prior achievement the student brings to the task. As Ausubel claimed 'if I had to reduce all of educational psychology to just one principle, I would say this: The most important single factor influencing learning is what the learner already knows. Ascertain this and teach him accordingly.'"
The Will - "Dispositions are more habits of mind or tendencies to respond to situations in certain ways. Claxton claimed that the mind frame of a 'powerful learner' is based on the four major dispositions: resilience or emotional strength, resourcefulness or cognitive capabilities, reflection or strategic awareness, and relating or social sophistication."
The Thrill - "Biggs' learning processes model, which combines motivation (why the student wants to study the task) and their related strategies (how the student approaches the task). He outlined three common approaches to learning: deep, surface and achieving. When students are taking a deep strategy, they aim to develop understanding and make sense of what they are learning, and create meaning and make ideas their own."
Hattie has over 150 influences on learning, each of which comes with an average effect size from the massive amounts of meta-analyses that he has reviewed, studied and dissected over the last few decades. When the effect size is a .40 (hinge point) we know that it equates to a year's worth of growth for a year's input. Anything less equates to less growth (and we need to look at why and not necessarily dump it from our educational vernacular) and anything above the hinge point could lead to more than a year's worth of growth for a year's input.
We hear and read all about influences like classroom discussion (.82), assessment capable learners (1.44), collective teacher efficacy (1.57) and feedback (.75). Additionally, there are also subsets of influences that are accompanied by effect sizes, all of which you can read about in this new research paper by Hattie and Donoghue.
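To make the numbers above concrete: an effect size is a standardized mean difference, so a .40 means the groups differ by four-tenths of a standard deviation. Hattie's figures come from aggregating many meta-analyses, but as a rough, hypothetical sketch (the function name and the score lists below are invented for illustration, not drawn from his data), a single-study effect size like Cohen's d can be computed like this:

```python
import math

def cohens_d(group_a, group_b):
    """Standardized mean difference between two lists of scores (illustrative only)."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    # Sample variances (divide by n - 1)
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    # Pool the two variances, weighted by degrees of freedom
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Made-up post-intervention vs. comparison scores
post = [75, 80, 85, 90, 70]
comp = [65, 70, 75, 80, 60]
print(round(cohens_d(post, comp), 2))  # → 1.26, well above the .40 hinge point
```

Again, this is only the basic statistic; translating an effect size into "a year's worth of growth" is Hattie's interpretive framework, laid out in the paper itself.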
More importantly, we have to make sure we are not so concerned about helping our students that we hurt them by putting them in boxes labeled with learning styles, and we have to listen to Hattie when he says, "Know Thy Impact" and understand if we are using different strategies, teaching them to our students so they know what to do when we aren't there, and gather evidence to see how well they are working.
Peter DeWitt, Ed.D. is the author of several books including the forthcoming Collaborative Leadership: 6 Influences That Matter Most (September, 2016. Corwin Press). Connect with Peter on Twitter.
Pierre Dan, known as Père Dan (born around 1580, died October 1649), was a French religious figure and historical chronicler.
Biography
Pierre Dan was born around 1580. A bachelor of theology, he was simultaneously a religious of the Mathurin convent at the château of Fontainebleau, at least from 1624 onward, and minister of the house of l'Honneur-Dieu at Villeneuve-aux-Ânes (in the present-day commune of Brou-sur-Chantereine). The latter house had in fact been suppressed, but the dignity was preserved, a custom among the Trinitarians that carried the right to vote at the general chapter and the collection of revenues.
Dan also carried out a redemption of captives during a "voyage to Barbary"; in the meantime, he was named minister of Fontainebleau, an appointment he learned of upon his return to France. In 1642, he published Le Trésor des merveilles de la maison royale de Fontainebleau with Sébastien Cramoisy in Paris. He is thus the author of the first complete monograph on a château in France, a work describing the architecture and history of Fontainebleau that became an essential source.
He died in October 1649 and was buried in the church of Saint-Louis in that locality (then a market town). His tombstone, bearing a funerary inscription in his praise, has since disappeared.
Works
1637: Histoire de Barbarie et de ses corsaires
1642: Le Trésor des merveilles de la maison royale de Fontainebleau
Eponymy
In Fontainebleau, a street is named rue Pierre-Dan. Formerly the rue du Colonel-Mondain, it was renamed in 1927 at the initiative of Eugène Plouchart.
See also
History of Seine-et-Marne
Cimetière des Mathurins
Date de naissance incertaine (XVIe siècle)
Décès en octobre 1649
Religieux catholique français
Historien français du XVIIe siècle
Château de Fontainebleau
Personnalité inhumée à Fontainebleau
Read below to learn the history and latest updates on the issues that will be discussed.
Read SAMRU's full statement here. Since legalization on October 17, 2018, Mount Royal has complied with the City of Calgary's bylaw prohibiting public consumption of cannabis. SAMRU believes that the City of Calgary, and MRU by extension, should be pursuing a harm reduction approach to educate students about the potential risks associated with using cannabis. As student safety is our primary concern, education, destigmatization, and safe areas for consumption are our top priorities for cannabis use. We believe that Mount Royal University should apply for and be granted an exemption from the City of Calgary bylaws to allow for a designated cannabis area so that we can promote harm reduction on campus, as we do with alcohol. Mount Royal supports SAMRU in this initiative.
$63 million is needed to renovate the old library space. Mount Royal has a Campus Master Plan that demonstrates their plan for the old library space. The intention for this space is to create a "student-centred space" where campus services are consolidated. The space had been temporarily used as the Campus Bookstore in Fall 2018 while the original bookstore was being renovated. Now that the old bookstore has reopened, there are no plans to use the space on a temporary basis for anything else due to the cost of keeping that space open.
We are hopeful that in the next provincial budget we will see funds allocated to fully renovating this space for students and SAMRU will continue to advocate to the government during this time. Until then, SAMRU is working with Mount Royal to keep students informed of its progress through campus outreach.
Mount Royal developed their first Sexual Violence Policy in 2017 at the request of the Minister of Advanced Education for Alberta. All universities in Alberta were required to create a policy that also allowed for the collection of reported incident data. Though the foundation has been set for Mount Royal's Sexual Violence policy, SAMRU believes it doesn't address all the issues; more work needs to be done to implement the policy on campus as well as provide support services for survivors. The department of Campus Equity and Meaningful Inclusion hired a Sexual Violence Response & Awareness Coordinator to their staff to help implement this policy. However, SAMRU believes one staff member is not enough to meet the needs of the student body.
The Provincial government's Bill 19 allows for more affordability and accessibility for post-secondary students in terms of tuition and fees. The bill regulates tuition so institutions can no longer hike tuition at will, and it caps increases at the Consumer Price Index (roughly 2%). Should the university attempt to increase tuition by more than inflation annually, the student members of the Board of Governors have to vote in favour of the increase for it to be approved by the Minister of Advanced Education and implemented. There will be a fifth and final year of tuition freeze for the 2019/2020 year. SAMRU is hopeful that the university will receive backfill funding from the government to cover the revenue lost to the tuition freeze.
International students will also benefit from this legislation, as they will not be subjected to unpredictable annual tuition increases; prospective students will know what their full degree will cost at the outset. Institutions are no longer allowed to arbitrarily increase tuition after a student has started their program. This does not apply to international students who are already enrolled, only to prospective students.
Any new mandatory non-instructional fees will have to be presented to both the institution and to students with the benefits it will bring to the program. It cannot be revenue generating and must be specific to a transparent outcome. SAMRU fees are not included in mandatory non-instructional fees.
Under previous legislation, the MRU Board of Directors only had one student, the SAMRU President, sit on its board. Under the new PSLA, two students will be able to sit on the Board of Directors. The SAMRU VP Student Affairs will sit in this second seat for a period of two years, subject to the evaluation of the Student Governing Board, after which the position may be opened to another student from the general student population.
The 2019 Get Out the Vote campaign launches January 18th! SAMRU representatives and volunteers will be out and about on campus collecting voter pledges for the upcoming Provincial election. Students who pledge to vote are consenting to receive information regarding advance polls on campus, election dates, and any important information pertaining to the Provincial election. This campaign is non-partisan, and we want students of all political stripes to pledge to vote in the election. You can check out the Council of Alberta University Students' election priorities here.
Question still not answered? Submit your question(s) in advance to be answered at the Town Hall by clicking here. Your question will be read at the Town Hall whose topic(s) match the nature of the question.
This bag of sequins is perfect for those wintery projects! You will find white, pearl, clear, shades of blue and a pop of red. Also included are some fun iridescent snowflakes. A variety of 5mm and 6mm are in this mix. Approximately 250-300 sequins.
\section{Introduction}
The interplay of competing interactions and quantum fluctuations is known to allow for interesting phases to emerge in many-body quantum systems. This route towards novel phases of matter has been explored intensively in recent years, in particular in the field of low-dimensional quantum magnetism~\cite{schollwoeck04}, and in the
context of ultra-cold quantum gases on optical lattices, where one gains a high degree of control of the interaction strength~\cite{bloch08}.
The dominant inter-particle potentials in such systems are typically described by two-body interaction and exchange terms.
It thus appears fruitful to also explore realistic set-ups of many-body quantum systems that are dominated -- via engineered interaction potentials -- by multi-body interaction terms, e.g. of three-particle type.
Indeed, recently, polar molecules have been proposed as promising
candidates towards realizing such many-body systems.
Driven by significant progress towards producing degenerate gases of polar molecules~\cite{sage05,ni08,deiglmayr08,lang08,danzl08}, various proposals have been put forward, how to drive ultra-cold polar molecules into regimes of strong many-body interaction effects~\cite{kotochigova06,micheli06,wang06a,buechler07,lahaye09}.
In a recent work~\cite{buechler07}, an effective interaction potential
was derived for polar molecules on an optical lattice in the presence of static electric and microwave fields. It was found that, upon appropriately tuning the external fields, the interactions between the polar molecules
become characterized by extended strong two- as well as three-body interactions.
Here, we derive the ground state phase diagram of these systems with dominating three-body interactions
using quantum Monte Carlo simulations, exact finite-cluster calculations and the tensor network renormalization
group. We find the presence of solid structures at unconventional filling factors, with unit cells much larger than
the period of the underlying lattice, and a macroscopic ground state degeneracy in the classical limit.
In the quantum regime, these degeneracies of the low energy sector are lifted, giving rise to valence bond
crystal states.
The interaction potential for polar molecules within an optical lattice, in the parameter regime where both three-body and two-body interactions
are present, takes the form
\begin{equation}
V_\mathrm{eff} = \frac{1}{2} \sum_{ij} V_{ij} n_i n_j + \frac{1}{6} \sum_{ijk} W_{ijk} n_i n_j n_k,
\end{equation}
with $V_{ij}=V/r^6_{ij}$ and $W_{ijk}=W/(r^3_{ij} r^3_{jk} + \mathrm{perm.})$. Here, $r_{ij}$ denotes the spatial separation between particles on lattice sites $i$ and $j$, and $n_i$ the local density at site $i$. The two-body interaction $V$ and the three-body interaction $W$
can be tuned to be of similar strength, $V\gtrsim W$~\cite{buechler07}.
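As a rough numerical check of how quickly these tails decay (a sketch; distances are measured in units of the nearest-neighbor spacing $a=1$, with the next-nearest-neighbor distance $\sqrt{3}\,a$ on the honeycomb lattice):

```python
# Suppression of the extended two-body tail V_ij = V / r^6 on the
# honeycomb lattice, in units of the nearest-neighbor spacing a = 1.
# The next-nearest-neighbor distance is sqrt(3), so that term is
# already suppressed by a factor (sqrt(3))^6 = 27; the three-body
# weights fall off comparably fast with the pair distances.
r_nn = 1.0
r_nnn = 3 ** 0.5

two_body_suppression = (r_nn / r_nnn) ** 6  # = 1/27, about 3.7%
print(two_body_suppression)
```

This factor-of-27 suppression is what motivates truncating the interactions to nearest-neighbor terms below.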
In the following, we consider in particular the case of bosonic polar molecules. In this case, the suppressed tunneling of a particle to an already occupied site needs to be accounted for by a hard-core constraint on the bosonic occupations~\cite{buechler07}.
Since the extended interactions decay rapidly with the inter-particle separation (c.f. \Fref{fig:interactions}), we truncate the interactions after the leading terms, thus arriving at an extended Bose-Hubbard model with two- and three-body nearest-neighbor interactions only,
\begin{equation}
H = - t \sum_\nn{ij} \left( b_i^\dagger b_j + \mathrm{h.c.}\right)
+ V \sum_\nn{ij} n_i n_j
+ W \sum_\nn{ijk} n_i n_j n_k
-\mu \sum_{i} n_i.
\label{eq:main_ham}
\end{equation}
Here,
$b_i$ and $b_i^\dagger$ denote boson annihilation and creation operators, respectively, and the local density operator $n_i=b_i^\dagger b_i$ has eigenvalues $0$ and $1$ in the hard-core limit. Furthermore, $t$ denotes the nearest-neighbor hopping matrix element, and $\mu$ is a chemical potential that allows one to control the filling (i.e. the density) of the system between $n=0$ (empty) and $n=1$ (full).
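A Hamiltonian of this form is straightforward to diagonalize exactly on very small clusters. The following is a minimal sketch for hard-core bosons on a single 6-site ring (an illustrative toy geometry standing in for one hexagon, not the full honeycomb lattice), with hopping and two-body terms on the ring bonds and three-body terms on consecutive site triples; all parameter values are illustrative:

```python
import numpy as np

def build_hamiltonian(t, V, W, mu, L=6):
    """Hamiltonian of the model above restricted to an L-site ring of
    hard-core bosons: hopping t on bonds, two-body V on bonds,
    three-body W on consecutive triples, chemical potential mu."""
    dim = 2 ** L
    H = np.zeros((dim, dim))
    for s in range(dim):
        occ = [(s >> i) & 1 for i in range(L)]
        # diagonal part: two-body, three-body, chemical potential
        diag = -mu * sum(occ)
        for i in range(L):
            j, k = (i + 1) % L, (i + 2) % L
            diag += V * occ[i] * occ[j] + W * occ[i] * occ[j] * occ[k]
        H[s, s] = diag
        # off-diagonal part: hop i -> j on each bond (h.c. added too);
        # the hard-core constraint forbids hopping onto occupied sites
        for i in range(L):
            j = (i + 1) % L
            if occ[i] == 1 and occ[j] == 0:
                s2 = s ^ (1 << i) ^ (1 << j)  # move boson from i to j
                H[s2, s] -= t
                H[s, s2] -= t
    return H

# Classical limit t = 0: for mu much larger than V and W, the ground
# state is the fully occupied ring with E0 = 6V + 6W - 6*mu.
E0 = np.linalg.eigvalsh(build_hamiltonian(t=0.0, V=1.0, W=1.0, mu=10.0))[0]
```

For the parameters shown, the fully occupied ring has six bonds and six consecutive triples, so $E_0 = 6V + 6W - 6\mu = -48$ in the classical limit; switching on $t$ can only lower the variational ground-state energy.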
\begin{figure}
\begin{center}
\includegraphics[width=250pt]{img/figure1.eps}
\caption{
Top row: Leading three-body repulsion terms and their relative strengths on the honeycomb lattice.
Bottom row: Illustration of the nearest neighbor hopping term, and the relative strengths of the
leading two-body repulsion terms on the honeycomb lattice.
}
\end{center}
\label{fig:interactions}
\end{figure}
In previous works, models of hard-core bosons with extended two- and three-body interactions as effective models of ultra-cold polar molecules were studied for the case of a one-dimensional optical lattice~\cite{capogrossosansone09} and the two-dimensional square lattice geometry~\cite{schmidt08}. In the one-dimensional case, an incompressible phase at filling $n=2/3$ was established for dominant three-body interactions, stabilizing both charge-density wave (CDW) and bond-order wave (BOW) long-ranged correlations, apart from conventional CDW phases appearing at half-filling ($n=1/2$) in the presence of two-body interactions. On the square lattice, several solid phases at fractional fillings as well as supersolid phases were found in a semi-classical approximation, some of which could also be verified by full numerical simulations.
Here, we extend such systematic explorations to the case of the honeycomb lattice, where nearest-neighbor three-body repulsions lead to characteristic effects of strong frustration.
In order to explore the physics of this system, we used a combination of quantum Monte Carlo (QMC) simulations based on the generalized directed loop algorithm within the stochastic series expansion (SSE) representation~\cite{sandvik99b,syljuasen02,alet05}, as well as exact finite cluster calculations and the tensor network renormalization group approach~\cite{levin07} for the classical ($t=0$) limit of the above Hamiltonian, as detailed below.
In the following Section 2, we discuss the results of our calculations on the ground-state phase diagram in the absence of two-body interactions (i.e. for $V=0$), thus focusing on the main aspects of three-body repulsions on the honeycomb lattice. We observe states with large degeneracies in the classical limit, and
the emergence of complex valence bond crystal (VBC) phases, to be described in detail below.
In Section 3, we discuss the behaviour of the system as we perturb it by a finite two-body repulsion $V$, and explore the experimentally relevant parameter regime where $V\gtrsim W$~\cite{buechler07}. We examine the stability range of the
VBC phases, and find that a cascade of incompressible solid phases stems from the competition between two- and three-body repulsion terms.
\section{Three-Body Interactions}
In this section, we focus on the case $V=0$ in order to explore explicitly the effects of three-body repulsions on the honeycomb lattice. The ground state phase diagram in \Fref{fig:phase} summarizes the results from our analysis. At low values of $t/W$, it exhibits a variety of incompressible phases at different (unconventional) fillings $n=9/16$, $5/8$ $(=10/16)$, $2/3$ and $3/4$ $(=12/16)$.
Due to their incompressible nature, these phases lead to finitely extended plateaus in the $\mu$-dependence of the density $n$, as shown e.g. for fixed $t/W=0.3$ in the inset of \Fref{fig:phase}. The actual nature of these
phases, and the quantum phase transitions between them, will be discussed below.
For larger values of $t/W\gtrsim 0.4$, the system is eventually driven by its kinetic energy from
these solid phases via first-order quantum melting transitions into a uniform superfluid phase with a finite superfluid density $\rho_s$.
In the QMC simulations, the superfluid density is obtained
as $\rho_s=\nn{w^2}/(\beta t)$
from measuring the fluctuations of the bosonic winding number $w$ in the standard way~\cite{pollock87} (here, $\beta=1/T$, with $k_B=1$, denotes the inverse temperature). The first-order nature of the melting transitions follows from the pronounced jumps observed in both the density and the superfluid density upon crossing the transition lines.
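The winding-number estimator is straightforward to evaluate from a stream of sampled winding numbers; a minimal sketch (the samples below are made-up illustrative numbers, not actual simulation output):

```python
import numpy as np

def superfluid_density(windings, beta, t):
    """Standard winding-number estimator rho_s = <w^2> / (beta * t),
    evaluated from a sequence of sampled winding numbers w."""
    w = np.asarray(windings, dtype=float)
    return np.mean(w ** 2) / (beta * t)

# illustrative (made-up) winding samples from a hypothetical QMC run
rho_s = superfluid_density([0, 1, -1, 2, 0, 0], beta=20.0, t=0.2)
```

In an incompressible phase the sampled windings would be sharply peaked at $w=0$, driving $\rho_s$ to zero; in the superfluid phase the fluctuations $\nn{w^2}$ stay finite.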
\begin{figure}
\centering
\includegraphics[width=300pt]{img/figure2.eps}
\caption{
Ground-state phase diagram of hard-core bosons on the honeycomb lattice for $V=0$ in terms of $\mu/W$ and $t/W$. The incompressible phases at fillings $n=9/16$, $5/8$, $2/3$ and $3/4$ are labeled by $n$ and underlayed by different colors. Uncertainties on the estimated phase boundaries are indicated by error bars.
The inset shows the density $n$ as a function of the chemical potential $\mu/W$ for fixed $t/W=0.3$, linear system size $L=12$, and an inverse temperature of $\beta=20/W$.
}
\label{fig:phase}
\end{figure}
As a typical example, we show in \Fref{fig:melting} the behavior of $n$ and $\rho_s$ near the quantum melting transition between the $n=9/16$ incompressible phase and the superfluid phase at $\mu/W=1$. The first-order nature of the quantum melting transitions is indeed expected, since (as shown below) the incompressible phases break the space group symmetry, whereas in the uniform superfluid $U(1)$ symmetry breaking occurs at $T=0$.
\begin{figure}
\centering
\includegraphics[width=300pt]{img/figure3.eps}
\caption{
Behavior of the density $n$ and the superfluid density $\rho_s$ near the quantum melting transition between the $n=9/16$ incompressible phase and the superfluid phase at $\mu/W=1$ and $V=0$ taken at $\beta=100$.}
\label{fig:melting}
\end{figure}
\subsection{Quantum Monte Carlo}
In order to perform the SSE QMC simulations for the current model, we employed a cluster decomposition of the Hamiltonian in terms of trimers of nearest-neighbor sites, such that each cluster carries one of the three-body interaction terms. The addition of the nearest-neighbor two-body repulsions $V$ (in Section 3)
proceeds in this decomposition scheme as well, with each two-body term being shared by four such trimers. In the SSE directed loop construction, we thus used a doubly-linked list of 6-leg vertices. While the algorithm performed well for large values of the hopping $t$, we were not able to reach significantly below $t/W\approx 0.1$ due to dynamical freezing of the Monte Carlo configurations once the competing diagonal interaction terms dominate the Hamiltonian (a general issue of the SSE for Hamiltonians dominated by large diagonal terms).
In terms of the simulation temperature $T$, we find that an inverse temperature $\beta=1/T=20-50/W$ ($k_B=1$) provided an optimal trade-off between finite-temperature incoherence and algorithmic performance (the SSE algorithmic cost scales linearly in $\beta$). Furthermore, in order to be commensurate with all superstructures identified in the
incompressible phases, the linear system size $L$ is required to be an integer multiple of $6$ (the total number of lattice sites being $N=2L^2$, as the honeycomb lattice contains two sites per unit cell, forming the two sublattices $A$ and $B$).
Within the above temperature regime, we were able to simulate systems of up to $L=24$, in some cases up to $L=36$, in linear extent.
In phases with large superstructures -- described below
-- we find that the autocorrelation times of the bosonic structures increase to the point that the algorithm cannot tunnel between different realizations of the ordering pattern even within several $10^6$ QMC sweeps, but resides in one particular sector of the ground-state manifold (which, however, varies between independent runs with different random-number streams for a given set of model parameters). During the first stage of the thermalization process,
we annealed the system in a cyclic way, heating it up and cooling it down slowly so that it can relax globally into one of the equivalent ground states. A fixed-temperature thermalization was then performed in the second stage.
This annealing approach yielded better performance than parallel tempering over an extended temperature range, since we are mainly interested in the ground-state phase diagram of the system, where $\beta$ is large.
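The cyclic annealing stage can be sketched as a simple schedule of inverse temperatures (all numbers below are illustrative, not the values used in the actual simulations):

```python
import numpy as np

def cyclic_beta_schedule(beta_max, beta_min, n_cycles, steps_per_leg):
    """Hypothetical cyclic annealing schedule: ramp the inverse
    temperature down (heating) and back up (cooling) n_cycles times,
    ending at beta_max for the subsequent fixed-temperature stage."""
    legs = []
    for _ in range(n_cycles):
        legs.append(np.linspace(beta_max, beta_min, steps_per_leg))  # heat up
        legs.append(np.linspace(beta_min, beta_max, steps_per_leg))  # cool down
    return np.concatenate(legs)
```

Each heating leg lets the configuration unfreeze, and the slow cooling lets it settle into one of the equivalent ground-state sectors before measurements begin.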
We furthermore employed quantum parallel tempering \cite{sengputa02} (in $\mu$ or $t$) at fixed low temperatures in certain regions of the phase diagram, in particular in order to study the quantum phase transitions between neighboring VBC phases. We discuss these aspects in more detail in Section 2.3.
Our separate analysis of the model in the classical limit (i.e. for $t=0$) is presented in Section 2.4. There, we also assess the applicability of the tensor network renormalization group approach for classical systems to the current model with three-body repulsions.
In the following Section 2.2, we discuss in detail the nature of the incompressible phases that appear in \Fref{fig:phase}, and how they are characterized from our numerical analysis.
\subsection{Incompressible Phases}
\paragraph{The $n=9/16$ VBC Phase:}
The first density plateau that is encountered upon filling the lattice has a density $n=9/16$ and extends between $0\le \mu/W \le 2$ in the classical limit $(t=0)$. It has a potential energy per lattice site (equal to the internal energy for $t=0$) of $E_{pot}^{(9/16)}=-9/16\,\mu$, and corresponds to the closest packing of hard-core bosons on the honeycomb lattice that does not introduce any three-body repulsions.
From geometrical considerations in the classical limit,
this bosonic structure is obtained by covering the lattice with equilateral triangles of side length $4\sqrt{2}\, a$ (where $a$ is the distance between two lattice sites), each covering 16 lattice sites. These triangles are filled by nine bosons in a staggered (checkerboard) arrangement in order to obtain the overall filling $9/16$. Neighboring triangles differ in the placement of the checkerboard pattern. This leads to domain walls
along the edges of the triangles, where pairs of bosons reside; for $V=0$, however, this does not incur a potential energy penalty.
For an illustration of this structure, see the left panel of \Fref{fig:9_16}.
\begin{figure}
\centering
\includegraphics[width=300pt]{img/figure4.eps}
\caption{
Illustration of the classical configuration in the $n=9/16$ plateau (left panel). The bosons are shown in two different colors, in order to underline the different checkerboard patterns within neighboring triangles.
The green hexagons illustrate those plaquettes, where the bosons are allowed to change positions
as illustrated in the right panel, without changing the potential energy. For finite hopping $t>0$, this leads to resonances within these hexagonal plaquettes (right panel).
}
\label{fig:9_16}
\end{figure}
This structure allows for a denser packing of the particles (and thus a higher filling) than the overall checkerboard state of filling $n=1/2$, without introducing any three-body energy terms.
On every fourth hexagon (such as the hexagons indicated in the left panel of \Fref{fig:9_16}), six triangles share a common corner.
In the classical limit, the energy of the system remains unchanged if two particles change their positions along such a hexagon by an angle of $2\pi/6$, as indicated in \Fref{fig:9_16}. This local move results in a classical ground state
degeneracy $\Omega=3^{N/32}$. Thus, the ground state entropy is extensive, and the entropy per site is $S/N=(\ln \Omega)/N=\ln(3)/32 \approx 0.034$ in the thermodynamic limit.
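The quoted residual entropy follows from one independent three-fold choice per 32 lattice sites; as a quick arithmetic check:

```python
import math

# One independent 3-fold hexagon orientation per 32 lattice sites:
# degeneracy 3^(N/32), so the entropy per site is S/N = ln(3)/32.
entropy_per_site = math.log(3) / 32
print(round(entropy_per_site, 3))  # 0.034
```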
For finite values of $t$,
the local moves allowed in the classical configurations lead to
resonances on the hexagonal plaquettes, corresponding to second-order hopping processes of the bosons, effectively rotating a hexagon by an angle of $2\pi/3$, as illustrated in the right panel of \Fref{fig:9_16}. In this way, the system gains kinetic energy from these tunneling processes. Such resonances also provide the dominant quantum fluctuations on this density plateau, and stabilize a VBC phase with a superstructure of checkerboarded triangles, linked by the resonating hexagons.
Through an
order-by-disorder
effect, the quantum dynamics thus
selects the ground-state to be a coherent superposition of the local resonance states.
We can indeed identify these peculiar features of the $9/16$ plateau phase from the QMC simulations.
\begin{figure}
\centering
\includegraphics[width=300pt]{img/figure5.eps}
\caption{QMC data of the local density $\nn{n_i}$ (shading) and the kinetic energy density along the nearest-neighbor bonds $\langle K_{ij}\rangle$ (line thickness and shading) for bosons on the honeycomb lattice
in the $n=9/16$ VBC phase
at $V=0$, $t/W=0.2$, $\mu/W=1$, and a system with $L=12$ at $\beta=20$.
For clarity, the equilateral triangles, together forming the unit cell of the VBC superstructure, are indicated as well.
}
\label{fig:9_16__MC__filling}
\end{figure}
In \Fref{fig:9_16__MC__filling}, we show the local density $\nn{n_i}$, along with the kinetic energy density per bond $\langle K_{ij}\rangle $, where $K_{ij}= b_i^\dagger b_j+b_j^\dagger b_i$
and sites $i$ and $j$ belong to a nearest neighbor bond on the honeycomb lattice, for a representative point within the $n=9/16$ phase.
We find that the local density is close to $\nn{n_i}=1$ and $0$, thus exhibiting only few fluctuations, in a staggered (checkerboard) pattern within the triangular structures.
On the other hand, the density around a subset of the hexagons -- those where the triangular checkerboard patterns meet -- lies
within an intermediate range, and it is along the bonds of these hexagons that the
kinetic energy is mainly located. Both observations indicate a residual density dynamics in this incompressible phase, located along these hexagons.
\begin{figure}
\centering
\includegraphics[width=230pt]{img/figure6.eps}
\caption{QMC data for the bond-bond correlations in the kinetic energy $\langle K_{ij} K_{kl} \rangle$ along the nearest-neighbor bonds for bosons on the honeycomb lattice
in the $n=9/16$ VBC phase
at $V=0$, $t/W=0.2$, $\mu/W=1$, and a system with $L=12$ at $\beta=20$. The reference bond $\nn{ij}$ is indicated by the red ellipse. }
\label{fig:9_16__MC__kinetic}
\end{figure}
The resonant nature of the corresponding hopping events is reflected
in the bond-bond correlation function $\langle K_{ij} K_{kl} \rangle$, shown in \Fref{fig:9_16__MC__kinetic}.
We identify the main contribution from the hexagonal resonances,
which lead to the enhanced bond-bond correlation between the reference bond (marked by an ellipse) and the bonds atop and below the reference bond in \Fref{fig:9_16__MC__kinetic}.
Furthermore, correlations of the same strength are visible between the reference bond and its next-nearest-neighbor bonds to the left and right.
These result from residual quantum fluctuations that lead to the finite kinetic energy distribution inside the checkerboarded triangular structures in \Fref{fig:9_16__MC__filling}. We do not observe long-ranged bond-bond correlations; thus the hexagonal resonances evolve essentially independently of each other.
\begin{figure}
\centering
\includegraphics[width=400pt]{img/figure7.eps}
\caption{Finite size scaling
of the density structure factor $S_{9/16}$ for the $n=9/16$ phase of hard-core bosons on the honeycomb lattice at
$V=0$, $t/W=0.1$, and $\mu/W=1$ (right panel). The left panel shows the positions of the peaks in the density structure factor in momentum space, with the peak at $\vec{q}_0=(\pi,0)$ indicated by the arrow.
}
\label{fig:9_16peak}
\end{figure}
The left panel of \Fref{fig:9_16peak} shows the density structure factor for the $9/16$ VBC structure, $S(\vec{q})=\frac{1}{N} \sum_{ij} n(\vec{x}_i)\, n(\vec{x}_j)\: e^{i \vec{q} \cdot (\vec{x}_i - \vec{x}_j)}$. Here, $n(\vec{x}_i)$ denotes
the local density operator at position $\vec{x}_i$.
A 6-fold structure is identified, with one of the equivalent peaks positioned at $\vec{q}_0=(\pi,0)$. This structure relates to the superstructure of equilateral triangles and thus provides a characteristic feature of the density distribution of this phase.
The right panel of \Fref{fig:9_16peak} shows a finite size scaling analysis of QMC data for the structure factor
$S_{9/16}=\langle S(\vec{q}_0)\rangle$
at this characteristic wavevector $\vec{q}_0$ for a system within the $9/16$ VBC phase.
We find that $S_{9/16}/L^2$ indeed extrapolates in the thermodynamic limit ($N\rightarrow\infty$) to a finite value, verifying the presence of long-range density order in the bosonic structure of the $9/16$ VBC phase.
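The density structure factor is a generic estimator that can be evaluated for any density snapshot; a minimal sketch, illustrated on a one-dimensional checkerboard toy configuration rather than the honeycomb data:

```python
import numpy as np

def structure_factor(positions, densities, q):
    """S(q) = (1/N) |sum_i n_i exp(i q . x_i)|^2 for a real-space
    density snapshot; positions has shape (N, d), q has shape (d,).
    This equals (1/N) sum_{ij} n_i n_j exp(i q.(x_i - x_j))."""
    phases = np.exp(1j * (positions @ q))
    return np.abs(np.sum(densities * phases)) ** 2 / len(densities)

# toy example: 1D checkerboard 1, 0, 1, 0, ... with a peak at q = pi
x = np.arange(8.0).reshape(-1, 1)
n = np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=float)
S_pi = structure_factor(x, n, np.array([np.pi]))
```

In a QMC run one would average this quantity over snapshots at the ordering wavevector, exactly as done for $S_{9/16}=\langle S(\vec{q}_0)\rangle$ in the text.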
\paragraph{The $n=5/8$ VBC Phase:}
The next density plateau, encountered upon further increasing the chemical potential, appears between $2< \mu/W \le 5$ in the classical limit and has a filling of $n=5/8$. The classical potential energy per site equals $E_{pot}^{(5/8)}=-5/8\,\mu +W/8$ in this phase.
As for the $9/16$ plateau, we obtain a VBC structure in the quantum regime; however, its effective description is more involved.
\begin{figure}
\centering
\includegraphics[width=200pt]{img/figure8.eps}
\caption{
Mapping between allowed boson configurations, lozenge tilings and dimer coverings of a honeycomb lattice: a) shows the colored lozenge with a trimer-dimer pair and part of the honeycomb superlattice. b) shows an example configuration in which the lozenges arrange to form a dice lattice. The corresponding dimer covering on the honeycomb superlattice is indicated, with the dimer positions marked by fat lines. In c), we show two configurations that
contain a group of three lozenges, where rotating the bosons within the central hexagon flips between the two configurations shown. The notation for the two corresponding states in the quantum dimer model is shown in the bottom row.
}
\label{fig:10_16__filling_with_dimers}
\end{figure}
In the classical limit, each valid configuration at this filling can be mapped to a lozenge tiling of the two-dimensional plane. Each lozenge is formed by eight lattice sites, and contains the boson pattern shown in \Fref{fig:10_16__filling_with_dimers}a, with a trimer-dimer pair such that one three-body vertex is introduced.
A boson covering of filling $n=5/8$ and energy $E_{pot}^{(5/8)}$ results whenever the full area of the lattice is covered with such lozenges. However, one has to impose a parity constraint in the mapping to the boson configuration: coloring the lozenges as shown in \Fref{fig:10_16__filling_with_dimers}a, only sides of lozenges of equal color are allowed to touch, in order not to introduce additional three-body repulsions, which would lead to a higher potential energy.
Furthermore, it is well known that each lozenge tiling of the plane is dual to a hard-core dimer covering of a honeycomb lattice.
This honeycomb superlattice consists of hexagons that are a factor of two larger in linear extent than the hexagons of the underlying honeycomb lattice of the bosonic model.
Different lozenge tilings, along with the corresponding dimer coverings and the underlying boson patterns, are shown in \Fref{fig:10_16__filling_with_dimers}b and c. This mapping from bosonic configurations to dimer coverings allows one to obtain the ground state entropy of the $n=5/8$ phase in the classical limit from that of close-packed hard-core dimer coverings on the honeycomb superlattice, which equals $S=0.108$~\cite{moessner01} (the additional freedom in how the honeycomb superstructure is embedded onto the underlying lattice, and the two possible ways of coloring the lozenges, lead to a further, non-extensive factor of $3\times 2$ in the ground state degeneracy).
Among the various lozenge tilings we find the dice lattice, shown in \Fref{fig:10_16__filling_with_dimers}b, which corresponds to a staggered dimer covering on the honeycomb superlattice. Other tilings contain a special group of three lozenges shown in \Fref{fig:10_16__filling_with_dimers}c.
Rotating the bosons along the central hexagon in this structure, as shown in \Fref{fig:10_16__filling_with_dimers}c, leads to another allowed configuration, with the group of the three lozenges being rotated. In the dimer covering, this leads to a flip of the dimer pattern around the hexagon, as also seen in \Fref{fig:10_16__filling_with_dimers}c.
\begin{figure}
\centering
\includegraphics[width=300pt]{img/figure9.eps}
\caption{
QMC data of the local density $\nn{n_i}$ (shading) and the kinetic energy density along the nearest-neighbor bonds $\langle K_{ij}\rangle$ (line thickness and shading) for bosons on the honeycomb lattice
in the $n=5/8$ VBC phase
at $V=0$, $t/W=0.2$, $\mu/W=3$, and a system with $L=12$ at $\beta=20$. Crosses indicate the most dominant plaquette resonances in this configuration.
}
\label{fig:10_16__MC__filling_with_dimers}
\end{figure}
For finite $t$,
a third-order boson hopping process
corresponds to this local flip in the dimer configuration on the hexagonal plaquettes of the honeycomb superlattice, as illustrated in \Fref{fig:10_16__filling_with_dimers}c.
Using degenerate perturbation theory in $t$,
this local dimer-flip dynamics is described by a quantum dimer model with solely kinetic terms,
\begin{equation}
H^\mathrm{QDM}\propto -t^3/W^2 \sum_{p}\left( \ket{\alpha_p}\bra{\beta_p}+h.c. \right),
\end{equation}
and favors the formation of plaquette resonances as shown in \Fref{fig:10_16__filling_with_dimers}c.
Here, $\ket{\alpha_p}$ and $\ket{\beta_p}$ denote the two states on the hexagonal plaquette $p$ on the superlattice in \Fref{fig:10_16__filling_with_dimers}c involved in the plaquette flip process. For finite values of $t$, the system tries to maximize the number of plaquette resonances, and the ground-state degeneracy is partially lifted~\cite{moessner01}. From the analysis of the quantum dimer model on the honeycomb lattice~\cite{moessner01}, we thus find that the ground state of the bosonic system at filling $n=5/8$ corresponds to the plaquette VBC phase of the quantum dimer model with purely kinetic terms. This phase is characterized by resonances among hexagons, and
we indeed observe a corresponding pattern in the QMC simulations: to exhibit this, we show in \Fref{fig:10_16__MC__filling_with_dimers} the local density and the kinetic energy density on the nearest neighbor bonds for a representative point within the $n=5/8$ phase. The data is taken at $t/W=0.2$, outside the asymptotic regime where the perturbative derivation of the effective quantum dimer model strictly holds. Nevertheless, we can identify in \Fref{fig:10_16__MC__filling_with_dimers}
a pattern that shares the structure of the plaquette VBC phase of the quantum dimer model, as indicated by crosses on the most dominant hexagonal resonances.
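The energetics of a single resonating plaquette can be made explicit by diagonalizing the kinetic dimer term in the two-state basis $\{\ket{\alpha_p},\ket{\beta_p}\}$. The following minimal Python sketch (illustrative only; the prefactor $t^3/W^2$ is absorbed into an overall scale $g$) shows that the ground state is the symmetric resonating superposition:

```python
import numpy as np

# Two-state basis {|alpha_p>, |beta_p>} of flippable plaquette configurations;
# the kinetic dimer term is -g (|a><b| + |b><a|) with g ~ t^3/W^2 > 0.
g = 1.0  # overall scale (illustrative units)
H = -g * np.array([[0.0, 1.0],
                   [1.0, 0.0]])

evals, evecs = np.linalg.eigh(H)   # eigenvalues in ascending order
ground_energy = evals[0]           # equals -g
ground_state = evecs[:, 0]         # equal-weight superposition of the two states

# The ground state is the symmetric "resonating" combination
# (|alpha_p> + |beta_p>)/sqrt(2), gaining kinetic energy g per plaquette.
print(ground_energy, np.abs(ground_state))
```

This energy gain of $g$ per flippable plaquette is why the system maximizes the number of plaquette resonances at finite $t$.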
\paragraph{The $n=2/3$ VBC Phase:}
Further increasing the chemical potential,
the classical model exhibits a first-order phase transition at $\mu/W=5$ from the $n=5/8$ phase to a $n=3/4$ density plateau, with potential energy $E_{pot}^{(3/4)}=-3/4\mu+3/4W$, which (as well as $E_{pot}^{(5/8)}$) equals $-3W$ at $\mu/W=5$. This point in the phase diagram is highly degenerate; in particular, we find among the degenerate ground states also states with a filling of $n=2/3$ and a potential energy $E_{pot}^{(2/3)}=-2/3\mu+1/3W$ (equal to $-3W$ at $\mu/W=5$). One particular such state is shown in \Fref{fig:2_3_figure}. It consists of parallel strips of nearest neighbor boson pairs, separated by zig-zag chains of occupied sites.
Other states of the same filling and energy can be obtained from this configuration by allowing the bosons along the chain segments to move to a neighboring site. One such move is indicated in \Fref{fig:2_3_figure}.
\begin{figure}
\centering
\includegraphics[width=300pt]{img/figure10.eps}
\caption{
A configuration of bosons on the honeycomb lattice at filling $n=2/3$, with potential energy $E_{pot}=-3W$ (left panel). Also indicated is one of the processes that lead to further patterns of the same density and potential energy (right panel). In the quantum model, such moves lead to resonances in the emerging $n=2/3$ phase.
}
\label{fig:2_3_figure}
\end{figure}
Such processes, however, are not independent of each other: for example, once the change indicated in \Fref{fig:2_3_figure} has been made, the two bosons located at the next-nearest-neighbor sites along the chain are blocked from performing a similar move, since that would activate additional three-body repulsion terms and thus increase the potential energy.
These local moves therefore block each other.
We find from the QMC simulations that an extended phase of filling $n=2/3$ is selected via an order-by-disorder effect out of the degenerate ground-state manifold of the classical limit at $\mu/W=5$: a new plateau of filling $n=2/3$ emerges in the quantum phase diagram of \Fref{fig:phase}. It vanishes in the classical limit,
while upon increasing $t$, it
extends well into the regime $\mu/W<5$.
\begin{figure}
\centering
\includegraphics[width=300pt]{img/figure11.eps}
\caption{
QMC data of the local density $\nn{n_i}$ (shading) and the kinetic energy density along the nearest-neighbor bonds $\langle K_ {ij}\rangle$ (line thickness and shading) for bosons on the honeycomb lattice
in the $n=2/3$ VBC phase
at $V=0$, $t/W=0.3$, $\mu/W=5$, and a system with $L=12$ at $\beta=20$.
}
\label{fig:2_3__MC__filling_and_kin}
\end{figure}
In order to analyse the nature of this emerging phase, we show in
\Fref{fig:2_3__MC__filling_and_kin} representative QMC data of the local density and kinetic energy density within the $n=2/3$ phase. One identifies a rigid backbone of parallel strips of boson pairs, where the local density takes on values close to unity. Furthermore, between these strips we find sites with a reduced local density, which are linked perpendicular to the stripes' direction by bonds with an enhanced kinetic energy density.
Such behavior in both the density and the kinetic energy distribution is in accord with bond resonances, induced by the processes illustrated in
\Fref{fig:2_3_figure}.
The opening of the $n=2/3$ plateau at finite $t/W$ can thus be understood in terms of the large local kinetic energy that the system gains, which outweighs the penalty in potential energy. Since the kinetic energy contribution vanishes in the classical limit, the potential energy penalty leads to the disappearance of this phase.
In \Fref{fig:2_3__MC__filling_and_corr}, we furthermore show the bond-bond correlations in the kinetic energy for the $n=2/3$ phase.
While the correlations decay quickly in the direction parallel to the strips, due to the previously mentioned blocking effect, they persist over a rather wide range perpendicular to the stripe direction. This underlines the robustness of the emerging $n=2/3$ phase in the quantum regime.
\begin{figure}
\centering
\includegraphics[width=230pt]{img/figure12.eps}
\caption{
QMC data for the bond-bond correlations in the kinetic energy $\langle K_{ij} K_{kl} \rangle$ along the nearest-neighbor bonds for bosons on the honeycomb lattice
in the $n=2/3$ VBC phase
at $V=0$, $t/W=0.3$, $\mu/W=5$, and a system with $L=12$ at $\beta=20$. The reference bond $\nn{ij}$ is indicated by the red ellipse.
}
\label{fig:2_3__MC__filling_and_corr}
\end{figure}
\paragraph{The $n=3/4$ Solid Phase:}
Increasing the chemical potential beyond $\mu/W=5$, the system enters a phase of filling $n=3/4$, which gives way to the fully occupied state for $\mu/W>9+3t/W$. A particular classical state of this filling, with potential energy $E_{pot}^{(3/4)}=-3/4\mu+3/4W$, consists of the regular superlattice of fully occupied hexagons shown in
the left panel of \Fref{fig:12_16__global_move}.
Additional states of the same filling and potential energy can be constructed by applying \textit{global} moves along parallel lines throughout the system, as shown in Figure \ref{fig:12_16__global_move}.
Such global moves shift all bosons on two parallel lines along one sublattice. The classical ground-state entropy of this phase can be calculated by counting the number of states $\Omega$ obtained by such moves (we write $\Omega$ to avoid confusion with the interaction strength $W$).
As the number of lines along which such moves can be performed is proportional to the linear system size $L$, the entropy per site $S=(\ln \Omega)/(2L^2) \propto 1/L$ scales to zero in the thermodynamic limit.
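The subextensive scaling of this entropy can be checked directly. The short sketch below assumes, purely for illustration, that the number of global line moves grows linearly with $L$, so that the number of reachable states is of order $2^{aL}$ for some constant $a$ (the precise value of $a$ does not affect the scaling argument):

```python
import math

def entropy_per_site(L, a=1.0):
    """S/N for Omega = 2**(a*L) degenerate states on N = 2*L**2 sites."""
    log_omega = a * L * math.log(2.0)  # ln(Omega), assuming Omega = 2**(a*L)
    return log_omega / (2 * L ** 2)    # S/N, vanishing as 1/L

# Doubling L halves the entropy per site: the degeneracy is subextensive.
for L in (12, 24, 48):
    print(L, entropy_per_site(L))
```

Any $\Omega$ growing only exponentially in $L$ (rather than in $L^2$) gives the same $1/L$ decay of $S/N$.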
\begin{figure}
\centering
\includegraphics[width=300pt]{img/figure13.eps}
\caption{
Two classical configurations of filling $n=3/4$. The state in the left panel consists of a regular superlattice of fully occupied hexagons. To obtain the state in the right panel, the particles colored blue have been shifted upwards (as indicated by arrows) with respect to their original positions in the left panel, along two parallel lines.
}
\label{fig:12_16__global_move}
\end{figure}
Because these degenerate ground states are connected not by local moves, but by global displacements, we do not find resonant structures for finite $t$. Due to the global nature of the moves that relate the various classical ground states, we expect the QMC algorithm to generate states that are members of this rigid manifold. Indeed, from the QMC simulations, we obtain
density patterns that show characteristics of the deformed superlattice of fully occupied hexagons shown in \Fref{fig:12_16__global_move}.
A representative result from the QMC simulations is shown in Figure~\ref{fig:12_16_kin}.
\begin{figure}
\centering
\includegraphics[width=300pt]{img/figure14.eps}
\caption{
QMC data of the local density $\nn{n_i}$ (shading) and the kinetic energy density along the nearest-neighbor bonds $\langle K_ {ij}\rangle$ (line shading) for bosons on the honeycomb lattice
in the $n=3/4$ solid phase
at $V=0$, $t/W=0.3$, $\mu/W=5$, and a system with $L=12$ at $\beta=20$.
Two of the fully occupied hexagons observed in this structure are highlighted, as well as a chain-like structure similar to the one in Figure \ref{fig:12_16__global_move}.
}
\label{fig:12_16_kin}
\end{figure}
The classical degeneracy thus appears not to be lifted in the quantum regime.
\subsection{VBC-VBC Quantum Phase Transitions}
After having described the VBC phases appearing in the presence of three-body repulsions on the honeycomb lattice, we next turn to the quantum phase transitions between these incompressible phases.
An exploration of the transition regions is challenging for the QMC algorithm, because neighboring VBC states are rather close in energy, with competing potential and kinetic contributions of similar size.
To illustrate this issue, we consider a numerical example of the relevant energy scales. The potential energy difference between the two phases at $9/16$ and $5/8$ filling is $\Delta E_\mathrm{pot} = -1/16\,\mu + 1/8\,W$, which equals $-0.0625W$
at $\mu/W=3$. The kinetic energy difference between the two VBC phases at a given value of, e.g., $t/W=0.3$ is obtained from the QMC simulations as
$\Delta E_\mathrm{kin}(t/W=0.3)\approx 0.025W$, and is thus of similar size.
We find that the competition between these two similar energy scales leads to an extended transition region between the VBC phases, which gives rise to
an apparent continuous increase of the density, as seen e.g. in the inset of \Fref{fig:phase}. We explicitly checked that such behavior persists for temperatures as low as $T/W=0.05$ and $T/W=0.01$, i.e. well below the above energy scales.
Furthermore, we find strong algorithmic hysteresis effects upon varying the chemical potential at fixed $t/W$ through the transition region within a single simulation.
\begin{figure}
\centering
\includegraphics[width=300pt]{img/figure15.eps}
\caption{
Results of a hysteresis study between the $9/16$ and $5/8$ VBC phases (solid blue line) for a system size $L=12$, and results from quantum parallel tempering simulations for systems of sizes $L=12,24$, and $36$. The upper panel shows the
filling $n$, and the lower panel the compressibility $\kappa$ obtained from the quantum parallel tempering calculations.
}
\label{fig:full_hyst}
\end{figure}
\Fref{fig:full_hyst} exemplifies this behavior:
Starting within the $n=5/8$ phase and decreasing $\mu$, the density is stable until reaching $\mu/W \sim 2.1$, where it drops rapidly to $n=9/16$. Starting from the $n=9/16$ phase and increasing $\mu$, the filling of $n=5/8$ is not
established even well inside the $n=5/8$ VBC region. This behavior reflects the metastability of the
$n=9/16$ state: starting from a configuration typical for the $n=9/16$ phase, the algorithm cannot
efficiently perform the rearrangements necessary to establish the structure of the $n=5/8$ phase.
We employed quantum parallel tempering in $\mu/W$ in order to assess whether, with the help of replica exchanges between neighboring values of $\mu/W$, the algorithm can overcome such metastabilities. Results of these calculations are shown in \Fref{fig:full_hyst} for systems of different sizes at $V=0$ and $t/W=0.3$. We find that
the obtained density $n$ mimics the behavior of the hysteresis curve.
In particular, we observe an increase of $n$ within the range of $\mu/W$, where the upper hysteresis curve drops.
In addition, the compressibility $\kappa$ shows a peak within this region, which increases with system size. In the QMC simulations, the compressibility
$\kappa= \partial n/\partial \mu=\beta (\langle n^2\rangle - \langle n\rangle^2)$ is obtained in terms of the density fluctuations.
The density curve flattens for the larger values of $\mu/W$, and does not reach $5/8$ even at $\mu/W=3$. On the other hand, $n$ reaches the value $9/16$ for $\mu/W>2$. Hence, the replicas are not able to establish the $n=5/8$ VBC structure, even though they tunnel repeatedly throughout the extended parameter range. These observations are consistent with a first-order quantum phase transition between the neighboring VBC phases.
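The fluctuation estimator for $\kappa$ can be illustrated with synthetic data. In the sketch below (purely illustrative; the fillings are drawn from a Gaussian rather than from an actual QMC run), the estimator $\kappa=\beta(\langle n^2\rangle-\langle n\rangle^2)$ is checked against the known variance of the sampled fillings:

```python
import random

def compressibility(samples, beta):
    """kappa = beta * (<n^2> - <n>^2), estimated from density samples."""
    m = len(samples)
    mean = sum(samples) / m
    mean_sq = sum(n * n for n in samples) / m
    return beta * (mean_sq - mean * mean)

random.seed(0)
beta, sigma = 20.0, 0.01
# Synthetic fillings fluctuating around n = 9/16 with standard deviation sigma.
samples = [random.gauss(9 / 16, sigma) for _ in range(200_000)]

kappa = compressibility(samples, beta)
print(kappa)  # close to beta * sigma**2
```

In a QMC run the samples are the measured fillings of individual configurations, so a growing peak in $\kappa$ directly reflects enhanced density fluctuations near the transition.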
However, one might be concerned that the numerical difficulties could hide an intermediate phase separating the VBC phases.
In one such scenario, an intermediate phase would not be commensurate with our lattice sizes. When simulating lattices of varying sizes ($L=12$, 24, 25, and 36), we did not, however, detect any new structures in the transition regime. Of course, we cannot exclude phases with very large unit cells from our finite-size study.
Another possibility would be a superfluid or even supersolid phase in the intermediate region, such as observed in a similar model on the square lattice~\cite{schmidt08}.
However, we can rule out a finite superfluid density in the transition region:
Both quantum parallel tempering in $t/W$ and driving a superfluid system (starting from large $t/W=0.5$) towards the transition region at low $t$ result in $\rho_s$ vanishing for values of $t/W \lesssim 0.35$.
A third possibility for an intermediate phase would be a VBC "emulsion", i.e., a metastable phase mixture with domain walls separating $9/16$- and $5/8$-like VBC domains. While we observe in the QMC simulations bosonic structures that show features of both the $9/16$ and the $5/8$ phase, as would be expected in such a VBC "emulsion" scenario, these structures are also consistent
with a first-order transition, given the algorithmic metastability.
Thus, we consider a first-order VBC-VBC quantum phase transition the most conservative scenario consistent with the QMC data and the apparent difficulty of the QMC algorithm in this parameter regime.
While we cannot determine the precise location of the quantum phase transition, we take the peak position of the compressibility as an estimate.
This is reflected in the phase diagram in \Fref{fig:phase}: the points denote the region where the phase transition occurs according to our estimate. Since we can only provide an interval rather than the exact position, we indicate this interval by an error bar.
We note that the above discussion applies likewise to the other inter-plateau transitions.
\subsection{Classical Limit ($t=0$)}
\paragraph{Finite Cluster Studies:}
In order to analyse the ground-state phase diagram of the present model, it is useful to consider also the classical limit $t=0$ of the quantum Hamiltonian. In the particular case $V=0$, the model reduces to the simplest classical model on the honeycomb lattice with extended three-body interactions. We are not aware of any previous numerical or even exact results on this statistical-physics model. While one can construct states that appear in the classical limit by analysing boson configurations from the QMC simulations, we checked the implications for the classical limit using independent methods.
For this purpose, we solved the classical model exactly on a finite hexagonal cluster of linear system size $L=4$. Doing so, we confirm the extents of the density plateaus as described above, including the absence of a $n=2/3$ plateau in the classical limit.
However, this system size suffers from the particular problem that, among the various states of the $5/8$ plateau, only the bosonic state corresponding to the dice-lattice lozenge tiling (cf. \Fref{fig:10_16__filling_with_dimers}b) fits onto this finite cluster.
While this particular tiling provides a valid boson covering in the classical limit, this restriction masks the huge degeneracy of this phase: on the $L=4$ cluster, no configuration can be realized that contains the group of three lozenges shown in \Fref{fig:10_16__filling_with_dimers}c.
In order to be commensurate with such coverings, the linear system size must be an integer multiple of 6. All other phases, however, are commensurate with this system size, and we can exactly exclude boson patterns other than those described above for up to $32$ sites.
In order to check on larger clusters whether the classical predictions for the bosonic structures are correct, we attempted classical Monte Carlo simulations. These, however, suffer from dynamical freezing at the low temperatures required to explore the plateau structure. In this respect, the QMC simulations perform more efficiently, and we therefore used them to generate configurations of the classical model (this is possible within the SSE framework, since on each propagation level one obtains a configuration of the classical model). While we of course do not sample these classical configurations with the correct statistical weight, we can nevertheless check whether the various classical configurations obtained this way are consistent with our effective descriptions.
In particular, we verified that each classical configuration of density $n=5/8$ and potential energy $E_{pot}=E_{pot}^{(5/8)}$ indeed complies with the construction in terms of lozenge tilings given in Section 2.2.
We did not observe any classical configuration with the correct density and potential energy that violates this construction. We are thus confident that our understanding of the classical phases is correct.
\paragraph{Tensor Network Renormalization Group:}
Recently,
an interesting novel approach to studying the thermodynamic properties of classical statistical models has been proposed. It is based on a tensor network representation of the partition function, which is evaluated directly in the thermodynamic limit using a renormalization procedure on the tensor network decomposition. For details about this method, we refer to
the original paper by M. Levin and C. P. Nave \cite{levin07}, as well as to recent works applying this approach to the Ising model on the triangular \cite{hinczewski08} and the Shastry-Sutherland lattice \cite{chang09}. Since this method performed well in these cases, we applied it also to our model of nearest-neighbor three-body repulsions on the honeycomb lattice. In fact, in the original publication, the method was described explicitly for tensor network models on the honeycomb lattice. We found that one can indeed express the partition function of our model in terms of a tensor network. For this purpose,
one specifies two cyclically symmetric tensors $T^A_{ijk}$ and $T^B_{ijk}$, with indices $i,j,k$ running from $1$ up to a finite integer $D$, where each index corresponds to a degree of freedom $i=1,...,D$ on the bonds of the honeycomb lattice.
Due to the bipartiteness of the honeycomb lattice, three bonds meet at a site of sublattice $A$ or $B$. The tensor $T^A_{ijk}$ is assigned to each site of the honeycomb lattice within sublattice $A$, and likewise the tensor $T^B_{ijk}$
to those of sublattice B.
The partition function of a tensor network model on the honeycomb lattice is then given as
\begin{equation}
Z=\sum_{i,j,k,...=1}^D T^A_{ijk}T^B_{ilm}T^B_{jnp}T^B_{kqr}...,
\end{equation}
i.e. $Z$ is obtained from the product of all the tensors, contracting pairs of indices for each bond of the honeycomb lattice.
In order for $Z$ to match the partition function of the bosonic model (in the classical limit $t=0$), the tensors $T^A_{ijk}$ and $T^B_{ijk}$ can be chosen as given in Table 1, with dimension $D=4$. Here, we consider the general case where both $W$ and $V$ are finite.
One easily verifies that, with these particular tensors, $Z$ indeed recovers the partition function of the classical particle model.
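The contraction pattern of $Z$ can be checked explicitly on the smallest closed honeycomb network: one $A$ site and one $B$ site on a torus, joined by three bonds. The following minimal Python sketch is illustrative only; it uses the standard tensor construction for the Ising model rather than the bosonic tensors of Table 1, and verifies the network contraction against a brute-force sum over the two spins:

```python
import numpy as np

K = 0.7  # beta * J, illustrative Ising coupling

# Bond Boltzmann matrix W_{s,s'} = exp(K * s * s') for spins s = +-1,
# factorized as W = M @ M.T so that each bond carries one tensor index.
W = np.array([[np.exp(K), np.exp(-K)],
              [np.exp(-K), np.exp(K)]])
evals, U = np.linalg.eigh(W)          # both eigenvalues positive for K > 0
M = U @ np.diag(np.sqrt(evals))

# Three-leg site tensor: T_{ijk} = sum_s M_{s,i} M_{s,j} M_{s,k}.
T = np.einsum('si,sj,sk->ijk', M, M, M)

# Smallest closed honeycomb network: one A and one B site on a torus,
# connected by three bonds -> Z = sum_{ijk} T^A_{ijk} T^B_{ijk}.
Z_network = np.einsum('ijk,ijk->', T, T)

# Brute-force partition function over the two spins, three bonds each.
Z_direct = sum(np.exp(3 * K * s * t) for s in (+1, -1) for t in (+1, -1))

print(Z_network, Z_direct)  # agree up to floating-point error
```

The same index-pairing rule, with the $D=4$ tensors of Table 1 in place of the Ising tensors, reproduces the classical bosonic partition function.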
\begin{table}
\begin{center}
\begin{tabular}{| c | c | c || l | l |}
\hline
i&j&k&$T^{A}_{ijk}$&$T^{B}_{ijk}$ \\
\hline
1 & 1 & 1 & 1 & 1 \\
1 & 2 & 2 & 1 & 0 \\
1 & 1 & 3 & 0 & 1 \\
2 & 2 & 2 & 0 & $\exp(\beta\mu)$ \\
3 & 3 & 3 & $\exp(\beta\mu)$ & 0 \\
2 & 2 & 1 & 1 & 0 \\
3 & 3 & 1 & 0 & 1 \\
3 & 3 & 4 & $\exp(-\beta (-\mu+V/2))$& 0 \\
2 & 2 & 4 & 0 & $\exp(-\beta (-\mu+V/2))$ \\
3 & 4 & 4 & $\exp(-\beta (-\mu+V+W))$& 0 \\
4 & 4 & 2 & 0 & $\exp(-\beta (-\mu+V+W))$ \\
2 & 2 & 2 & 1 & 0 \\
3 & 3 & 3 & 0 & 1 \\
4 & 4 & 4 & $\exp(-\beta(-\mu+3/2V+3W))$ & $\exp(-\beta(-\mu+3/2V+3W))$ \\
\hline
\end{tabular}
\caption{Non-zero tensor elements $T^{A}_{ijk}$ for links $i,j,k$ around a site of sublattice $A$, and
$T^{B}_{ijk}$ for links $i,j,k$ around a site of sublattice $B$, respectively. All other finite
tensor elements are obtained from the shown ones via cyclic permutation of the indices $i,j,k$.
}
\end{center}
\end{table}
Once a representation of the classical model in terms of a tensor network has been obtained, we can apply the renormalization procedure to this network. Doing so involves an approximation, since within each renormalization step the dimension of the renormalized tensor network is truncated to a fixed maximum dimension $D_{max}>D$; $D_{max}$ is thus a regularization parameter of the algorithm. From the approximately evaluated partition function, one obtains the free energy $F=-\frac{T}{N}\ln{(Z)}$ in the thermodynamic limit, from which the density $n$ is calculated by taking numerical derivatives of $F$ with respect to $\mu$. Doing so for sufficiently low temperatures $T$, we obtain the low-$T$ phase diagram by resolving the density plateau structure. For sufficiently large $D_{max}$, the numerical data eventually converge to the final result.

Within the tensor network renormalization procedure, one performs a singular value decomposition of a $(D_{max})^2 \times (D_{max})^2$ matrix of contracted tensors, from which the renormalized tensors are constructed. After each step, the tensors have to be normalized such that all tensor elements are smaller than or equal to unity, in order to avoid overflows. Since this procedure is iterated many times, all matrix computations need to be implemented using high-precision floating-point arithmetic (e.g., the initial non-zero tensor elements differ by about 18 orders of magnitude for $\mu/W=2$ and $\beta=20/W$ at $V=0$). In Ref.~\cite{hinczewski08}, quadruple precision was found appropriate for an Ising model. Using a customized version of LAPACK~\cite{laug} with 128-bit reals (a floating-point precision of about $10^{-34}$), we find that for the current model this precision is still not sufficient in the relevant part of the phase diagram. While the method performs well for the case of purely two-body interactions (i.e.
for $W=0$), the tensor iteration procedure does not converge for $\beta W\gtrsim 1$, once $W$ dominates and the chemical potential reaches beyond $\mu/W\approx 1$. (For $\beta W<1$, the method converged, and the results could be verified by comparison with Monte Carlo simulations. However, at such high temperatures the plateau structure is thermally smoothed out, providing no information about the ground-state properties we are after.)
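The normalize-and-track-the-log bookkeeping described above can be illustrated in a minimal setting. The sketch below is illustrative only: it applies the same rescaling strategy to the exactly solvable one-dimensional Ising transfer matrix rather than to the two-dimensional tensor network, showing how iterated coarse-graining steps avoid overflow while retaining the free energy:

```python
import math
import numpy as np

beta, J = 2.0, 1.0

# 1D Ising transfer matrix T_{s,s'} = exp(beta * J * s * s').
T = np.array([[math.exp(beta * J), math.exp(-beta * J)],
              [math.exp(-beta * J), math.exp(beta * J)]])

# Repeated squaring with normalization: after each step, divide by the
# largest element and track the logarithm of the accumulated norm, just
# as the renormalized tensors must be rescaled to avoid overflow.
log_norm = 0.0
M = T.copy()
n_steps = 30                      # chain length N = 2**n_steps
for _ in range(n_steps):
    M = M @ M
    c = M.max()
    log_norm = 2 * log_norm + math.log(c)
    M /= c

N = 2 ** n_steps
log_Z = log_norm + math.log(np.trace(M))  # Z = Tr T**N = norm * Tr M
f_numeric = -log_Z / (beta * N)           # free energy per site

f_exact = -math.log(2 * math.cosh(beta * J)) / beta
print(f_numeric, f_exact)  # agree in the thermodynamic limit
```

Without the rescaling, $T^{2^{30}}$ would overflow any floating-point format; the log-norm carries the large factor exactly, which is the same mechanism, on a far larger tensor, that makes the precision requirements of the honeycomb network so severe.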
\begin{figure}
\centering
\includegraphics[width=250pt,]{img/figure16.eps}
\caption{
Filling $n$ in the classical limit at $V=0$, as a function of $\mu/W$ obtained from the tensor network renormalization group approach at $\beta=10$. The curves correspond to different values of $D_{max}$.
}
\label{fig:tnrg_3}
\end{figure}
As an example, we show in Fig.~\ref{fig:tnrg_3} the density curves that we obtained at $\beta=10$.
In agreement with our previous analysis, we still find that the system enters a $n=9/16$ plateau. However, we cannot explore the full phase diagram using this method. We trace the failure of the tensor network renormalization group procedure
to the fact that, for larger values of $\mu/W$, the tensor elements span many orders of magnitude, due to the exponential dependence of the Boltzmann factors entering the tensor elements.
A broad range of values is thus required in order to account for the physics of the model. However, due to the accumulation of round-off errors in the computation of the renormalized tensor network (which involves a large number of additions and multiplications), a numerical implementation of the algorithm loses the required precision
after a small number of iteration steps. This problem appears to be a generic drawback of the tensor network renormalization group method, from which we suffer because
of the broad range of magnitudes that our model implies in the tensor network representation. For a model with two-body interactions only (i.e. at $W=0$), the range of values is significantly reduced, and in this case we could indeed
recover the (well-known) behavior of the model, which is equivalent to the ferromagnetic Ising model on the honeycomb lattice.
\begin{figure}
\centering
\includegraphics[width=200pt,angle=270]{img/figure17.eps}
\caption{
Filling $n$ in the classical limit ($t=0$) at $W=0$, as a function of $\mu/V$ obtained from the tensor network renormalization group approach at $\beta=10$. The curves correspond to different values of $D_{max}$.
}
\label{fig:tnrg_2}
\end{figure}
As an example, we show in Fig.~\ref{fig:tnrg_2} the filling $n$ as a function of $\mu/V$ at $W=0$ and $\beta=10$,
with a pronounced plateau emerging at $n=1/2$.
\section{Two-body Interactions}
Thus far, we have mainly considered the case of purely three-body repulsions and explored the phases of the bosonic model in that parameter regime. However, from the derivation of the extended Hubbard model for ultra-cold polar molecules in Ref.~\cite{buechler07}, it is clear that two-body interactions will be at least of the same strength as the three-body terms (the claim in Ref.~\cite{schmidt08} that the interactions are solely of three-body type thus appears to be in contrast with the results of Ref.~\cite{buechler07}). Hence, it is important to assess the influence of two-body terms on the physics of such models. Starting with the nearest-neighbor three-body term $W$, the next important interaction term to be considered is the nearest-neighbor two-body repulsion $V$.
The phase diagram for hard-core bosons with solely nearest-neighbor two-body interactions ($W=0$, $V\ne 0$) features a half-filled checkerboard solid for small values of $t/V<0.5$, surrounded by a superfluid phase, without any supersolid phases present in the quantum phase diagram~\cite{wessel07b}. In the current setup, we recover this checkerboard solid at large values of $V/W$. An important question is whether the other phases discussed in Section 2 remain stable in the relevant parameter regime $V \gtrsim W$, and whether new phases appear from the competition between two- and three-body interactions.
\paragraph{Cascaded Transition:}
To make predictions about the impact of two-body interactions, we first consider the $n=9/16$ phase in the classical limit. It consists of triangles with an edge length corresponding to the size of four hexagons (c.f. \Fref{fig:9_16}). No three-body vertices are present in this state, but the structure has neighboring bosons along the edges of the triangles. For finite $V$, these boson pairs result in a potential energy penalty, which tends to destabilize the structure.
One possibility would be a direct transition from the $n=9/16$ phase to the $1/2$ solid upon increasing $V/W$. However, we find that one can construct intermediate states with densities $1/2 <n <9/16$ that are energetically preferred for certain ranges of $V/W$. We obtain such states upon
generalizing the classical $n=9/16$ state, in which checkerboarded (half-filled) triangles are separated by domain walls: we now allow the size of these triangular domains to vary.
Namely, we
consider a honeycomb lattice covered with equilateral triangles of edge length $x$ (in units of the size of one hexagon). Each triangle thus covers $x^2$ lattice sites (\Fref{fig:x12} shows QMC data for the local density corresponding to a configuration with $x=12$). A staggered filling with bosons yields $n_\triangle(x)=x(x+1)/2-1$ bosons per triangle. The boson pairs along the boundaries of the triangles cost an energy $P_\triangle(x)=V(3x-4)/2$, and we obtain the potential energy per lattice site as
\begin{equation}
E_\triangle(x)=-\frac{\mu}{x^2}\, n_\triangle(x) + \frac{1}{x^2}\, P_\triangle(x).
\end{equation}
Minimizing the energy with respect to $x$ gives $x=4$ for $V=0$ (we thus recover the states at filling $9/16$) and $x\rightarrow \infty$ as $V/W\rightarrow \infty$ (we converge to the half-filled checkerboard state).
The number of resonances in a system with $N$ sites equals $N/(2x^2)$, resulting in a macroscopic ground-state degeneracy with entropy $S/N=\ln(3)/(2x^2)$ for these solid phases.
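The optimization over the triangle size $x$ can be carried out directly from the expressions above. The following minimal sketch (energies in units of $W$; integer $x$ compared up to an illustrative cutoff) recovers $x=4$, i.e. the $n=9/16$ state, at $V=0$, and increasingly large triangles as $V$ grows:

```python
def energy_per_site(x, mu, V):
    """Classical potential energy per site of the triangle state of edge x
    (in units of W): E = [-mu * n_tri(x) + P_tri(x)] / x**2."""
    n_tri = x * (x + 1) / 2 - 1   # bosons per triangle of x**2 sites
    p_tri = V * (3 * x - 4) / 2   # boson-pair cost along the triangle edges
    return (-mu * n_tri + p_tri) / x ** 2

def best_x(mu, V, x_max=500):
    """Integer edge length x >= 2 minimizing the energy per site."""
    return min(range(2, x_max + 1), key=lambda x: energy_per_site(x, mu, V))

print(best_x(1.0, 0.0))   # 4  -> the n = 9/16 state
print(best_x(1.0, 0.2))   # larger triangles once boson pairs cost energy
```

Scanning $V/W$ at fixed $\mu/W$ in this way reproduces the cascade: the optimal $x$ grows step by step until the half-filled checkerboard limit $x\rightarrow\infty$ is reached.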
In order to derive the ground-state phase diagram based on this construction, we minimize the energy for fixed $\mu/W$ and $V/W$ as a function of $x$. For $\mu/W\ge 2$, a further competing state is the $n=5/8$ phase, and for $\mu/W \ge 5$ the $n=3/4$ phase described in Section 2. We thus optimise the energy among these various states in order to determine the phase boundaries.
The phase diagram obtained this way is shown in the left panel of \Fref{fig:twobody_classical}.
\begin{figure}
\centering
\includegraphics[width=350pt]{img/figure18.eps}
\caption{Classical phase diagram for finite $V$ and $W$ in terms of $\mu/W$ and $V/W$ (left panel). Colors indicate the filling of the corresponding phases. The $n=5/8$ and $n=3/4$ plateaus extend towards finite values of $V/W$. The color gradient denotes the filling in the cascade, varying from $9/16$ to $1/2$. The phase boundaries of the larger phases are marked by black lines. The right panel shows the filling $n$ as a function of $V/W$ in the classical limit at $\mu/W=1$.}
\label{fig:twobody_classical}
\end{figure}
We find that in addition to the previously established phases, the system exhibits a whole cascade of new solid structures (with $x$ running from $4$ to infinity) that appear upon increasing the value of $V/W$. As an example, we show in the right panel of \Fref{fig:twobody_classical} the filling as a function of $V/W$ for a fixed value of $\mu/W=1$. Starting from the $n=9/16$ plateau, a cascade of transitions eventually leads for $V/W>0.35$ to the staggered plateau of filling $1/2$. This cascade of plateau phases forms an incomplete devil's staircase, induced by the competing nature of the three- and two-body repulsion terms.
\begin{figure}
\centering
\includegraphics[width=250pt]{img/figure19top.eps}
\includegraphics[width=250pt]{img/figure19middle.eps}
\includegraphics[width=250pt]{img/figure19bottom.eps}
\caption{
QMC data of the filling $n$ as a function of $V/W$ for $t/W=0.3$ and $\mu/W=4,\,5$ and $7$ (from top to bottom) for $L=12$ and $24$ at $\beta=20$. \textbf{Top:} For $V/W \approx 1.2$ a plateau at $n=77/144$ (corresponding to $x=12$) appears, with a direct transition to the half-filled solid at $V/W \approx 1.4$. \textbf{Middle:} The $2/3$ density plateau is stable towards finite $V/W$ and decays into the $5/8$ plateau at $V/W \approx 0.3$. \textbf{Bottom:} For large enough $\mu/W$ one can also see the finite extent of the $3/4$ plateau.
}
\label{fig:twobody_filling}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=230pt]{img/figure20.eps}
\caption{
QMC data for the local density $\nn{n_i}$ (shading) for $\mu/W=4$, $t/W=0.3$ and $V/W=1.25$. The colored triangles
highlight the unit cell of the superstructure with $x=12$ found in the density plateau at filling $n=77/144$.
}
\label{fig:x12}
\end{figure}
\paragraph{Numerical Results:}
For finite $t>0$, we expect deviations from the classical cascade structure, since the number of hexagonal resonances that appear at finite $t$
decreases as $1/x^2$ with increasing $x$. The system may thus skip some of the higher-$x$ plateaus, because the low-$x$ phases are stabilized by their larger kinetic energy. This can result in entering the staggered phase after only a finite number of intermediate plateaus. In addition, quantum fluctuations stabilize the $n=2/3$ phase, and we have to account for the influence of this new phase on the overall phase diagram. However, it is difficult to resolve most of the new solid structures numerically, because they are not commensurate with our lattice sizes. Moreover, the differences in energy and filling between states of neighboring values of $x$ are small, and the plateaus are rather narrow already in the classical limit. We nevertheless obtain evidence from the QMC simulations for (i) a non-direct transition to the staggered solid upon turning on a finite two-body repulsion $V$, and (ii) the stabilization of new VBC phases.
For this purpose, \Fref{fig:twobody_filling} shows QMC data for the $V$-dependence of the filling $n$ for $\mu/W=4,~5$ and $7$ respectively, obtained at $t/W=0.3$. There one clearly identifies a plateau corresponding to the $x=12$ structure (of filling $n=77/144$), commensurate with our lattice sizes of $L=12$ and $24$. In \Fref{fig:x12}, we show a representative boson covering obtained by QMC for $\mu/W=4$ at $V/W=1.25$, which is in perfect agreement with the $x=12$ structure introduced above and the formation of hexagonal resonances as in the $n=5/8$ phase.
From \Fref{fig:twobody_filling}, we find that the $n=2/3$ phase is stable for small finite values of $V/W<0.2$, before a transition takes place towards the $n=5/8$ density plateau. Similarly, we find from \Fref{fig:twobody_filling} that the $n=3/4$ phase remains stable for small values of $V$. At larger values of $V/W$ we clearly resolve the $n=2/3$ and the $n=5/8$ plateaus.
While we thus obtain evidence for a cascaded transition to the checkerboard solid, we were not in a position to fully resolve these transitions within our finite-size QMC simulations. We hence did not attempt to compile a full phase diagram for finite $V$ in the quantum case, but considered the above set of representative cuts through the parameter space.
However, considering the relevant interaction range
$V\gtrsim W$~\cite{buechler07}, we still find that in particular the $n=5/8$ VBC phase can be realized for such realistic
values of the ratio between three- and two-body interaction terms.
\section{Conclusion}
We studied a model of hard-core bosons with strong three-body repulsions on the honeycomb lattice using quantum Monte Carlo simulations, exact cluster analysis and the tensor-network renormalization group approach. Besides a superfluid region, the system's ground-state phase diagram exhibits several complex valence bond crystal phases at fractional fillings $9/16$, $5/8$, $2/3$ and $3/4$. The observed quantum phase transitions between neighboring valence bond crystal phases are consistent with first-order transitions, given the mild energy differences in both the potential and the kinetic energy sectors.
With regard to a possible experimental realization based on cold polar molecules in appropriately tuned external electric and microwave fields,
we included in addition to the three-body repulsions also nearest neighbour two-body interactions, which in a realistic set-up are at least of the same strength as
the three-body
interactions. Considering the competition between the different interaction terms,
we obtained a cascade of intermediate incompressible plateaus as the two-body interaction strength increases. Furthermore, we find that the valence bond crystal
phase of filling $5/8$ with an effective low-energy description in terms of a quantum dimer model remains accessible well within the reachable
parameter regime.
A further step towards modeling cold polar molecules on the honeycomb lattice would be to take into account the full long-ranged nature
of both the three-body and two-body interactions~\cite{buechler07}. Treating the longer ranged contributions appropriately appears to require the use of
different calculational techniques, since already the leading contributions to both interaction sectors provided a challenge for the quantum Monte Carlo
approach. Based on our current results, one could expect that longer-ranged interactions stabilize additional solid phases with large unit cells, at least in the classical regime. For finite hopping strengths, residual entropies of the classical states would be lifted, eventually leading to the
emergence of complex resonating structures similar to those described above. Whether other exotic states, e.g. with topological order, can indeed be stabilized in three-body extended Bose-Hubbard models~\cite{buechler07} remains an open question for future investigations.
\section*{Acknowledgements}
We should like to thank K. P. Schmidt, A. L\"auchli, R. Moessner, and A. Muramatsu for interesting discussions, and acknowledge the allocation of CPU time on the high performance computers at HLRS Stuttgart and NIC J\"ulich, where the numerical calculations have been performed. L. B. furthermore acknowledges support from the Studienstiftung des Deutschen Volkes, and HP. B. and S. W. from the DFG within the SFB/TRR 21.
\section*{References}
Q: Show Custom Toolbar while Scrolling Screen with Keyboard Please read the question carefully before taking any action.
I am using a custom toolbar in my Android application with one image, one button, and a title.
Below that, I have a full screen with EditTexts, TextViews and Buttons. While I am filling in data with the keyboard open, scrolling the screen upward hides the toolbar as well, even though the toolbar is outside the scroll view.
I have placed the ScrollView around the body view only, not the whole screen, but the toolbar still gets hidden while scrolling.
A: Try this in the manifest file; it may work.
<activity
android:name=".YourActivity"
android:screenOrientation="portrait"
android:windowSoftInputMode="adjustResize" />
A: Add this to your activity tag in the manifest file:
<activity android:windowSoftInputMode="adjustResize"> </activity>
The activity's main window is always resized to make room for the soft keyboard on the screen.
Moreover, you can find the details about it in the official doc
Also, look at this question
Hope it will be helpful to you.
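To make the accepted fix concrete, here is a minimal layout sketch (view ids and the AndroidX Toolbar class are illustrative assumptions, not taken from the question): the toolbar sits outside the scroll container, so only the form area resizes and scrolls when the keyboard opens with adjustResize.

```xml
<!-- activity_main.xml (illustrative): Toolbar fixed at the top, form scrolls below -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <!-- The toolbar is NOT inside the ScrollView, so it stays visible -->
    <androidx.appcompat.widget.Toolbar
        android:id="@+id/toolbar"
        android:layout_width="match_parent"
        android:layout_height="?attr/actionBarSize" />

    <!-- Only this area is resized when the soft keyboard appears
         (requires android:windowSoftInputMode="adjustResize" on the activity) -->
    <ScrollView
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1">

        <LinearLayout
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:orientation="vertical">
            <!-- EditTexts, TextViews and Buttons go here -->
        </LinearLayout>
    </ScrollView>
</LinearLayout>
```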
If you want to create those areas in your home that really wow, you need the right information. With a bit of know-how, some elbow grease and a touch of creativity, you'll be able to turn your visions into reality. Use the advice and tips you've learned here to help you get started. Consider applying wallpaper to only 50% of a wall. Redecorating can be costly. Cover half of the wall with wallpaper to save money. For instance, you might use wallpaper on half of the wall, then complement it with paint or a decorative border. This can make your home look stylish without breaking the bank.
Make sure you have all the storage space you need. You'll never get a room that looks like it came out of a magazine if there's clutter everywhere. Look for attractive containers lined in a fabric that matches the rest of the room. When you make storage part of your design, it's easy to make a room look fabulous. Also be sure to decide on the theme of your living room before you start the project. You could choose a playful living room with an entertainment system and toys if you have kids, or a peaceful living room with a fireplace if you're a newly married couple.
If you have a small home, buy furniture that can serve multiple functions. For example, a storage ottoman can serve as a place to rest your legs as well as a place to stash magazines and knick-knacks. A futon can serve as seating and a bed for guests. Getting furniture that's versatile can keep your home looking uncluttered if you have a small space.
Compare the samples in different lighting and at different times of day.
If your home is a smaller one where some of the rooms serve multiple functions, you should buy suitable furnishings. Some homes have the dining area and living room in one space, for example. So, when buying pieces of furniture in this situation, try to get items that go well with both the dining and living areas. As you shop, take both rooms into account and buy pieces that will make a strong bond between the two areas and create flow.
If you want to add a dramatic touch to a room without repainting it completely, you can pick one wall to paint in an accent shade. This should be a vibrant shade that coordinates with the rest of the room's colors but definitely stands out. Consider using a primary color in a room that's otherwise painted in pastels, for example. All your careful interior-design decisions will be overlooked if the room you create is no longer functional. Issues like traffic flow, maintenance and your personal comfort, as well as the room's intended function, need to be considered before any design choices are made for the most satisfying results.
Are you searching for a starting place for your next interior design project? Interior design can seem a bit intimidating if decorating doesn't come naturally to you. Luckily, anyone can decorate their home with the right advice. If you follow the helpful tips in the article that follows, you'll have no trouble with your interior design projects. It can be difficult to decorate a basement because you can't possibly imagine what you could do in such a dark and gloomy place. If you use some brighter colors and fabrics, you can turn your dark, damp, depressing basement into a place where you'll want to spend time with your family.
Get creative. Even if you don't consider yourself a great artist, you can make a wonderful collection of art. Draw a symbol or an abstract piece on a sheet of drawing paper. It doesn't have to be that big. Put it in a high-quality frame. If you really want to do something nice, create 3 or 4 drawings and frame them all together. If you're decorating a smaller room or space, try to incorporate mirrors into your design. Mirrors create the illusion of larger space, and add depth and beauty to the room's design as well. Interesting, unique frames can also enhance the decor of the space, turning a mirror into a work of art.
Get new window coverings. Remember that consistency within a space is crucial to the overall look. These give you much greater control over a room's lighting.
In enzymology, an m7G(5')pppN diphosphatase is an enzyme that catalyzes the chemical reaction
7-methylguanosine 5'-triphospho-5'-polynucleotide + H2O ⇌ 7-methylguanosine 5'-phosphate + polynucleotide
Thus, the two substrates of this enzyme are 7-methylguanosine 5'-triphospho-5'-polynucleotide and H2O, whereas its two products are 7-methylguanosine 5'-phosphate and polynucleotide.
This is the enzyme involved in the processing of amphetamines of the cathinone group, including mephedrone and khat.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is 7-methylguanosine-5'-triphospho-5'-polynucleotide 7-methylguanosine-5'-phosphohydrolase. Other names in common use include decapase, and m7G(5')pppN pyrophosphatase.
See also
7-Methylguanosine
BEGIN;
SELECT CURRENT_TIMESTAMP AS '', 'Fixing https://github.com/Netflix/genie/issues/463' AS '';
ALTER TABLE `applications` CHANGE `user` `genie_user` VARCHAR(255) NOT NULL;
ALTER TABLE `clusters` CHANGE `user` `genie_user` VARCHAR(255) NOT NULL;
ALTER TABLE `commands` CHANGE `user` `genie_user` VARCHAR(255) NOT NULL;
ALTER TABLE `job_requests` CHANGE `user` `genie_user` VARCHAR(255) NOT NULL;
ALTER TABLE `jobs` CHANGE `user` `genie_user` VARCHAR(255) NOT NULL;
SELECT CURRENT_TIMESTAMP AS '', 'Finished fixing https://github.com/Netflix/genie/issues/463' AS '';
COMMIT;
\section{Introduction}
\label{sec:introduction}
The summit of Mauna Kea in Hawaii is home to the largest and most
powerful collection of astronomical telescopes in the world. For many
studies accurate flux calibration is critical for deriving the maximum
amount of information from observations with these telescopes, and
correction for the optical atmospheric and instrumental transmissions is
one of the main limitations of astronomical flux measurements from the
ground \citep[see][for the most recent reviews]{Burke10,
Patat11}. Therefore, as part of our Nearby SuperNova Factory project
\citep[\scname{SNfactory}\xspace,][]{Aldering02} we have carefully monitored the atmospheric
transmission over the course of our observing campaign, and in this
paper describe findings that should be of use to other Mauna Kea
observers.
According to the GONG (Global Oscillation Network Group) site survey
\citep{Hill94a, Hill94b}, the extinction above the summit of Mauna Kea
is among the lowest and most stable of any astronomical site. Several
studies were carried out at the end of the 80's to assess the
atmospheric characteristics above Mauna Kea \citep{Krisciunas87,
Boulade87, Beland88}. These have formed the basis for the standard
Mauna Kea extinction curve provided by most observatories on Mauna
Kea. However, the results reported by \cite{Boulade87} and
\cite{Beland88} are single-night extinction studies carried out using
the 3.6~m Canada France Hawaii Telescope (CFHT) over limited wavelength
ranges (3100--3900~\AA{} and 3650--5850~\AA, respectively). Thus, they
do not cover the entire optical window, nor do they reflect variability
of the extinction. The measurements of the optical extinction from the
Mauna Kea summit presented in \cite{Krisciunas87} are based on 27~nights
of $B$ and $V$ band ($\sim 4400$~\AA{} and $\sim 5500$~\AA{})
measurements from three different telescopes, including the 2.2~m
University of Hawaii telescope (UH88) and the CFHT, between 1983 and
1985. Since then, only the evaluation of the quality of the site for
the Thirty Meter Telescope (TMT) has been published \citep{Schock09,
Travouillon11}. The TMT site testing campaign confirms that Mauna Kea
is one of the best sites for ground based astronomy but does not include
the properties and the variability of the spectral extinction at the
site.
\scname{SNfactory}\xspace \citep{Aldering02} was developed for the study of dark energy using
Type Ia supernovae ({SNe~Ia}\xspace). Our goal has been to find and study a
large sample of nearby {SNe~Ia}\xspace, and to achieve the percent-level
spectro-photometric calibration necessary so that these {SNe~Ia}\xspace can be
compared with {SNe~Ia}\xspace at high redshifts. Since 2004 the \scname{SNfactory}\xspace has
obtained spectral time series of over 200 thermonuclear supernov\ae\xspace and these
are being used to measure the cosmological parameters and improve {SNe~Ia}\xspace
standardization by empirical means and through a better understanding of
the underlying physics. The main asset of the \scname{SNfactory}\xspace collaboration is the
SuperNova Integral Field Spectrograph \citep[\scname{SNIFS}\xspace,][]{Lantz04}, a
dedicated integral field spectrograph built by the collaboration and
mounted on the University of Hawaii UH88 telescope.
Along with the supernova\xspace observations, the \scname{SNfactory}\xspace data set includes
spectro-photometric observations of standard stars with \scname{SNIFS}\xspace, which
are used to obtain the instrumental calibration and the atmospheric
extinction. The observations benefit from a large wavelength range
(3200--9700~\AA) and the high relative precision that \scname{SNIFS}\xspace can
achieve. When possible, standards were obtained throughout a given night
to help distinguish between spectral and temporal variations of the
transmission. While it is common practice to derive an extinction curve
by solving independently for the extinction at each wavelength, this
approach ignores the known physical properties of the atmosphere, which
are correlated across wide wavelength regions. Using the standard star
spectra to disentangle the physical components of the atmosphere
extinction ensures a physically meaningful result, allows for robust
interpolation across standard star spectral features, and provides a
simpler and more robust means of estimating the error covariance matrix.
The method described in this paper allows us to obtain such a complete
atmospheric model for each observation night, {\it including nights
afflicted by clouds}. We use the results to generate a new mean Mauna
Kea extinction curve, and to explore issues related to variation in the
extinction above Mauna Kea.
\section{The \scname{SNfactory}\xspace Mauna Kea extinction dataset}
\label{sec:extinction_dataset}
We begin by describing the basic properties of the dataset to be used in
measuring the extinction properties above Mauna Kea.
\subsection{The Supernova Integral Field Spectrograph \& data
reduction}
\label{sec:snifs}
\scname{SNIFS}\xspace is a fully integrated instrument optimized for automated
observation of point sources on a structured background over the full
optical window at moderate spectral resolution. It consists of a
high-throughput wide-band pure-lenslet integral field spectrograph
\citep[IFS, ``\`a la TIGER'';][]{Bacon95, bacon01}, a multifilter
photometric channel to image the field surrounding the IFS for
atmospheric transmission monitoring simultaneous with spectroscopy, and
an acquisition/guiding channel. The IFS possesses a fully filled
$6\farcs 4 \times 6\farcs 4$ spectroscopic field of view subdivided into
a grid of $15 \times 15$ spatial elements (spaxels), a dual-channel
spectrograph covering 3200--5200~\AA{} and 5100--9700~\AA{}
simultaneously, and an internal calibration unit (continuum and arc
lamps). \scname{SNIFS}\xspace is continuously mounted on the south bent Cassegrain
port of the University of Hawaii 2.2~m telescope on Mauna Kea, and is
operated remotely. The \scname{SNIFS}\xspace standard star spectra were reduced using
our dedicated data reduction procedure, similar to that presented in
Section~4 of \cite{bacon01}. A brief discussion of the spectrographic
pipeline was presented in \cite{Aldering06}. Here we outline changes to
the pipeline since that work, but leave a complete discussion of the
reduction pipeline to subsequent publications focused on the instrument
itself.
After standard CCD preprocessing and subtraction of a low-amplitude
scattered-light component, the 225~spectra from the individual spaxels
of each SNIFS exposure are extracted from each blue and red spectrograph
exposure, and re-packed into two $(x,y,\lambda)$-datacubes. This highly
specific extraction is based upon a detailed optical model of the
instrument including interspectrum crosstalk corrections. The datacubes
are then wavelength-calibrated using arc lamp exposures acquired
immediately after the science exposures, and spectro-spatially
flat-fielded using continuum lamp exposures obtained during the same
night. Cosmic rays are detected and corrected using a
three-dimensional-filtering scheme applied to the datacubes.
Standard star spectra are extracted from each $(x,y,\lambda)$-datacube
using a chromatic spatial point-spread function (PSF) fit over a uniform
background
\citep{Buton09}\footnote{\url{http://tel.archives-ouvertes.fr/docs/00/56/62/31/PDF/TH2009_Buton_ClA_ment.pdf}}.
The PSF is modeled semi-analytically as a constrained sum of a Gaussian
(describing the core) and a Moffat function (simulating the wings). The
correlations between the different shape parameters, as well as their
wavelength dependence, were trained on a set of 300~standard star
observations in various conditions of seeing and telescope focus between
2004 and 2007 with \scname{SNIFS}\xspace. This resulted in an empirical chromatic model
of the PSF, depending only on an effective width (accounting for seeing)
and a flattening parameter (accounting for small imaging defocus and
guiding errors). The PSF modeling properly takes the
wavelength-dependent position shift induced by atmospheric differential
refraction into account without resampling.
\subsection{Data characteristics and sub-sample selection}
\label{sec:dataset}
\begin{figure}
\centering
\subfloat[Number of standard stars per night]{%
\label{fig:standards-number}%
\includegraphics[width=\columnwidth]{StandardsNumberDistribution.pdf}}%
\hspace{0mm}%
\subfloat[Airmass range ($\Delta_{X}$) per night]{%
\label{fig:standards-airmass}%
\includegraphics[width=\columnwidth]{DeltaAirmassDistribution.pdf}}%
\caption{Time evolution of the standard star number (\# standards) per
night \protect\subref{fig:standards-number} and of the airmass range
($\Delta_{X}$) per night \protect\subref{fig:standards-airmass}
after quality cuts. The nMAD acronym stands for ``normalized median
absolute deviation'' (the normalization factor is introduced in
order to use the median absolute deviation as a consistent estimator
for the estimation of the standard deviation).}
\label{fig:standards-statistics}
\end{figure}
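The nMAD statistic quoted in the figure caption can be computed as follows (a standalone sketch, not SNfactory code; the factor 1.4826 is the normalization that makes the MAD a consistent estimator of the standard deviation for Gaussian data):

```python
import statistics

def nmad(values):
    """Normalized median absolute deviation: 1.4826 * median(|x - median(x)|).

    Unlike the standard deviation, this robust scale estimate is barely
    affected by a few outlying nights.
    """
    med = statistics.median(values)
    return 1.4826 * statistics.median(abs(v - med) for v in values)

# e.g. one outlier night barely moves the nMAD of a per-night quantity
spread = nmad([1.0, 2.0, 3.0, 4.0, 100.0])  # dominated by the bulk, not the outlier
```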
The \scname{SNfactory}\xspace spectro-photometric follow-up has been running regularly since
September 2004, with a periodicity of two to three nights. In most years
observations were concentrated in the April to December period in order
to coincide with the best weather at Palomar where we carried out our
search for supernov\ae\xspace. Initially the nights were split with UH observers, with
\scname{SNfactory}\xspace taking the second half of allocated nights. In May 2006, our
program switched from half-night to full-night observations. The \scname{SNfactory}\xspace
time was mainly used to observe {SNe~Ia}\xspace (up to 20 per night), but in
order to flux calibrate the supernova\xspace data, standard star observations were
inserted throughout the night.
Two different kinds of standard stars were observed: bright standard
stars ($V = 4\text{--}7$) were mainly observed during nautical twilight,
while faint standard stars ($V = 10\text{--}14$) were observed during
astronomical twilight and during the night. A list of the standard
stars used for calibration purposes by the \scname{SNfactory}\xspace is given in
Table~\ref{tab:standard-stars}. A typical night started with $\sim 3$
bright standard stars and one faint standard star during evening
twilight, followed by 3 to 4 faint standard stars distributed all along
the night in between supernov\ae\xspace, and finished with another faint standard and
$\sim 3$ bright standard stars during morning twilight. Generally the
calibration provided in the literature for the fainter stars is of
higher quality. Moreover, we found that the very short (1--2 second)
exposures required for bright standard stars resulted in very complex
PSF shapes. During the period when observations were conducted only
during the second half of the night, the typical number of standard
stars was more limited, as seen in Fig.~\ref{fig:standards-number}.
The evolution through the years in the number of standard stars observed
per night is shown in Fig.~\ref{fig:standards-number}, while
Fig.~\ref{fig:standards-airmass} shows the airmass range. The noticeable
changes in the numbers and airmass distribution have several causes. As
mentioned above, the half-night allocations up through May 2006
restricted the number of standard stars that could be observed each
night without adversely impacting our SN program. In addition, in order
to improve the automation in executing our observing program we
developed algorithms to select a standard star during windows slated for
standard star observations. Initially the selections were pre-planned
manually each night, but this approach was not sufficiently flexible,
\eg to account for weather interruptions. In fall 2005 we developed an
automated selection algorithm designed to pick an observable star that
was best able to decorrelate airmass and spectral type based on the
standard stars previously observed that night. The idea here was to
obtain a good airmass range and observe enough different stellar
spectral classes to avoid propagating stellar features into the
calibration. More recently, having convinced ourselves that stellar
features did not present a problem when using the physical extinction
model presented here, the automated selection was changed so as to
minimize the extinction uncertainty, considering the standard stars
previously observed that night. Occasionally, nights when the Moon was
full were dedicated to intensive observations of standard stars as a
means of testing various aspects of our method and software
implementation. Finally, a few inappropriate standard stars --
Hiltner600 (double star), HZ4, LTT7987, GRW+705824 (broad-line DA white
dwarfs) -- have been phased out and are not included in the analysis
presented here.
Of the 711~nights originally available with spectroscopic data, we
removed 77~nights with only one or no standard star available. These
occurred primarily due to unplanned telescope closures during the night
caused by weather or technical problems. Such cases were concentrated
during the period when only half-nights were available. For these
nights, since an extinction solution is not possible, a mean atmospheric
extinction is used for the flux calibration of the science targets. To
ensure the quality of the Mauna Kea extinction determination for the
present study, we also choose to discard nights with an \emph{expected}
extinction error larger than 0.1~magnitude/airmass; this resulted in the
exclusion of an additional 55~nights. The expected extinction accuracy
was calculated using the known airmass distribution of the standard
stars and using the achromatic extraction error of 3\% and 2\%
empirically found for bright and faint standard stars, respectively
\citep{Buton09}. Calibration of science data on such nights is not
necessarily a problem; it is simply that the atmospheric properties are
difficult to decouple from the instrument calibration on such
nights. Finally, strict quality cuts on the number of stars ($\geq 3$)
and airmass range ($\geq 0.25$) per night are applied (removing 68
and 33~nights, respectively). Additional cuts based on flags from the
pre-processing steps and the quality of the fit were applied to avoid
bad exposures or spectra with production issues. In the end, the data
sample is comprised of 4285~spectra from 478~nights that passed these
very restrictive cuts.
\section{Flux calibration formalism}
\label{sec:formalism}
In a given night the spectrum, $S_{i}(\lambda, \hat{z}_{i},
t)$\footnote{expressed in pseudo-ADU/s/\AA}, of an astronomical source
$i$ observed by \scname{SNIFS}\xspace can be expressed as,
\begin{equation}
\label{eq:formalism-1}
S_{i}(\lambda, \hat{z}_{i}, t) = S^{\star}_{i}(\lambda, t) \times
C(\lambda, t) \times T_{\atm}(\lambda, \hat{z}_{i}, t),
\end{equation}
where $S^{\star}_{i}(\lambda, t)$ is the intrinsic spectrum of the
source\footnote{expressed in erg/cm$^{2}$/s/\AA} as it would be seen
from above the atmosphere, $T_{\atm}(\lambda, \hat{z}, t)$ is the
time-dependent, line-of-sight ($\hat{z}$) dependent atmospheric
transmission. $C(\lambda, t)$ is the instrument calibration (\ie the
combined response of the telescope, the instrument and the detector),
such that,
\begin{equation}
\label{eq:formalism-2}
C(\lambda, t) = T_{\tel}(\lambda, t) \times T_{\scname{SNIFS}\xspace}(\lambda, t) \times
Q(\lambda, t)
\end{equation}
where $T_{\tel}(\lambda, t)$, $T_{\scname{SNIFS}\xspace}(\lambda, t)$ and $Q(\lambda,
t)$ are respectively the chromatic telescope transmission, instrument
transmission and detector quantum efficiency, all of which are
potentially time dependent.
Because the data are divided by a flat-field exposure that is not
required to be the same from night to night, we are interested only in
$t$ spanning one night intervals. We will assume that the instrument
response is stable over the course of a night, and therefore write:
\begin{equation}
\label{eq:formalism-3}
C(\lambda, t) = C(\lambda).
\end{equation}
Later, in \S~\ref{sec:stability-instrument}, we re-examine this question
and confirm that it is valid at a level much better than 1\%.
As for the atmospheric extinction, we choose to separate
$T_{\atm}(\lambda, \hat{z}_{i}, t)$ with respect to its time dependence,
as follows:
\begin{equation}
\label{eq:formalism-4}
T_{\atm}(\lambda, \hat{z}_{i}, t) =
\overline{T}_{\atm}(\lambda, \hat{z}_{i}) \times
\delta T_{i}(\lambda, \hat{z}_{i}, t).
\end{equation}
Here $\delta T_{i}(\lambda, \hat{z_{i}}, t)$ represents the normalized
atmospheric transmission variability at the time, $t$, along the line of
sight, $\hat{z}_{i}$, to the star $i$. By definition, a photometric
night is one in which the $\delta T_{i}$ are retrospectively found to be
compatible with 1 (taking into account the measurement errors),
irrespective of wavelength, direction, or time.
Furthermore, it is common in astronomy to express the extinction in
magnitudes, such that the transmission, $\overline{T}_{\atm}(\lambda,
\hat{z})$, is given by
\begin{equation}
\label{eq:formalism-6}
\overline{T}_{\atm}(\lambda, \hat{z}) =
10^{-0.4 \times K_{\atm}(\lambda, \hat{z})},
\end{equation}
where $K_{\atm}(\lambda, \hat{z})$ is the atmospheric extinction in
magnitudes per airmass.
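As a numerical illustration of Eq.~\eqref{eq:formalism-6} (not part of the SNfactory pipeline), the conversion from an extinction coefficient in magnitudes per airmass to an atmospheric transmission at a given airmass reads:

```python
def transmission(k_mag_per_airmass, airmass=1.0):
    """Transmission from extinction: T = 10^(-0.4 * K * X)."""
    return 10.0 ** (-0.4 * k_mag_per_airmass * airmass)

# e.g. an extinction of 0.1 mag/airmass at zenith (X = 1)
t_zenith = transmission(0.1, airmass=1.0)  # about 0.912, i.e. ~9% of the light is lost
```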
Overall, Eq.~\eqref{eq:formalism-1} becomes:
\begin{equation}
\label{eq:formalism-7}
\log \frac{S_{i}(\lambda, \hat{z}_{i}, t)}{S^{\star}_{i}(\lambda, t)} =
\log C(\lambda) - 0.4 \times K_{\atm}(\lambda, \hat{z}_{i}) +
\log \delta T_{i}(\lambda, \hat{z}_{i}, t).
\end{equation}
For standard star observations, $S^{\star}(\lambda, t)$ is presumed
known (\cf \S~\ref{sec:results}) and the unknowns are $C(\lambda)$,
$K_{\atm}(\lambda, \hat{z})$ and the $\delta T_{i}(\lambda, \hat{z}_{i},
t)$ (one per star $i$). Conversely, for a supernova observation,
$S^{\star}(\lambda, t)$ becomes the unknown, and $C(\lambda)$, $
K_{\atm}(\lambda, \hat{z})$ and any deviation of $\delta T_{i}(\lambda,
\hat{z}, t)$ from unity would need to be known in order to achieve flux
calibration. As outlined in \cite{Aldering02} and \cite{pereira08}, with
SNIFS, $\delta T_{i}(\lambda, \hat{z}, t)$ can be determined from
secondary stars on the parallel imaging channel for fields having at
least one visit on a photometric night.
Our focus in this paper is on the properties of the atmospheric
extinction, $K_{\atm}(\lambda, \hat{z})$. But as we have just seen, its
determination is linked to the determination of the instrument
calibration, $C(\lambda)$, and of any atmospheric transmission
variations with time, $\delta T_{i}(\lambda, \hat{z}_{i}, t)$, for each
standard star, $i$. In order to constrain the extinction,
$K_{\atm}(\lambda, \hat{z})$, to have a meaningful shape, we now present
its decomposition into physical components.
\section{Atmospheric extinction model}
\label{sec:atmospheric-extinction}
It is now well established that the wavelength dependence of the
atmospheric extinction, $K_{\atm}(\lambda, \hat{z})$, is the sum of
physical elementary components \citep{Hayes75, Wade88, Stubbs07}, either
scattering or absorption. Furthermore, the extinction increases with
respect to airmass $X$ along the line of sight, $\hat{z}$, giving:
\begin{equation}
\label{eq:atm-model-1}
K_{\atm}(\lambda, \hat{z}) =\sum_{j} X^{\rho_{j}}(\hat{z})
\times k_{j}(\lambda).
\end{equation}
Here the different physical components $j$ are:
\begin{itemize}
\item Rayleigh scattering, $k_{R}$,
\item aerosol scattering, $k_{A}$,
\item ozone absorption, $k_{\ensuremath{\mathrm{O_{3}}}\xspace}$,
\item telluric absorption, $k_{\tell}$.
\end{itemize}
$X$ denotes airmass, and $\rho_{j}$ is an airmass correction exponent
(Beer-Lambert law in the presence of saturation), with $\rho_{j} = 1$
\citep{Rufener86, Burke10} for all but the telluric component.
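Equation~\eqref{eq:atm-model-1} can be sketched numerically as follows; the component values used in the example are made-up placeholders, not fitted Mauna Kea coefficients:

```python
def total_extinction(airmass, components):
    """Sum the physical extinction components, K = sum_j X^rho_j * k_j(lambda),
    where each component is a (k_j, rho_j) pair with k_j in mag/airmass.
    rho_j = 1 for Rayleigh, aerosol and ozone; only the (saturated)
    telluric component has rho_j != 1."""
    return sum(k * airmass ** rho for k, rho in components)

# Placeholder components at a single wavelength: Rayleigh, aerosol, ozone
# (rho = 1), plus a telluric band with a saturation exponent rho < 1.
components = [(0.10, 1.0), (0.03, 1.0), (0.02, 1.0), (0.05, 0.6)]
k_total = total_extinction(2.0, components)
```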
In the following subsections, we will present the different extinction
components as well as the time dependent part of the atmospheric
transmission, $\delta T_{i}$.
\subsection{Light scattering}
\label{sec:mie-scattering}
The treatment of light scattering depends on the ratio between the
scattering particle size and incident wavelength. Light scattering by
particles of size comparable to the incident wavelength is a complicated
problem which has an exact solution only for a homogeneous sphere and a
given refractive index. This solution was proposed by \cite{Mie08} to
study the properties of light scattering by aqueous suspensions of gold
beads. A limiting case of this problem --- the Born approximation in
quantum mechanics --- is Rayleigh scattering, for which the size of the
particles is very small compared to the incident wavelength. At the
other extreme, when the wavelength becomes much smaller than the size of
the scattering particles (like the water droplets or ice crystals in the
clouds), the scattering cross section becomes constant and of order of
the geometrical cross section. We cover these different cases in the
following subsections.
\subsubsection{Rayleigh scattering}
\label{sec:rayleigh-scattering}
As introduced above, Rayleigh scattering refers to scattering of light
by atoms or molecules (those of the atmosphere in our case) whose size
is much smaller than the incident wavelength. For molecules in
hydrostatic equilibrium, extinction due to Rayleigh scattering can be
expressed as,
\begin{equation}
\label{eq:rayleigh-1}
k_{R}(\lambda, P, h) =
\frac{2.5}{\ln(10)} \frac{\sigma(\lambda)\ P}{g(h)\ M},
\end{equation}
where $P$ is the atmospheric pressure at the site, $M$ is the molecular
mass of dry air, $g(h)$ is the equivalent acceleration of gravity at the
altitude $h$, and $\sigma(\lambda)$ represents the Rayleigh cross
section for dry air \citep{Bucholtz95, Breon98}.
Eq.~\eqref{eq:rayleigh-1} is a simplified equation of a more general
case including water vapor in the atmosphere, but it turns out that this
correction factor is negligible (of the order of $10^{-3}$) in this
analysis. In this case, the variation in Rayleigh extinction at a given
observing site depends only on surface pressure, $P$.
\cite{Sissenwine62} and \cite{Bucholtz95} have tabulated values for
cross sections of the Rayleigh scattering for a wavelength range from
0.2 to 1~$\mu$m. These data allowed \cite{Bucholtz95} to fit an
analytical model for the cross section:
\begin{equation}
\label{eq:rayleigh-2}
\sigma(\lambda) = A \lambda^{-\left(B + C \lambda + D/\lambda \right)},
\end{equation}
where $A$, $B$, $C$ and $D$ are numerical parameters.
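Eqs.~\eqref{eq:rayleigh-1} and \eqref{eq:rayleigh-2} combine into a short numerical sketch. The coefficients $A$, $B$, $C$, $D$, the gravity, and the mean molecular mass used below are illustrative placeholders rather than the \cite{Bucholtz95} values, and the unit conventions are our own assumption:

```python
import math

def rayleigh_cross_section(lam_um, A=4.0e-28, B=3.9, C=0.05, D=0.04):
    """Eq. (rayleigh-2): sigma = A * lambda^-(B + C*lambda + D/lambda),
    with lambda in microns and sigma in cm^2 (placeholder coefficients)."""
    return A * lam_um ** -(B + C * lam_um + D / lam_um)

def rayleigh_extinction(lam_um, pressure_pa, g=9.79, m_air=4.81e-26):
    """Eq. (rayleigh-1): k_R = (2.5 / ln 10) * sigma * P / (g * M), in mag.
    pressure_pa in Pa, g in m/s^2, m_air = mean mass of an air molecule (kg)."""
    sigma_m2 = rayleigh_cross_section(lam_um) * 1e-4  # cm^2 -> m^2
    return (2.5 / math.log(10)) * sigma_m2 * pressure_pa / (g * m_air)

# Mean Mauna Kea surface pressure of 616 mbar = 61600 Pa
k_blue = rayleigh_extinction(0.40, 61600.0)
k_red = rayleigh_extinction(0.60, 61600.0)
```

The steep, nearly $\lambda^{-4}$ chromaticity makes the blue value several times larger than the red one, and the extinction is strictly linear in the surface pressure, which is why $k_R$ need not be fitted once $P$ is measured.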
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{RayleighComparison.pdf}
\caption{Comparison between different methods to model the Rayleigh
scattering in the \scname{SNIFS}\xspace wavelength range from \cite{Hansen74}
(blue), \cite{Hayes75} (yellow), \cite{Froehlich80} (green) and
\cite{Bucholtz95} (red). The comparison is made for the mean
pressure at the Mauna Kea summit (616~mbar). The discrepancies
between the different models are less than 1.5\% over the wavelength
range considered in this paper.}
\label{fig:rayleigh-comparison}
\end{figure}
Fig.~\ref{fig:rayleigh-comparison} shows other numerical evaluations of
the Rayleigh scattering for a Mauna Kea mean pressure of 616~mbar,
including \cite{Hansen74}, \cite{Hayes75} and \cite{Froehlich80} whose
results are very similar to the model from \cite{Bucholtz95}. Note that
all these methods are very close to the simplified formula
$\lambda^{-4+\epsilon}$, which is often found in the literature. In our
analysis, the value of $\epsilon$ that best matches the Rayleigh
description at the aforementioned 616~mbar is $-0.15$ (\cf
Fig.~\ref{fig:rayleigh-comparison}, solid black line).
Since we have a direct measurement of the surface pressure at the Mauna
Kea summit at the time of our observations, the Rayleigh scattering
component is not adjusted in our model. Rather, following common
practice in the atmospheric science community we directly use the
calculated Rayleigh extinction. For convenience we employ the
\cite{Hansen74} description, very close to the \cite{Bucholtz95}
expression.
\subsubsection{Aerosol scattering}
\label{sec:aerosols-scattering}
The monitoring of atmospheric aerosols is a fundamentally difficult
problem due to their varying composition and their transport by winds over
large distances. For Mauna Kea we expect that aerosols of maritime origin,
essentially large sea-salt particles, will dominate
\citep{Dubovik02}. In that case, we can expect a low aerosol optical
depth given the elevation of Mauna Kea \citep{Smirnov01}. Furthermore,
the strong temperature inversion layer between 2000 and 2500~m over the
island of Hawaii helps to keep a significant fraction of aerosols below
the summit\footnote{\url{http://www.esrl.noaa.gov/gmd/obop/mlo/}}. Major
volcanic eruptions can inject aerosols into the upper atmosphere,
affecting extinction \citep{Rufener86, Vernier11}. Nearby Kilauea has
been active throughout the period of our observations, but its plume is
generally capped by the inversion layer and carried to the southwest,
keeping the plume well away from the Mauna Kea summit.
The particle sizes are of the order of the scattered wavelength for the
wavelength range (3000--10000~\AA) of our study. According to
\cite{Angstrom29}, \cite{Angstrom64} and \cite{Young89}, and in
agreement with the Mie theory, the chromaticity of the aerosol
scattering is an inverse power law with wavelength:
\begin{equation}
\label{eq:aerosol-1}
k_{A}(\lambda) = \tau \times (\lambda/1\;\mu\mathrm{m})^{-\aa}
\end{equation}
where $\tau$, the aerosol optical depth at 1~$\mu$m, and $\aa$, the \AA{}ngstr\"om\xspace
exponent, are the two parameters to be adjusted.
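A sketch of Eq.~\eqref{eq:aerosol-1} with illustrative parameter values (small optical depth, \AA{}ngstr\"om\xspace exponent near 1, broadly consistent with the conditions discussed below); the numbers are placeholders, not fitted results:

```python
import math

def aerosol_extinction(lam_um, tau=0.007, angstrom=1.0):
    """Eq. (aerosol-1): k_A = tau * (lambda / 1 micron)^-a, in mag.
    tau and angstrom are placeholder values, not fitted parameters."""
    return tau * lam_um ** -angstrom

# The exponent can be recovered from the extinction at two wavelengths:
k1, k2 = aerosol_extinction(0.4), aerosol_extinction(0.8)
a_recovered = -math.log(k1 / k2) / math.log(0.4 / 0.8)
```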
According to \cite{Reimann92}, the value of the exponent $\aa$ varies
between $-2$ and $4$ for astronomical observations, depending on the
composition of the aerosol particles. We will see in
\S~\ref{sec:aerosol-mauna-loa} that the \AA{}ngstr\"om\xspace exponent at the Mauna Kea
summit is confirmed to vary within these values, with a mean value close
to 1. While aerosol scattering may be spatially and temporally variable,
since it is anticipated to be weak at the altitude of Mauna Kea we begin
our study by assuming aerosol scattering is constant on the timescale of
a night. We defer the discussion of this hypothesis to
\S~\ref{sec:line-sight}.
\subsubsection{Water scattering in clouds and grey extinction}
\label{sec:water-scattering}
The water droplets and ice crystals in clouds also affect the
transmission of light through the atmosphere. To first approximation, it
can be assumed that the size of the constituents of the clouds are large
compared to the wavelength of the incident light (a cloud droplet
effective radius is of the order of at least 5~$\mu$m,
\citealt{Miles00}). In this case, the dominant phenomenon is the
refraction inside water droplets. The extinction is then almost
independent of wavelength and can be considered achromatic. We will
therefore refer to this extinction as ``grey extinction''. In
\S~\ref{sec:short-variability} we will examine this approximation more
closely, and demonstrate its applicability for cloud conditions under
which useful astronomical observations are possible.
In our current framework, extinction by clouds is the only component
treated as variable on the time scale of a night (\cf
\S~\ref{sec:discussion}). It is represented by the atmospheric
transmission variation parameter, $\delta T (\lambda, \hat{z}, t)$.
Being grey, there is no wavelength dependence, and we may write:
\begin{equation}
\label{eq:water-1}
\delta T(\lambda, \hat{z}, t) = \delta T (\hat{z}, t).
\end{equation}
\subsection{Molecular absorption}
\label{sec:molecular-absorption}
The molecular absorption bands and features in the atmosphere are
essentially due to water vapor, molecular oxygen, and ozone. Nitrogen
dioxide exhibits broad absorption, but is too weak to affect
astronomical observations \citep{Orphal03}. The regions below 3200~\AA{}
and above 8700~\AA{} are especially afflicted, due to strong \ensuremath{\mathrm{O_{3}}}\xspace and
\ensuremath{\mathrm{H_{2}O}}\xspace absorption, respectively.
\subsubsection{Ozone bands}
\label{sec:ozone-bands}
The ozone opacity due to the Hartley \& Huggins band \citep{Huggins90}
is responsible for the loss of atmospheric transmission below
3200~\AA{} and the Chappuis band \citep{Chappuis80} has a
non-negligible influence --- at the few percent level --- between
5000~\AA{} and 7000~\AA{}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{OzoneTemplate.pdf}
\caption{Ozone transmission template of the Hartley \& Huggins band in
the UV and the Chappuis band at 6000~\AA.}
\label{fig:ozone-template}
\end{figure}
In order to model the ozone absorption we are using a template (\cf
Fig.~\ref{fig:ozone-template}) for which we adjust the scale:
\begin{equation}
\label{eq:ozone-1}
k_{\ensuremath{\mathrm{O_{3}}}\xspace}(\lambda) = I_{\ensuremath{\mathrm{O_{3}}}\xspace} \times P_{\ensuremath{\mathrm{O_{3}}}\xspace}(\lambda)
\end{equation}
where $P_{\ensuremath{\mathrm{O_{3}}}\xspace}(\lambda)$ represents the ozone extinction template
computed using the MODTRAN library\footnote{\url{http://modtran.org/}}
and $I_{\ensuremath{\mathrm{O_{3}}}\xspace}$ is a scale factor, expressed in Dobson Units [DU].
\subsubsection{Telluric lines}
\label{sec:telluric-lines}
In contrast to the other extinction components, the telluric lines
affect only a few limited wavelength ranges. The major features are
comprised of saturated narrow \ensuremath{\mathrm{O_{2}}}\xspace lines, including the Fraunhofer ``A''
and ``B'' bands, deepest at 7594~\AA{} and 6867~\AA{} respectively, a
wide \ensuremath{\mathrm{H_{2}O}}\xspace band beyond 9000~\AA{}, and \ensuremath{\mathrm{H_{2}O}}\xspace absorption features in
several spectral regions between 6000~\AA{} and 9000~\AA. The weak O$_4$
features at 5322~\AA{} and 4773~\AA{} \citep{Newnham98} have been
neglected in our current treatment. Table~\ref{tab:telluric-domains}
shows the wavelength ranges taken to be affected by telluric features
for purposes of \scname{SNfactory}\xspace calibration. These wavelength ranges were
determined from the high resolution telluric spectrum from Kitt Peak
National Observatory\footnote{\url{http://www.noao.edu/kpno/}}
\citep[KPNO,][]{Hinkle03}, matched to the \scname{SNIFS}\xspace resolution.
\begin{table}
\caption{Wavelength ranges of telluric features, determined from the
high resolution KPNO spectrum, matched to the resolution of \scname{SNIFS}\xspace
(3.2~\AA{} for the red spectroscopic channel).}
\label{tab:telluric-domains}
\centering
\begin{tabular}{lcc}
\hline
\hline
Feature & Start & End \\
& [\AA] & [\AA] \\
\hline
$\ensuremath{\mathrm{O_{2}}}\xspace \gamma + \ensuremath{\mathrm{O_{4}}}\xspace$ & 6270.2 & 6331.7 \\
$\ensuremath{\mathrm{O_{2}}}\xspace $ B & 6862.1 & 6964.6 \\
$\ensuremath{\mathrm{H_{2}O}}\xspace$ & 7143.3 & 7398.2 \\
$\ensuremath{\mathrm{O_{2}}}\xspace $ A & 7585.8 & 7703.0 \\
$\ensuremath{\mathrm{H_{2}O}}\xspace$ & 8083.9 & 8420.8 \\
$\ensuremath{\mathrm{H_{2}O}}\xspace$ & 8916.0 & 9929.8 \\
\hline
\end{tabular}
\end{table}
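For illustration, Table~\ref{tab:telluric-domains} translates directly into a wavelength mask; the ranges below are those of the table, while the function name is our own:

```python
# Telluric wavelength ranges (Angstrom) from Table (telluric-domains).
TELLURIC_RANGES = [
    (6270.2, 6331.7),   # O2 gamma + O4
    (6862.1, 6964.6),   # O2 B band
    (7143.3, 7398.2),   # H2O
    (7585.8, 7703.0),   # O2 A band
    (8083.9, 8420.8),   # H2O
    (8916.0, 9929.8),   # H2O (strong)
]

def in_telluric(wavelength):
    """True if the wavelength (Angstrom) falls inside a telluric feature."""
    return any(lo <= wavelength <= hi for lo, hi in TELLURIC_RANGES)
```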
For the telluric lines, the airmass dependence, $\rho$, from
Eq.~\eqref{eq:atm-model-1} corresponds to a saturation parameter. Since
the telluric contributions are separated enough to be interpolated over,
it is possible to determine this saturation parameter, which according
to \citet{Wade88, Stubbs07} is approximately~0.5 for strongly saturated
lines and 1 for unsaturated lines. In \S~\ref{sec:telluric-correction}
we will further discuss the value of $\rho$ for the telluric lines as
well as the method used to correct them.
\section{Nightly photometricity}
\label{sec:photometricity}
The photometricity of a night refers to its transmission stability in
time. At the present time we are only able to separate nights with
clouds from those unlikely to have clouds. Besides clouds, the aerosol
and water absorption components of the extinction are the most
variable. However, because their variation is difficult to detect, we do
not currently include these components when assessing the photometricity
of a night. Recall that our formalism and instrument capabilities allow
us to determine the extinction on both photometric and non-photometric
nights. Nights that are non-photometric simply use estimates of $\delta
T_{i}(\lambda, \hat{z}, t)$ obtained from the parallel imaging
channel. For this reason we can afford to be conservative in our
selection of photometric nights.
\begin{figure*}
\centering
\subfloat[Photometric night]{%
\label{fig:skyprobe-phot}%
\includegraphics[width=.85\textwidth]{SkyProbe_09_180.pdf}}\\
\hspace{0mm}%
\subfloat[Non-photometric night]{%
\label{fig:skyprobe-nonphot}%
\includegraphics[width=.85\textwidth]{SkyProbe_09_304.pdf}}%
\caption{\protect\subref{fig:skyprobe-phot} \emph{SkyProbe}\xspace atmospheric
transparency (blue points) and \scname{SNIFS}\xspace standard stars grey extinction
term $\delta T$ (red triangles) for a photometric night and
\protect\subref{fig:skyprobe-nonphot} a non-photometric night.}
\label{fig:skyprobe}
\end{figure*}
\subsection{\emph{SkyProbe}\xspace}
\label{sec:skyprobe}
In order to obtain a reliable assessment of the sky transparency
stability, we use several available sources. These include \emph{SkyProbe}\xspace
\citep{Cuillandre02, Steinbring09}, photometry from the SNIFS parallel
imager, the brightness stability of the SNIFS guide stars, the scatter
of our standard stars about the best extinction solution, and knowledge
of technical issues such as dome slit misalignment.
Because of its high cadence and continuity across the sky, we begin with
measurements from
\emph{SkyProbe}\xspace\footnote{\url{http://www.cfht.hawaii.edu/Instruments/Elixir/skyprobe/home.html}}
\citep{Cuillandre02, Steinbring09}, a wide field camera mounted at the
CFHT and dedicated to real time atmospheric attenuation analysis. Some
outlier cleaning of the \emph{SkyProbe}\xspace data is necessary since it includes
measurements taken when the telescope is slewing. There is also evidence
for occasional small but highly stable offsets between pointings,
suggestive of small systematics in the photometry reference catalog or
photometry technique employed. The robustness of such cleaning is
adversely affected on nights when CFHT slews frequently between fields.
We find that, in general, when the cleaned \emph{SkyProbe}\xspace data stream has an
RMS greater than 3.5\%, the night is unlikely to be photometric. One
added limitation of our use of the \emph{SkyProbe}\xspace data is the possibility that
CFHT could miss the presence of clouds if only part of the sky is
affected throughout the night.
\subsection{Guide star}
\label{sec:guide-star}
Since the guiding video and resultant brightness measurements of SNIFS
guide stars are stored for all guided observations, the presence of
clouds in the SNIFS field can be ascertained directly. Some cleaning is
needed for these data as well, since cosmic ray hits or strong seeing
variations can produce measurable fluctuations in the guide star
photometry. The guide star video has a rate between 0.4 and 2~Hz, so the
data can be averaged over 30--60~sec intervals to achieve sensitivity at
the few percent level for most guide stars. Since the guide stars for
the different targets observed over the course of a night have
different and unknown brightnesses, these data can only detect relative
instability within an observation, not between observations, and only
with poor sensitivity for short exposures.
\subsection{Tertiary reference stars}
\label{sec:photometric-std}
For fields that are visited numerous times, field stars in the \scname{SNIFS}\xspace
photometric channel can provide an estimate of the average attenuation
over the course of the parallel spectroscopic exposure. We refer to
such an estimate as a ``multi-filter ratio'' or MFR. The \scname{SNIFS}\xspace spectroscopic and
imaging channels cover adjacent regions of sky spanning just a few
arcminutes, and sit behind the same shutter. \cite{pereira08} found that
this relative photometry is accurate to $\sim 2$\%, except for the rare
cases where there are few suitable field stars. For long exposures of
supernovae there generally are enough field stars with high
signal-to-noise. Some standard star fields lack enough stars to ensure a
sufficient number of high signal-to-noise field reference stars for our
typical exposure times and under mildly cloudy conditions. Use of such
standards for our program has been phased out.
\subsection{Spectroscopic standard stars}
\label{sec:spectroscopic-std}
Finally, our formalism allows us to easily compute an initial instrument
calibration and extinction curve under the assumption that a night is
non-photometric. The resulting values of $\delta T(\hat{z},t)$ can then
be used to detect the presence of clouds during the standard star
exposures themselves.
\subsection{Combined probes}
\label{sec:combined-probes}
Combining all this information, we estimate the photometricity of the
night for the targets observed by SNIFS. It is important to consider
the noise floor for each source to avoid rejecting too many photometric
nights. By examining the distribution of each photometricity source we
are able to define two thresholds, ``intermediate'' and
``non-photometric''. For our purposes in this paper, non-photometric
nights are defined as having at least one source above a non-photometric
threshold, or all sources above the intermediate thresholds. The
threshold values are listed in Table~\ref{tab:photo-cuts}.
\begin{table}
\caption{Root Mean Square thresholds (in \%) for the different
photometricity source distributions. For values intermediate between the
photometric and non-photometric thresholds, a combination of indicators
is used to ascertain the temporal stability of the atmospheric transmission.}
\label{tab:photo-cuts}
\centering
\begin{tabular}{lcc}
\hline
\hline
Sources & Photometric & Non-photometric \\
\hline
\emph{SkyProbe}\xspace & $<2.5$ & $>3.5$ \\
\scname{SNIFS}\xspace guide star & $<2.5$ & $>5$ \\
\scname{SNIFS}\xspace photometry & $<2.5$ & $>4$ \\
\scname{SNIFS}\xspace standard stars & $<2.5$ & $>4$ \\
\hline
\end{tabular}
\end{table}
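The decision rule described above can be sketched as follows, using the thresholds of Table~\ref{tab:photo-cuts}; the dictionary keys are our own labels for the four sources:

```python
# Sketch of the photometricity decision: a night is non-photometric if any
# source RMS exceeds its non-photometric threshold, or if all sources
# exceed the intermediate (photometric) thresholds.
# Threshold values (% RMS) are those of Table (photo-cuts).

THRESHOLDS = {                 # (photometric, non-photometric) in % RMS
    "skyprobe":       (2.5, 3.5),
    "guide_star":     (2.5, 5.0),
    "photometry":     (2.5, 4.0),
    "standard_stars": (2.5, 4.0),
}

def is_non_photometric(rms):
    """rms: dict mapping source name to its measured RMS in %."""
    any_above_hard = any(rms[s] > hard for s, (_, hard) in THRESHOLDS.items())
    all_above_soft = all(rms[s] > soft for s, (soft, _) in THRESHOLDS.items())
    return any_above_hard or all_above_soft
```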
Examples of the \emph{SkyProbe}\xspace transmission are shown in
Fig.~\ref{fig:skyprobe-phot} for a photometric night and
Fig.~\ref{fig:skyprobe-nonphot} for a non-photometric night. The red
triangles represent the grey extinction term, $\delta T_{i}(\hat{z},t)$,
for each standard star $i$ observed during the night. The pattern of the
grey extinction seen in Fig.~\ref{fig:skyprobe-nonphot} follows that of
the independent \emph{SkyProbe}\xspace data (blue points). This confirms that the
parameter $\delta T(\hat{z},t)$ is able to track atmospheric attenuation
by clouds (\cf Fig.~\ref{fig:grey-histo} to see the distribution of
$\delta T(\hat{z},t)$ in non-photometric conditions). We estimate that
$\sim 35\%$ of the \scname{SNfactory}\xspace nights were photometric according to these
transmission stability cuts. This value is lower than the 45--76\%
reported elsewhere (\citealt{Schock09} \& ESO Search for
Potential Astronomical Sites,
ESPAS\footnote{\url{http://www.eso.org/gen-fac/pubs/astclim/espas/espas_reports/ESPAS-MaunaKea.pdf}});
however, as noted earlier, for our purposes in this paper we wish to set
conservative photometricity criteria.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{GreyExtinction-NonPhot-Distribution.pdf}
\caption{Distribution and histogram of the grey transmission
parameter $\delta T$ for each observed standard star during
non-photometric conditions. Note that many non-photometric nights have
only thin clouds, and therefore $\delta T \sim 1$.}
\label{fig:grey-histo}
\end{figure}
\section{Extinction from standard star observations}
\label{sec:results}
In the case of standard star observations, $S^{\star}(\lambda, t)$
is known \emph{a priori} as a tabulated reference, $\overline{S}(\lambda)$, and
thus the term on the left side of Eq.~\eqref{eq:formalism-7} becomes a
known quantity. To solve for the extinction and instrument
calibration we begin by constructing a conventional~$\chi^{2}$:
\begin{equation}
\label{eq:application-1}
\chi^{2} = \sum_{i}
\mathcal{R}_{i} \cdot V_{i}^{-1} \cdot \mathcal{R}_{i}^{\text{T}},
\end{equation}
where the index $i$ stands for each individual standard star, $V_{i}$
is the covariance matrix (described below) and $\mathcal{R}_{i}$
represents the residuals, given by,
\begin{equation}
\label{eq:application-1bis}
\mathcal{R}_{i} = \log C(\lambda) -
0.4 \times K_{\atm}(\lambda, \hat{z}) +
\log \delta T_{i}(\hat{z}, t) -
\log\frac{S_{i}(\lambda, \hat{z}, t)}{\overline{S}_{i}(\lambda)}.
\end{equation}
$K_{\atm}(\lambda, \hat{z})$ represents the parametrization detailed in
\S~\ref{sec:atmospheric-extinction} for the Rayleigh, aerosols, ozone
and telluric extinction components. The only adjustable parameters
are:
\begin{itemize}
\item $C(\lambda)$, the instrument calibration (\cf Eq.~\ref{eq:formalism-2}),
\item $\aa$ and $\tau$, the aerosol \AA{}ngstr\"om\xspace exponent and optical
depth,
\item $I_{\ensuremath{\mathrm{O_{3}}}\xspace}$, the ozone template scale factor,
\item $\delta T_{i}(\hat{z}, t)$, the transmission variation for each
standard star $i$.
\end{itemize}
$k_{R}(\lambda)$ is not adjusted since it depends only on the surface
pressure $P$, which is known \emph{a priori}. The instrument
calibration, $C(\lambda)$, and the atmospheric extinction,
$K_{\atm}(\lambda, \hat{z})$, are spectrally smooth, so we do not
constrain the model at full resolution. Rather, we employ a coarser
spectral grid, consisting of ``meta-slices''; we construct meta-slices
for each of the two \scname{SNIFS}\xspace channels, with meta-slice widths between 100
and 150~\AA{} depending on the channel. The telluric line scale factors
must also be determined, but we will see in the following section that
this can be accomplished in a separate step.
$V_{i}$ in Eq.~\ref{eq:application-1} is the covariance matrix between
all meta-slice wavelengths of standard star $i$ for a given night; this
assumes the covariance between standards is zero. To build the
covariance matrix of each standard star, we first include the
statistical covariance issued from the point-source PSF-extraction
procedure. A constant is then added to the whole matrix, representing a
3\% correlated error between all meta-slices for a given standard star
observation. This is added as a way to approximate our empirically
determined per-object extraction error.
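A minimal sketch of Eq.~\eqref{eq:application-1} and the covariance construction just described: statistical variances on the diagonal plus a constant term correlating all meta-slices (the 3\% floor). The array sizes and values are illustrative:

```python
import numpy as np

def star_covariance(stat_var, corr_err=0.03):
    """Covariance for one standard star: diagonal statistical variances
    (1-D array, one entry per meta-slice) plus a constant corr_err**2
    correlating all meta-slices."""
    n = len(stat_var)
    return np.diag(stat_var) + corr_err ** 2 * np.ones((n, n))

def chi2(residuals_per_star, covariances):
    """Eq. (application-1): sum over stars of R V^-1 R^T."""
    total = 0.0
    for r, v in zip(residuals_per_star, covariances):
        total += float(r @ np.linalg.solve(v, r))
    return total
```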
As described in \S~\ref{sec:priors}, when solving for the instrument
calibration and extinction we will modify Eq.~\ref{eq:application-1}
to include Bayesian priors on some atmospheric parameters.
\subsection{Telluric correction}
\label{sec:telluric-correction}
The telluric lines affect only limited wavelength regions, whereas the
other atmospheric extinction components are continuous (aerosol and
Rayleigh scattering) or very broad (ozone). Although it is possible to
fit the telluric lines at the same time as the extinction continuum, we
chose to do it in a separate step both for historical implementation
reasons and to allow a telluric correction when the flux calibration is
not needed. As a consequence, with our approach the telluric wavelength
regions can be either avoided for the study of the atmospheric
extinction continuum or corrected separately using standard stars to
determine an average correction spectrum per night. In the latter case we
separate $K_{\atm}(\lambda, \hat{z})$ as follows:
\begin{equation}
\label{eq:telluric-1}
K_{\atm}(\lambda, \hat{z}) = \underbrace{%
X(\hat{z}) \times
\left(
k_{R}(\lambda) + k_{A}(\lambda) + k_{\ensuremath{\mathrm{O_{3}}}\xspace}(\lambda)
\right)
}_{K(\lambda, \hat{z})} + X^{\rho}(\hat{z}) \times k_{\tell}(\lambda).
\end{equation}
For a standard star $i$ on a given night,
\begin{equation}
\label{eq:telluric-2}
\mathscr{C}^{\star}_{i}(\lambda, \hat{z}, t) =
\frac{S_{i}(\lambda, \hat{z}, t)}{\overline{S}_{i}(\lambda)} =
\delta T_{i}(\hat{z}, t) \times C(\lambda) \times
10^{-0.4 \times K(\lambda, \hat{z})},
\end{equation}
behaves as a smooth function of wavelength, which can be reasonably well
modeled with a spline $\mathscr{C}(\lambda, \hat{z}, t)$, outside the
telluric lines regions. Inserting this into Eq.~\eqref{eq:formalism-7}
for standard stars gives:
\begin{equation}
\label{eq:telluric-3}
\log \frac{S_{i} (\lambda, \hat{z}, t)}{\overline{S}_{i}(\lambda)} =
\log \mathscr{C}_{i}(\lambda, \hat{z}, t) -
0.4 \times X_{i}^{\rho}(\hat{z}) \times k_{\oplus}(\lambda).
\end{equation}
As shown in \S~\ref{sec:atmospheric-extinction}, the amplitude of the
telluric lines is proportional to the factor $X^{\rho}(\hat{z})$.
Taking the logarithm of Eq.~\eqref{eq:telluric-3}, we obtain a linear
expression with respect to the logarithm of the airmass, $\log
X(\hat{z})$, allowing us to fit for the saturation factor $\rho$ and the
telluric extinction $k_{\oplus}(\lambda)$:
\begin{equation}
\label{eq:telluric-4}
\log \left( -2.5 \times
\log \frac{S_{i}(\lambda, \hat{z}, t)/\overline{S}_{i}(\lambda)}{%
\mathscr{C}_{i}(\lambda, \hat{z}, t)}
\right) = \log k_{\oplus}(\lambda) + \rho \times \log X_{i}(\hat{z}).
\end{equation}
Optically thin absorption produces attenuation proportional to the
airmass ($\rho=1$) whereas highly saturated lines are expected to have
equivalent widths growing as the square root of the airmass
($\rho=0.5$). According to the observations performed by \cite{Wade88}
for airmasses from 1 to 2, the saturated Fraunhofer ``A'' and ``B''
lines and the water lines have an airmass dependence $\rho \simeq 0.6$.
We repeated their approach and fit for $\rho_{\ensuremath{\mathrm{O_{2}}}\xspace}$ and $\rho_{\ensuremath{\mathrm{H_{2}O}}\xspace}$
for each observation night. The saturation distributions are shown in
Fig.~\ref{fig:telluric-saturation}. We find a median value of 0.58 for
$\rho_{\ensuremath{\mathrm{O_{2}}}\xspace}$, with a normalized median absolute deviation (nMAD) of
0.03. As for the \ensuremath{\mathrm{H_{2}O}}\xspace lines, since the water content in the atmosphere
is variable, so is the saturation parameter $\rho_{\ensuremath{\mathrm{H_{2}O}}\xspace}$. We found
median values of $0.60\pm0.27$ and $0.35\pm0.37$ for water regions below
and above $\approx 9000$~\AA{}, respectively.
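The per-night fit of Eq.~\eqref{eq:telluric-4} reduces to a linear regression of the log of the integrated absorption against $\log X$. A sketch on noise-free synthetic data generated with $\rho = 0.6$, where the regression recovers the inputs exactly in this idealized case:

```python
import numpy as np

rho_true, k_true = 0.6, 0.15               # synthetic inputs
X = np.array([1.0, 1.2, 1.5, 1.8, 2.0])    # airmasses of the standards
absorption = k_true * X ** rho_true        # plays the role of -2.5 log10(S / Sbar / C)

# Eq. (telluric-4): log(absorption) = log(k_tell) + rho * log(X)
rho_fit, log_k_fit = np.polyfit(np.log(X), np.log(absorption), 1)
```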
The latter result for the strong \ensuremath{\mathrm{H_{2}O}}\xspace absorption region (8916--9929~\AA)
is treated independently from the region below $\approx 9000$~\AA{}
since it cannot be measured very well. This is because the wavelength
range of this particular band extends beyond the \scname{SNIFS}\xspace wavelength
range, which makes the estimation of the spline fit for $\mathscr{C}$
unreliable. As we can see in Fig.~\ref{fig:telluric-computation}, the
``strong'' water telluric band is difficult to correct in these
conditions. Nevertheless, by fixing the saturation parameter, a
correction accurate to 5--10\% can still be obtained.
Subsequently, in order to improve the fit, we choose to neglect the
variations of the saturation and fix $\rho_{\ensuremath{\mathrm{O_{2}}}\xspace} = 0.58$ for the
molecular oxygen absorption and $\rho_{\ensuremath{\mathrm{H_{2}O}}\xspace} = 0.6$ for the water
vapor absorption \citep[as did][]{Wade88}. Neglecting the saturation
variations for the water bands has no perceptible negative impact on the
quality of the telluric correction since there is still a degree of
freedom from the telluric scale factor.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{TelluricSaturationParameter.pdf}
\caption{Distributions of the saturation parameters, $\rho_{\ensuremath{\mathrm{O_{2}}}\xspace}$ and
$\rho_{\ensuremath{\mathrm{H_{2}O}}\xspace}$, for each night of the \scname{SNfactory}\xspace data set. The water lines
above 9000~\AA{} are treated separately due to the difficulties
encountered in the correction of the ``strong'' \ensuremath{\mathrm{H_{2}O}}\xspace telluric
band. One reason we chose to fix the water saturation parameter is
that some fits gave unphysical values (\eg $\rho_{\ensuremath{\mathrm{H_{2}O}}\xspace} > 1$).}
\label{fig:telluric-saturation}
\end{figure}
After removing the pseudo-continua, $\mathscr{C}(\lambda, \hat{z}, t)$,
from the spectra, linear fits of the intensity are performed for each
group of lines of the \scname{SNfactory}\xspace derived telluric template (\ie \ensuremath{\mathrm{O_{2}}}\xspace and \ensuremath{\mathrm{H_{2}O}}\xspace)
with their respective saturation parameter fixed.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Telluric_07_253.pdf}
\caption{Example of a telluric correction spectrum computation for a
typical night. Top panel: All the reference normalized standard
star spectra $S/\overline{S}$ of the night and their continua
$\mathscr{C}$ (dashed lines). Middle panel: Adjusted telluric
template intensities as function of their
saturation. Bottom panels: Dependence with respect to
airmass of $\log S/\overline{S} - \log \mathscr{C}$ integrated
over each group of lines (shaded areas in middle panel). The
dashed lines represent the airmass dependence with a
\emph{fixed} saturation exponent.}
\label{fig:telluric-computation}
\end{figure*}
The results of the linear fit for $k_{\oplus}(\lambda)$ from a given
night are shown on Fig.~\ref{fig:telluric-computation}. The first panel
shows each observed spectrum divided by its corresponding intrinsic
spectrum (solid lines), together with the pseudo-continuum
$\mathscr{C}(\lambda, \hat{z}, t)$ (dashed lines) which is a global fit
for $S/\overline{S}$ in which the telluric regions are
excluded. The resulting telluric correction template is presented in the
second panel. The linear fits for each group of lines are shown in the
three bottom panels. We use $\mathrm{H_{2}Os}$ to denote the strong
\ensuremath{\mathrm{H_{2}O}}\xspace telluric lines redward of 9000~\AA{}, and $\mathrm{H_{2}Ow}$ to
denote the weaker \ensuremath{\mathrm{H_{2}O}}\xspace telluric lines blueward of this. In the middle
bottom panel we see that the $\mathrm{H_{2}Os}$ correction is poorly
constrained due to the large scatter, as expected. (We eventually
expect to improve this situation by extending the spectral extraction
beyond 1~$\mu$m.)
The current accuracy of the telluric correction is generally at the same
level as the noise of the spectra, and thus sufficient for rigorous
spectral analysis, including studies of spectral indicators
\citep{Bailey09, Chotard11}. For the oxygen lines, which are very
narrow, some wiggles ($\sim 2$\% peak to peak fluctuations) can remain
after the correction in a few cases due to small random wavelength
calibration offsets (of the order of $\sim 0.1$~\AA)
between the spectra and the template.
\begin{figure*}
\centering
\includegraphics[width=1.01\textwidth]{Extinction_07_253_extcal.pdf}
\caption{Upper panel: Atmospheric extinction, $K(\lambda)$ (solid
line), for a given non-photometric night (2010-06-28); \ie the sum
of its physical components, Rayleigh scattering $k_{R}(\lambda)$
(dashed line), aerosol scattering $k_{A}(\lambda)$ (dash-dot line)
and ozone absorption $k_{\ensuremath{\mathrm{O_{3}}}\xspace}(\lambda)$ (dotted line). Middle
panel: instrument calibration $C(\lambda)$; this includes not only
the overall throughput, but also scaling by the spectral flat field.
Bottom panel: Relative error of the standard star fluxes after the
fit (\ie the difference between the individual standard star flux
solutions and $\log C(\lambda)$). The grey bands in all three panels
indicate meta-slices that are affected by the ``strong'' \ensuremath{\mathrm{H_{2}O}}\xspace
telluric features. This wavelength region is not used when fitting
for the other physical components.}
\label{fig:comp-extinction}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1.01\textwidth]{Extinction_07_253_airmass.pdf}
\caption{Linear fit of $\log S/\overline{S} - \log\delta T$ with
respect to the airmass for three different wavelengths (indicated by
dashed lines in the top panel), where each point represents a star
with (large points with error bars) or without (small points) grey
extinction correction. The dashed lines represent the linear fits
and the grey bands their corresponding error.}
\label{fig:extinction-airmass}
\end{figure*}
\subsection{Nightly atmospheric extinction solutions}
\label{sec:multi-stand-approach}
\subsubsection{Introduction of Bayesian priors}
\label{sec:priors}
Outside of the telluric lines, or after correction of the telluric
features, Eq.~\eqref{eq:formalism-7} for standard stars becomes:
\begin{equation}
\label{eq:multi-standard-1}
\log \frac{S_{i} (\lambda, \hat{z}, t)}{\overline{S}_{i} (\lambda)}
= \log C(\lambda) - 0.4 \times X_{i}(\hat{z}) \times K(\lambda) +
\log \delta T_{i}(\hat{z}, t).
\end{equation}
Under these conditions, a degeneracy appears in
Eq.~\eqref{eq:multi-standard-1} between the instrument calibration $\log
C(\lambda)$ and the average of the grey extinction parameters $\langle
\log \delta T_{i} \rangle$. This means that the value of $C(\lambda)$
and the geometric average of the $\delta T_{i}$ values will only be
relative measurements during non-photometric nights. The absolute scale
of the $\delta T_{i}$ can, however, be determined independently using MFRs
obtained from secondary stars in the \scname{SNIFS}\xspace photometric channel. Such
MFRs are not needed to compute the atmospheric extinction, but we refer
the interested reader to \cite{pereira08} and \cite{Scalzo10} for
further description of this particular step of the \scname{SNIFS}\xspace flux
calibration process.
This degeneracy can be lifted during the fit by setting the mean value
of the grey extinction parameter to an arbitrary value. For a
photometric night there should be no need for a grey extinction term
($\delta T_{i} \equiv 1$), thus $\langle \log \delta T_{i} \rangle$
would be zero if such a term were included. To ease comparisons between
photometric and non-photometric nights, or the effects of changing the
photometric status of a night, we therefore set the mean value of the
cloud transmission to 1 on non-photometric nights. Since such nights
often have only thin cirrus or clear periods, this can provide
meaningful information on cloud transmission for other types of
studies. Note that mathematically the 3\% correlated error put into the
covariance matrix, $V$, can be traded off against the grey extinction
term on non-photometric nights. However, this has no effect on the
photometric solution.
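The normalization that lifts this degeneracy can be sketched as follows: any constant can be exchanged between $\log C(\lambda)$ and the $\log \delta T_{i}$ without changing the model, so the mean of the grey terms is simply shifted into the calibration (a hypothetical helper, not the pipeline's implementation):

```python
import numpy as np

def renormalize_grey(logC, log_dT):
    """Shift the mean of the grey-extinction parameters into the
    instrument calibration so that <log dT_i> = 0, i.e. the geometric
    mean of the cloud transmissions is 1."""
    mean = np.mean(log_dT)
    return logC + mean, log_dT - mean
```

Since $\log C + \log \delta T_{i}$ is unchanged, the fit residuals and the photometric solution are unaffected by this shift.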
In addition, in order to ensure that all components behave physically,
we chose to also apply priors to the aerosol and ozone parameters. This
choice is motivated by the fact that degeneracies can appear between the
physical components in some wavelength ranges, resulting in negative
extinction or poor numerical convergence (for example, on
non-photometric nights there is a degeneracy between $\delta T$ and an
aerosol component with $\tau \sim \lambda^{0}$). All priors are Gaussian, but we chose to set
a prior on the logarithm of the aerosol optical depth since this
provides a better match to the aerosol optical depth distribution (\cf
Fig.~\ref{fig:optical-depth}) at the atmospheric observatory on nearby
Mauna Loa. All priors are summarized in Table~\ref{tab:priors}, and they
are implemented by adding a penalty function to the $\chi^{2}$ of
Eq.~\eqref{eq:application-1}:
\begin{equation}
\label{eq:multi-standard-2}
\chi^{2}_{\mathrm{total}} = \chi^{2} + \Psi^{2}
\end{equation}
where the priors on the atmospheric extinction shape are
encapsulated in the penalty function,
\begin{equation}
\label{eq:multi-standard-4}
\Psi^{2} = \left( \frac{I_{\ensuremath{\mathrm{O_{3}}}\xspace} - {I^{\star}_{\ensuremath{\mathrm{O_{3}}}\xspace}}}{\sigma^{\star}_{I_{\ensuremath{\mathrm{O_{3}}}\xspace}}} \right)^{2} +
\left( \frac{\aa - \aa^{\star}}{\sigma^{\star}_{\aa}} \right)^{2} +
\left( \frac{\ln ( \tau / \tau^{\star} ) }{\varsigma^{\star}_{\tau}} \right)^{2}.
\end{equation}
The extinction for a given night is obtained by minimizing
Eq.~\eqref{eq:multi-standard-2} over all standard stars $i$ and all
meta-slice wavelengths $\lambda$.
\begin{table}
\caption{List of priors used in the fit for the
atmospheric extinction and their corresponding errors.
Their goal is to ensure physical behavior of the
extinction components.}
\label{tab:priors}
\centering
\begin{tabular}{cr@{ $\pm$ }lc}
\hline
\hline
Prior & \multicolumn{2}{c}{Value} & Scaling \\
\hline
$I^{\star}_{\ensuremath{\mathrm{O_{3}}}\xspace}$ & 260 & 50~DU& linear \\
$\tau^{\star}$ & 0.007 & 80\% & logarithmic \\
$\aa^{\star}$ & 1 & 3 & linear \\
\hline
\end{tabular}
\end{table}
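Using the values of Table~\ref{tab:priors}, the penalty of Eq.~\eqref{eq:multi-standard-4} can be sketched as follows (variable names are illustrative, not the pipeline's):

```python
import numpy as np

# Priors and widths from Table 'tab:priors' (names are illustrative).
O3_PRIOR, O3_SIG = 260.0, 50.0     # ozone column [DU], linear prior
TAU_PRIOR, TAU_LNSIG = 0.007, 0.8  # aerosol optical depth, log prior (80%)
ANG_PRIOR, ANG_SIG = 1.0, 3.0      # Angstrom exponent, linear prior

def penalty(I_O3, angstrom, tau):
    """Gaussian penalty Psi^2 added to the chi^2 (Eq. multi-standard-4)."""
    return ((I_O3 - O3_PRIOR) / O3_SIG) ** 2 \
         + ((angstrom - ANG_PRIOR) / ANG_SIG) ** 2 \
         + (np.log(tau / TAU_PRIOR) / TAU_LNSIG) ** 2
```

The logarithmic term makes the aerosol prior symmetric in $\ln \tau$, matching the roughly log-normal optical depth distribution noted above.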
\subsubsection{Maximum-likelihood fitting}
\label{sec:bayesian-priors-and-fit}
The total $\chi^{2}$ of Eq.~\eqref{eq:multi-standard-2}, which includes
the Bayesian penalty function $\Psi^{2}$, is minimized. Consequently, the
resulting errors on the parameters are computed by inverting the
covariance matrix of the full function.
The different contributions to the resulting extinction fit for a
specific night are illustrated in Fig.~\ref{fig:comp-extinction}, along
with the instrument calibration. The noticeably larger scatter in the
blue arm can be explained by the PSF model used for point-source
extraction, which is presumably less accurate in the blue than in the
red because of a stronger chromaticity; this is even more evident for
the short-exposure PSF model used for bright standard stars, due to the
complex PSF in short exposures. Overall, increased errors in
point-source extracted spectra in the blue will contribute to a larger
scatter in the flux calibration residuals. Linear relations are fit
for three distinct wavelength bins from the same night in
Fig.~\ref{fig:extinction-airmass}. In this example the night is
slightly non-photometric and the RMS of the grey attenuation
distribution is of the order of 4\% (\cf Fig.~\ref{fig:extinction-alphas}
for a distribution of the $\delta T_{i}(\hat{z}, t)$ in airmass and in
time). Finally, the covariance map of all the adjusted parameters of the
night is presented in Fig.~\ref{fig:cov-matrix}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Extinction_07_253_alphas.pdf}
\caption{The grey extinction distribution, $\delta T(\hat{z}, t)$, for
a slightly non-photometric night (2010-06-28). The top panel shows
$\delta T(\hat{z}, t)$ as a function of airmass while the bottom
panel shows $\delta T(\hat{z}, t)$ as a function of time. Since the
night is non-photometric most of the scatter is due to clouds, but
there is some scatter due to achromatic extraction errors. Recall
that we normalize by the mean of $\delta T(\hat{z}, t)$ rather than
the largest (fewest clouds) value.}
\label{fig:extinction-alphas}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=.65\textwidth]{Covariance_10_179.pdf}
\caption{Resulting covariance matrix of the parameters for a given
non-photometric night (2010-06-28). The represented parameters are
the flux solution $C(\lambda)$ at every meta-slice wavelength, the
atmospheric \textit{normalized model parameters} $I_{\ensuremath{\mathrm{O_{3}}}\xspace}$, $\tau$
and $\aa$ as well as the grey extinction $\delta T$ per star. SDSS
filters u, g, r and i are also represented (black squares) over the
flux solution wavelengths.}
\label{fig:cov-matrix}
\end{figure*}
The 478~extinction curves computed using this method are presented in
Fig.~\ref{fig:extinction-variability}. Individual nights are plotted in
light grey whereas the median extinction, based on the median physical
parameters, is represented by the thick solid green line. The green band
represents the dispersion of the nightly extinction determinations.
Fig.~\ref{fig:ext-pars-correlation} shows the correlations
between all extinction parameters for both photometric and
non-photometric nights. No obvious correlations appear between the
parameters except for $\tau$ and $\aa$, which are
expected to be correlated since in combination they represent
the aerosol component of the extinction. This demonstrates the
independent behavior of the physical components.
\begin{figure*}
\centering
\includegraphics[width=.7\textwidth]{ExtinctionVariability.pdf}
\caption{Typical extinction and its variability over the 7~years of
the observation campaign. The superposition of all nightly
extinction curves (grey) is shown, along with the median Mauna Kea
extinction we derive (green).}
\label{fig:extinction-variability}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{extParams.pdf}
\caption{Correlations between extinction parameters for both
photometric (blue circles) and non-photometric nights (red
points). No strong correlations are seen between pressure, ozone, or
telluric \ensuremath{\mathrm{H_{2}O}}\xspace and \ensuremath{\mathrm{O_{2}}}\xspace. The aerosol parameters $\tau$ and $\aa$ are
correlated because they represent the same physical component. The
distributions of the parameter values (diagonal plots) are also
similar on photometric and non-photometric nights. Note that the
incidence of aerosol exponents near $\aa \sim 0$ is suppressed on
non-photometric nights since the grey extinction term can completely
compensate for this case.}
\label{fig:ext-pars-correlation}
\end{figure*}
\section{Accuracy and variability}
\label{sec:extinction-errors}
Having presented the fitting methodology and median results, we will now
discuss the accuracy of the results and what can be said about the
variability of the various physical components.
\subsection{Extinction accuracy}
\label{sec:computation-accuracy}
The 478~extinction error spectra are shown in
Fig.~\ref{fig:extinction-errors}. These curves represent our ability to
measure the extinction with the standard observations taken each night.
This is largely set by the number of standard stars and their airmass
range. These curves do not represent natural variations in the
extinction, which are addressed below. The mean error, displayed as a
green line in the figure, decreases from its maximal value of
20~mmag/airmass in the UV to 7~mmag/airmass in the red. This power-law
behavior can be explained by the fact that the aerosol component is the
main source of variability allowed in our model. A similar behavior is
observed for all the individual error spectra (light grey curves) with
various values of the power exponent. On nights with few standards or a
small airmass range, the errors are larger. On such nights, use of the
median extinction curve rather than the nightly curve should provide
better calibration.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{AtmosphericExtinctionError.pdf}
\caption{Error on the nightly extinction computations as a function of
wavelength (light grey lines). The mean error and RMS of the
extinction curves distribution are shown in green.}
\label{fig:extinction-errors}
\end{figure}
\subsection{Rayleigh variability}
\label{sec:rayleigh-variability}
The average Rayleigh extinction component is shown in
Fig.~\ref{fig:RayleighVariability}. During the course of the
observations the surface pressure at the summit of Mauna Kea showed a
peak-to-peak variation of 6~mbar around the average 616~mbar value. The
nMAD of the variation was 2~mbar, which corresponds to an extinction
variation of 2~mmag/airmass at the blue edge of our spectral range. The
peak-to-peak extinction variation would still be only 6~mmag/airmass.
These variations are negligible with respect to the aerosol or ozone
components and have no impact on the global extinction variability at
our required level of accuracy.
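Since Rayleigh extinction scales linearly with surface pressure, the numbers above follow from a one-line estimate (the blue-edge extinction value below is an illustrative figure consistent with the quoted 2~mmag/airmass):

```python
def rayleigh_variation(k_blue, p_mean, dp):
    """A fractional pressure change maps linearly onto Rayleigh
    extinction: dk = k * dp / p_mean  [mag/airmass]."""
    return k_blue * dp / p_mean

# ~0.6 mag/airmass at the blue edge, 2 mbar nMAD around 616 mbar:
# about 2 mmag/airmass of extinction variation.
```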
\subsection{Aerosol variability}
\label{sec:aerosol-variability}
The median aerosol component we derive is shown in
Fig.~\ref{fig:AerosolVariability}, superimposed on our aerosol
measurements from each night. Fortunately the median value is
very low, ranging from $\sim$40 down to $\sim$10~mmag/airmass over the
\scname{SNIFS}\xspace wavelength range. Even so, aerosol variations are the largest
contributor to the variability of the extinction continuum from one
night to another, and these variations are also shown in
Fig.~\ref{fig:AerosolVariability}. The combined fluctuations in $\tau$
and the \AA{}ngstr\"om\xspace exponent are responsible for variations that reach an
extreme of 0.4~mag/airmass at 3300~\AA{}. It should be noted that \scname{SNIFS}\xspace
is not operated when winds exceed 20~m/s, so there may be more extreme
aerosol excursions that our data do not sample.
\subsection{Ozone variability}
\label{sec:ozone-variability}
The mean ozone component we derive is shown in
Fig.~\ref{fig:OzoneVariability}, superimposed on our ozone measurements
from each night. Over the Big Island the mean is 260~DU, a value lower than
the world-wide mean. The maximal peak to peak ozone variability in one
year can reach 60 to 80~DU. Much of this variation is scatter around a
clear seasonal trend attributable to the Quasi-Biennial Oscillation
(QBO), which has a mean amplitude around 20~DU. Much of the remaining
variation is due to tropospheric winds \citep{steinbrecht03}. 20~DU is
only an 8\% variation, corresponding to 4~mmag/airmass for the ozone peak
at 6000~\AA{}. As we will show in \S~\ref{sec:ozone-mauna-loa}, this level
of variation is currently very difficult for us to detect. While this
level of uncertainty is unimportant for our science program, we could
better constrain ozone by using the \scname{SNIFS}\xspace standard star signal
available below 3200~\AA{}, where the Hartley \& Huggins band is much
stronger.
\begin{figure}
\centering
\subfloat{%
\label{fig:RayleighVariability}%
\includegraphics[width=\columnwidth]{RayleighVariability.pdf}}\\
\hspace{0mm}%
\subfloat{%
\label{fig:AerosolVariability}%
\includegraphics[width=\columnwidth]{AerosolVariability.pdf}}\\
\hspace{0mm}%
\subfloat{%
\label{fig:OzoneVariability}%
\includegraphics[width=\columnwidth]{OzoneVariability.pdf}}%
\caption{Nightly contributions of each physical component of the
continuum extinction, showing their mean value and variability
during the course of our observing campaign. All the nightly
Rayleigh extinction curves (in grey) are within the width of the
red median curve.}
\label{fig:components-variability}
\end{figure}
\subsection{Telluric line variability}
\label{sec:telluric-variability}
The strength of the \ensuremath{\mathrm{O_{2}}}\xspace and \ensuremath{\mathrm{H_{2}O}}\xspace telluric features is displayed in
Fig.~\ref{fig:telluric-intensity}. The fluctuations of the strength of
the \ensuremath{\mathrm{O_{2}}}\xspace lines are remarkably small. This is fortunate given the strength
of the \ensuremath{\mathrm{O_{2}}}\xspace A and B bands. On the other hand the strengths of the \ensuremath{\mathrm{H_{2}O}}\xspace
lines vary widely. For some nights our current practice of assuming a
fixed \ensuremath{\mathrm{H_{2}O}}\xspace strength for the entire night may be too simplistic. However,
Mauna Kea has less Precipitable Water Vapor than most ground-based
astronomical sites, and so far we have not encountered any serious
problems with this approximation. We explore this question further in
\S~\ref{sec:telluric-comparison}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{TelluricIntensity.pdf}
\caption{Distribution of the telluric strength over the \scname{SNfactory}\xspace data set for
each group of features. The strength represents the multiplicative
factor needed to scale the template to the observed telluric features, and
as such has no units.}
\label{fig:telluric-intensity}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{ExtinctionComparison.pdf}
\caption{Mean \scname{SNfactory}\xspace atmospheric extinction (solid line) and its
physical components (dashed lines). For comparison we overplot the
previous Mauna Kea extinction measures derived by \cite{Boulade87}
(diamonds), \cite{Beland88} (triangles) and \cite{Krisciunas87}
(stars).}
\label{fig:extinction-comparison}
\end{figure*}
\subsection{Short time variability}
\label{sec:short-variability}
Due to our limited temporal sampling during each night, and the 2--3\%
achromatic scatter, it is difficult for us to detect extinction
variability of less than 1\% over the course of a night. As noted above,
Rayleigh scattering variations are negligible, and ozone variations are
nearly negligible with most of the variation occurring on seasonal
timescales. Aerosol variations thus remain the primary concern for
extinction variations during the night.
It should be noted that the $\delta T(\hat{z}, t)$ term, used on
photometric and non-photometric nights alike, is degenerate with aerosol
extinction having an \AA{}ngstr\"om\xspace exponent of zero. Therefore, part of any
temporal aerosol variation will be absorbed.
The GONG site survey of Mauna Kea \citep{Hill94a, Hill94b} found
exceptional extinction stability. \cite{Mann11} have demonstrated
stability at the milli-magnitude level over the course of hours using
\scname{SNIFS}\xspace. Unfortunately, our extraction error is large enough (2 to 3\%)
to dominate our measurement of aerosol extinction variability (see
Fig.~\ref{fig:comp-extinction} for typical errors). We have nevertheless
compared the mean aerosol extinction in time windows at the beginning
and the end of the night, and find no indication of a strong aerosol
variation during the course of the night. Finally, the various metrics we used in
\S~\ref{sec:photometricity} to determine the photometric quality of each
night are sensitive enough to alert us to transmission changes greater
than several percent. If we are able to reduce the achromatic standard
star extraction uncertainty, our sensitivity to temporal variations
would also improve.
Turning to other sites, for Cerro Tololo in Chile \cite{Burke10} showed
that aerosol variations were rather small during a night. \cite{Burki95}
showed that for the La Silla site in Chile the total $U$-band extinction
is correlated over a period of several days and that over one night the
auto-correlation drops by only 5\%. This implies a typical $\sim 2$\%
variation in extinction between the beginning and the end of the night.
In our formalism the $\delta T(\hat{z}, t)$ term can be used to test for
such temporal trends on photometric nights. These sites have similar
altitudes (2.1~km and 2.3~km, respectively), with inversion layers
near their summits \citep{Burki95, Gallardo00}. Since more aerosol
variability might be expected for these sites than for Mauna Kea, this
further justifies our assumption that nightly variability of
aerosols is relatively unimportant for Mauna Kea.
\section{Comparison to external measurements}
\label{sec:comp-meteo}
As noted in the introduction, there have been previous measurements of
the optical extinction above Mauna Kea, and we begin this section with a
comparison of our results to those previous
studies. Fig.~\ref{fig:extinction-comparison} shows our median Mauna Kea
extinction curve along with the observed fluctuations. Previous spectroscopic
determinations, which covered only the blue half of the optical window, are
overplotted as diamonds \citep{Boulade87} and stars
\citep{Beland88}. The broadband filter extinction measurements from
\cite{Krisciunas87} are also shown (triangles). These external sources,
taken more than 20~years prior, show excellent agreement with our own
Mauna Kea extinction curve.
For further comparison we turn to atmospheric sciences data from Mauna
Loa Observatory\footnote{\url{http://www.esrl.noaa.gov/gmd/obop/mlo/}}
\citep[MLO,][]{Price59, Price63}, operated by the U.S. network
\emph{Earth System Research Laboratory} of the National Oceanic and
Atmospheric Administration (NOAA). MLO is situated on the side of Mauna
Loa mountain, 41~km south of Mauna Kea, at an altitude of 3400~m and
770~m below the summit. The observatory possesses specific
instrumentation for aerosol and ozone measurements and we are
particularly interested in comparing these measurements of ozone
strength, aerosol optical depth and \AA{}ngstr\"om\xspace exponent with our own
measurements obtained during nighttime on Mauna Kea.
\subsection{Ozone comparison}
\label{sec:ozone-mauna-loa}
The Total Column Ozone (TCO) in the region (\ie the amount of ozone in
a column above the site from the surface to the edge of the
atmosphere) is measured at the
MLO\footnote{\url{http://www.esrl.noaa.gov/gmd/ozwv/dobson/mlo.html}}
three times per day on weekdays using a Dobson
spectro-photometer \citep{Komhyr97}. The TCO comparison
between Mauna Loa and \scname{SNfactory}\xspace data is presented in
Fig.~\ref{fig:ozone-intensity}.
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{OzoneIntensity.pdf}
\caption{Total Column Ozone [Dobson units] from May 2004
to December 2011 for the \scname{SNfactory}\xspace quarterly weighted averages (blue
squares) and the Mauna Loa Observatory daily measurements (red
points). The error bars on the \scname{SNfactory}\xspace points represent the standard
errors on the quarterly weighted averages.}
\label{fig:ozone-intensity}
\end{figure}
The seasonal \ensuremath{\mathrm{O_{3}}}\xspace variability is well established by the Mauna Loa data,
with clear peaks corresponding to summer. The seasonal pattern observed
in the \scname{SNfactory}\xspace ozone variability is less obvious. This is due to our use of
a prior. The typical nightly measurement uncertainty on $I_{\ensuremath{\mathrm{O_{3}}}\xspace}$ is
around 60~DU without the prior. Given our prior with standard deviation
50~DU (see Table~\ref{tab:priors}), the fit will return a value of
$I_{\ensuremath{\mathrm{O_{3}}}\xspace}$ roughly midway between the true value and the mean of the
prior (260~DU). This bias suppresses the amplitude we measure for the
seasonal variation in ozone, but in the worst case leads to a
spectrophotometric bias below 0.3\%. If desired, this bias could be
further minimized by using the Mauna Loa values as the mean for the prior on each
night, or by first estimating a prior on a quarterly basis from our own
data using an initial run without the prior, or by measuring \ensuremath{\mathrm{O_{3}}}\xspace using
SNIFS coverage of the Hartley \& Huggins band below 3200~\AA.
Overall, the behavior of our ozone measurements averaged over a quarter
is consistent with the Mauna Loa measurements, although not as
accurate.
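The ``roughly midway'' behavior quoted above follows from the standard combination of a Gaussian measurement with a Gaussian prior: the fitted value is the inverse-variance weighted mean of the two. A sketch under the quoted uncertainties ($\approx 60$~DU per night versus the 50~DU prior width):

```python
def posterior_mean(mu_meas, sig_meas, mu_prior, sig_prior):
    """Inverse-variance weighted combination of a Gaussian measurement
    with a Gaussian prior (the shrinkage discussed in the text)."""
    w_m, w_p = 1.0 / sig_meas ** 2, 1.0 / sig_prior ** 2
    return (w_m * mu_meas + w_p * mu_prior) / (w_m + w_p)

# A true 300 DU night measured with a 60 DU error and a 260 +/- 50 DU
# prior is pulled to ~276 DU, roughly midway between truth and prior.
```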
\subsection{Aerosol comparison}
\label{sec:aerosol-mauna-loa}
The MLO ground-based aerosol data come from the \scname{AERONET}\xspace \citep[AErosol
RObotic NETwork,][]{Holben01, Smirnov01, Holben03}. \scname{AERONET}\xspace uses
wide-angle spectral measurements of solar and sky radiation.
For this reason, such aerosol measurements at Mauna Loa have only been
carried out during daytime, and usable data require reasonably good
weather conditions (no clouds). Figures~\ref{fig:angstrom-exponent} and
\ref{fig:optical-depth} show the comparisons between Mauna Loa data and
\scname{SNfactory}\xspace data for the \AA{}ngstr\"om\xspace exponent and the optical depth of the aerosol.
\begin{figure}
\centering
\subfloat{%
\label{fig:angstrom-exponent}%
\includegraphics[width=\columnwidth]{AerosolsAngstromExponant.pdf}}\\
\hspace{0mm}%
\subfloat{%
\label{fig:optical-depth}%
\includegraphics[width=\columnwidth]{AerosolsOpticalDepth.pdf}}%
\caption{Aerosol \AA{}ngstr\"om\xspace exponent
\protect\subref{fig:angstrom-exponent} and optical depth
\protect\subref{fig:optical-depth} distributions from May 2004 to
December 2010 for the \scname{SNfactory}\xspace (circles) and the Mauna Loa Observatory
(crosses) measurements. Only the common nights of both data sets are
presented. Mauna Loa aerosol data were not available for 2011 in
time for our analysis.}
\end{figure}
The \scname{SNfactory}\xspace \AA{}ngstr\"om\xspace exponent, $\aa$, from 2004 to 2011 is distributed between
$-2$ and 4. The fact that the Mauna Kea site is almost 1000~m higher
than MLO, and the fact that the \scname{AERONET}\xspace observations are carried out
during daytime, make a point-to-point comparison
difficult. Nevertheless, the general features of both distributions
still can be compared with each other: the mean \scname{SNfactory}\xspace \AA{}ngstr\"om\xspace exponent ($1.3
\pm 1.4$) is compatible with the one measured by \scname{AERONET}\xspace ($1.2 \pm
0.9$).
We see in Fig.~\ref{fig:optical-depth} that the \scname{SNfactory}\xspace aerosol optical
depth mean value is slightly smaller than the Mauna Loa mean value
(though the two are compatible), and the distribution does
not show the slight seasonal trend observed in the \scname{AERONET}\xspace data.
Nevertheless, we do expect more stable values, since our
measurements are made at night, far above the inversion layer, and the
mean optical depth value is compatible with aerosols from maritime
origins.
\subsection{Telluric absorption comparison}
\label{sec:telluric-comparison}
\cite{Patat11} showed that the water band equivalent width at
7200~\AA{} is well correlated with the Precipitable Water Vapor (PWV)
at the Paranal site in Chile. In order to check the consistency of our
approach, we compare our water intensity measurement to the PWV amount
at Mauna Kea. For that purpose, we used the 225~GHz optical depth
data from the Caltech Submillimeter Observatory \citep[CSO,][]{Peterson03} and
the empirical formula from \cite{Otarola10} to compute the PWV at Mauna
Kea during the \scname{SNfactory}\xspace observation campaign (\cf
Fig.~\ref{fig:telluric-PWV}). Plotting the computed telluric water
intensity (below 9000~\AA{}) with respect to the PWV (\cf
Fig.~\ref{fig:telluric-correlation}), we found that both quantities
were highly correlated, reinforcing the findings of \cite{Patat11} and
validating our water telluric correction approach. Furthermore, using an
orthogonal distance regression we find a saturation exponent of
$\rho=0.62\pm0.01$ from the best power law describing the distribution
(\cf red curve in Fig.~\ref{fig:telluric-correlation}). This is in
excellent agreement with the value $\rho_{\ensuremath{\mathrm{H_{2}O}}\xspace}=0.6$ determined from the
airmass dependence of $I_{\ensuremath{\mathrm{H_{2}O}}\xspace}$ in the \scname{SNfactory}\xspace dataset.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{PrecipitableWaterVapor.pdf}
\caption{Precipitable Water Vapor (PWV) above the Mauna Kea summit
during the \scname{SNfactory}\xspace observation campaign. The PWV is empirically
computed \citep{Otarola10} from the atmospheric optical depth at
225~GHz measured by the tipping radiometer at the Caltech
Submillimeter Observatory \citep{Peterson03}.}
\label{fig:telluric-PWV}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{PWVcorrelation.pdf}
\caption{Correlation between the intensity $I_{\ensuremath{\mathrm{H_{2}O}}\xspace}$ of the water
telluric line computed from the \scname{SNfactory}\xspace data and the Precipitable
Water Vapor gathered from the Caltech Submillimeter
Observatory. The red line represents the best power law fit of the
distribution with an exponent value of 0.62 --- close to the water
saturation parameter of 0.6 used in our procedure.}
\label{fig:telluric-correlation}
\end{figure}
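The saturation exponent quoted above comes from a power-law fit $I_{\ensuremath{\mathrm{H_{2}O}}\xspace} = A \cdot \mathrm{PWV}^{\rho}$. A simplified sketch using plain least squares in log-log space (the fit in the text uses orthogonal distance regression, which also accounts for errors on the PWV axis):

```python
import numpy as np

def fit_saturation_exponent(pwv, intensity):
    """Fit I = A * PWV**rho by linear least squares in log-log space;
    a stand-in for the orthogonal distance regression used in the text."""
    rho, logA = np.polyfit(np.log(pwv), np.log(intensity), 1)
    return np.exp(logA), rho
```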
\section{Revisiting standard assumptions}
\label{sec:discussion}
A major strength of our analysis comes from the homogeneity and temporal sampling of the \scname{SNfactory}\xspace data
set. It consists of a large number of spectro-photometric standard star
spectra observed, processed and analyzed in a uniform way. In
addition, the sampling cadence is frequent and covers a
long period of time. Observations were taken under a wide range of atmospheric conditions,
including drastically non-photometric nights, which allows us to test
many of the standard assumptions used in spectrophotometry, some of
which we employed in \S~{\ref{sec:formalism}}. In particular, the range
of conditions allows us to check our atmospheric model under conditions
of strong extinction by clouds, something that has not been done before
with spectroscopy simultaneously covering the full ground-based optical
window.
\subsection{Grey extinction}
\label{sec:discuss-grey}
Using the software package \emph{Optical Properties of Aerosols and
Clouds} \citep[OPAC,][]{Hess98}, we simulated $\sim5$~mag of
extinction by clouds. While useful observations under such conditions
would be unlikely, this large opacity makes it possible to detect any
non-grey aspect of cloud extinction. We analyzed three types of cloud
(cumulus, stratus and cirrus) in a maritime environment using the
standard characteristics from OPAC. The results, as shown in
Fig.~\ref{fig:clouds-opac}, demonstrate that extinction from cumulus
and stratus does exhibit a small trend in wavelength. But even for such
a strong extinction of 5~mag/airmass, the change in transmission from
3200~\AA{} to 10000~\AA{} is below 3\%. Cirrus with less than 1~mag
of extinction is the most common cloud environment that still allows
useful observing, and Fig.~\ref{fig:clouds-opac} demonstrates that
such clouds would be grey to much better than 1\%.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{CumulusExtinction.pdf}
\caption{Simulations using OPAC of the wavelength dependence for
typical clouds (in maritime environment) through very strong ---
$\sim5$ magnitudes --- of extinction. Even with such strong
extinction, the trend from 3200~\AA{} to 9700~\AA{} is of the order
of 3\% for cumulus and stratus and negligible for cirrus.}
\label{fig:clouds-opac}
\end{figure}
Although atmospheric conditions such as these are usually avoided as
inappropriate for observations, we wanted to check whether or not such
an effect was visible in our data. Therefore, we gathered
11~observations (\cf Fig.~\ref{fig:clouds-extinction}) of the standard
star GD71 affected by different levels of cloud extinction (from 0.7
to 4.5~mag/airmass). We calibrated these spectra without introducing
our grey extinction correction factor, $\delta T$. We then performed a
$\chi^{2}$ test that showed that the resulting extinction curves are
compatible with a constant to better than 1\% across the full
wavelength coverage of \scname{SNIFS}\xspace.
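The constancy test amounts to comparing each wavelength-dependent transmission ratio against its best-fit constant value; a minimal sketch (names and inputs illustrative):

```python
import numpy as np

def chi2_constant(ratio, sigma):
    """Chi^2 of a transmission ratio against its best-fit constant
    (grey) value; the weighted mean is the best-fit constant."""
    w = 1.0 / sigma ** 2
    grey = np.sum(w * ratio) / np.sum(w)
    return np.sum(w * (ratio - grey) ** 2), grey
```

A reduced $\chi^{2}$ near one then indicates the ratio is compatible with grey extinction within the errors.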
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{CloudsGreyExtinction.pdf}
\caption{\scname{SNfactory}\xspace observations of the standard star GD71 under various
levels of cloud extinction during non-photometric nights. The top
panel shows the flux-calibrated spectra from both spectroscopy
channels, without grey extinction correction, compared with the
standard star reference template (green). The middle panel shows
the additional extinction $\delta T(\lambda, \hat{z}, t)$. The
bottom panel shows the wavelength dependence of the spectrum-to-reference
ratios. It is notable that even with strong extinction by
clouds the transmission is compatible with grey extinction ($\delta
T(\lambda, \hat{z}, t) = \delta T(\hat{z}, t)$) at the $\sim1$\%
level (grey band).}
\label{fig:clouds-extinction}
\end{figure}
Our findings using spectroscopy agree with theoretical expectations
and the OPAC clouds models. They also agree with the observational
analysis by \cite{Ivezic07} based on repeated broadband filter images
of the same \emph{Sloan Digital Sky Survey} (SDSS) field, which show
only a very weak dependence of the photometric residuals with color,
even through several magnitudes of extinction by clouds, and even when
comparing the flux in the $U$ and $Z$ bands. Similarly \cite{Burke10}
find little sign of color variation from $g-r$, or separately $r-i$,
$i-z$ and $z-y$, colors synthesized from their spectroscopic
measurements.
As a final check, we can compare the median extinction curves from
photometric and non-photometric nights. Fig.~\ref{fig:P_vs_NP} shows
the difference of these two extinction curves. The agreement is much
better than 1\% over the full optical window. The smooth trend is due to
the aerosol component of our extinction model, and may be a hint either
of slight coloring due to clouds or a small difference in the aerosol
size distribution with and without clouds. A small residual due to ozone
is also apparent.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Phot_vs_NonPhot.pdf}
\caption{The difference, in [mag/airmass], between the median
extinction measured on photometric nights and the median extinction
measured on non-photometric nights.}
\label{fig:P_vs_NP}
\end{figure}
Altogether, we confirm that treating cloud extinction as grey
remains valid at the 1\% level or better.
\subsection{Stability of the instrument}
\label{sec:stability-instrument}
In \S~\ref{sec:formalism} we assumed that the instrument calibration,
$C(\lambda, t)$, would be stable over the course of a night. Here we
provide further justification for this assumption.
To begin, we note that there are several possible sources that could
lead to changes in the instrument calibration. Dust accumulation on
exposed optics alone can amount to 1~mmag/day per surface,
gradually reducing the reflectivity of the telescope mirror and
transmission of the spectrograph entrance window. In our case, the
telescope tube is closed and the observatory dome is unvented, so dust
accumulation on the telescope optics is minimized. The telescope mirrors
are regularly cleaned with CO$_{2}$ ``snow'' and realuminized
occasionally.
Because the spectrograph optics are enclosed, their transmission is
expected to be stable (except for the dichroic beam-splitter which shows
changes with humidity --- an effect corrected in our pipeline). The
quantum efficiency of the detectors will depend on the cold-head
temperature at the very reddest wavelengths, where phonons can assist an
electron into the conduction band. The electronics could experience
drift, though our gain tests using X-rays in the lab and photon transfer
curves \emph{in situ} do not show this.
To experimentally address this issue we have measured the stability of
$C(\lambda, t)$ over the course of several nights under photometric
conditions. In this test $C(\lambda, t)$ was found to be consistent
from one night to another at better than the percent level using the
method developed in \S~\ref{sec:multi-stand-approach}.
In a separate test we compared $C(\lambda, t)$ using 23~standard stars
observed throughout the same highly non-photometric night. When
divided by their mean, we find that the RMS of the distribution is
below the percent level. In this case, because we allow for clouds,
only chromatic effects can be tested.
Fig.~\ref{fig:flux-solution-stability} shows the excellent agreement
throughout the night.
Of course it is important to keep in mind that major events, such as
cleaning or realuminization of the mirror, or instrument repairs, can
change the value of $C(\lambda, t)$ significantly. The desire to avoid
the need to explicitly account for such events, and the good nightly
consistency demonstrated above, justifies our choice to assume that
$C(\lambda, t) = C(\lambda)$ \emph{during} a night, while enabling it to change
from one night to another.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{FluxSolutionStability.pdf}
\caption{Instrument calibrations, $C(\lambda)$, for 23~observations of
GD71 from the same highly non-photometric night (top
panel). Relative error of each instrument calibration with respect
to the mean value (bottom panel). The mean RMS over the whole
wavelength range is $\sim0.007$, and the features are due to spectral
resolution mismatches between the spectra and their reference. The
poorer accuracy due to the water telluric band in the region redward
of 8900~\AA{} is noticeable.}
\label{fig:flux-solution-stability}
\end{figure}
\subsection{Line of sight dependency}
\label{sec:line-sight}
One assumption that we have not directly tested is the dependence, or
lack thereof, of extinction on the viewing direction. Rayleigh and ozone
extinction variations across the sky will be negligible. Aerosols,
transported by winds, can be variable and so could vary across the
sky. Due to the very low level of aerosols typically encountered over
Mauna Kea, detection of such aerosol variations would require an
intensive campaign with \scname{SNIFS}\xspace, or better yet, with a LIDAR system. The
very low rates of change in aerosol optical depth found in the \scname{AERONET}\xspace
data for nearby Mauna Loa \citep[\eg][]{Stubbs07} suggest that
variations across the sky will be small or rare. Even for Cerro Tololo,
a site at significantly lower altitude and surrounded by dry mountains
(the Andes foothills) rather than ocean, \citet{Burke10} find little
evidence for aerosol gradients across the sky. Thus, while we have not
been able to directly test the assumption of sightline independence for
our extinction measurements, any residual effects are likely to be
small. On non-photometric nights the achromatic portion of any sightline
dependence would be absorbed into the grey extinction term. In any
event, any such spatio-temporal variations will be averaged away across
our large dataset.
\section{Conclusions}
\label{sec:conclusions}
We have derived the first fully spectroscopic extinction curve for Mauna
Kea that spans the entire optical window and samples hundreds of
nights. The median parameters, tabulated in
Table~\ref{tab:final-params}, and the median curve, tabulated in
Table~\ref{tab:atmospheric_extinction}, can be used to correct past and
future Mauna Kea data for atmospheric extinction. Comparison of our
median extinction curve shows good agreement in regions of overlap with
previously published measurements of Mauna Kea atmospheric extinction
\citep{Boulade87, Krisciunas87, Beland88} even though the measurements
are separated by roughly 20~years.
Our large dataset of 4285~spectra of standard stars collected over
7~years means that a wide range of conditions has been sampled. Using
this, we estimate the per-night variations on the extinction curve (see
Table~\ref{tab:atmospheric_extinction}), and this can be employed by
others to estimate the uncertainty in their data when using the median
extinction curve rather than deriving their own nightly flux
calibration. This is especially important for Mauna Kea due to the
presence of several of the world's largest telescopes, for which time
spent on calibration is often at odds with obtaining deeper
observations, and where calibration to a few percent is often deemed
adequate by necessity.
The method for measuring the extinction that we have introduced in this
paper has several notable features. First, it borrows techniques long
used in the atmospheric sciences to decompose the extinction into its
physical components. This allows robust interpolation over wavelengths
affected by features in the spectra of the standard stars, and it
accounts for even weak extinction correlations across wavelengths that
would otherwise be hard to detect. We have also utilized a method of dealing
with clouds by which the non-cloud component of the extinction still can
be measured. This enables chromatic flux calibration even through
clouds, and with an external estimate of the cloud component alone (such
as through the \scname{SNIFS}\xspace Multi-filter ratios) allows accurate flux
calibration even in non-photometric conditions.
Because of these desirable properties, we encourage the broader use of
this technique for flux calibration in astronomy. Furthermore, we note
that this parametric template is especially easy to use for simulating
the flux calibration requirements of future surveys (the atmospheric
extinction computation code is available from the \scname{SNfactory}\xspace software web site
\url{http://snfactory.in2p3.fr/soft/atmosphericExtinction/}).
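As an illustration of how easily such a parametric component can be evaluated, the aerosol term alone reduces to an \AA{}ngstr\"om power law in wavelength. The sketch below is ours, not the SNfactory code linked above; the 1000~nm reference wavelength and the optical-depth-to-magnitude factor $2.5\log_{10}e$ are assumptions of this sketch, with the median $\tau$ and \AA{}ngstr\"om exponent taken from Table~\ref{tab:final-params}.

```python
import math

def aerosol_extinction(wavelength_nm, tau=0.0084, angstrom_exp=1.26,
                       ref_wavelength_nm=1000.0):
    """Aerosol extinction in mag/airmass from an Angstrom power law.

    tau and angstrom_exp default to the median values of
    Table tab:final-params; the 1000 nm reference wavelength is an
    assumption of this sketch, not taken from the paper.
    """
    optical_depth = tau * (wavelength_nm / ref_wavelength_nm) ** (-angstrom_exp)
    # optical depth -> magnitudes per airmass: 2.5 log10(e) ~ 1.086
    return 2.5 * math.log10(math.e) * optical_depth
```

With these defaults the extinction is larger in the blue than in the red, as expected for aerosols.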
Finally, we have used our large homogeneous dataset to examine several
standard assumptions commonly made in astronomical flux calibration,
such as the greyness of clouds, and we have confirmed the stability of
our instrument. These assumptions are difficult for most astronomers to
verify with their own data, which usually span just a few nights. We
draw attention to the fact that the altitude of Mauna Kea and the
generally strong inversion layer sitting well below the summit help
minimize the impact of aerosols and water vapor. These features are
often neglected relative to metrics related to image quality, but they
also help make Mauna Kea an excellent site for experiments requiring
accurate flux calibration.
\begin{table}
\caption{Summary of the median values for the various adjusted
  physical quantities. For $\delta T$, which represents the cloud
  extinction, the absolute mean value is meaningless (as explained in
  \S~\ref{sec:priors}); only its variability is of interest, as an
  estimate of the grey fluctuations during non-photometric
  conditions.}
\label{tab:final-params}
\centering
\begin{tabular}{cc@{ $\pm$ }cl}
\hline
\hline
Parameter & Median & nMAD & Description \\
\hline
$\delta T$ & 1.04 & 0.34 & grey extinction (mean \& RMS) \\
$I_{\ensuremath{\mathrm{O_{3}}}\xspace}$ & 257.4 & 23.3 & \ensuremath{\mathrm{O_{3}}}\xspace intensity \\
$\tau$ & 0.0084 & 0.0014 & aerosol optical depth \\
$\aa$ & 1.26 & 1.33 & aerosol \AA{}ngstr\"om\xspace exponent \\
$\rho_{\ensuremath{\mathrm{O_{2}}}\xspace}$ & 0.58 & 0.04 & telluric \ensuremath{\mathrm{O_{2}}}\xspace saturation \\
$\rho_{\ensuremath{\mathrm{H_{2}O}}\xspace}$ & 0.60 & 0.27 & telluric \ensuremath{\mathrm{H_{2}O}}\xspace saturation \\
\hline
\end{tabular}
\end{table}
\begin{acknowledgements}
We are grateful to the technical and scientific staff of the
University of Hawaii 2.2-meter telescope for their assistance in
obtaining these data. D.~Birchall assisted with acquisition of the
data presented here. We also thank the people of Hawaii for access to
Mauna Kea. This work was supported in France by CNRS/IN2P3,
CNRS/INSU, CNRS/PNC, and used the resources of the IN2P3 computer
center. This work was supported by the DFG through TRR33 ``The Dark
Universe'', and by National Natural Science Foundation of China (grant
10903010). C. WU acknowledges support from the National Natural
Science Foundation of China grant 10903010. This work was also
supported in part by the Director, Office of Science, Office of High
Energy and Nuclear Physics and the Office of Advanced Scientific
Computing Research, of the U.S. Department of Energy (DOE) under
Contract Nos. DE-FG02-92ER40704, DE-AC02-05CH11231,
DE-FG02-06ER06-04, and DE-AC02-05CH11231; by a grant from the Gordon
\& Betty Moore Foundation; by National Science Foundation Grant Nos.
AST-0407297 (QUEST), and 0087344 \& 0426879 (HPWREN); by a Henri
Chr\'etien International Research Grant administrated by the American
Astronomical Society; the France-Berkeley Fund; by an Explora'Doc
Grant by the R\'egion Rh\^one-Alpes. We thank the AERONET principal
investigator Brent Holben and his staff for their efforts in establishing
and maintaining the AERONET sites. The Caltech Submillimeter
Observatory is operated by the California Institute of Technology
under cooperative agreement with the National Science Foundation
(AST-0838261).
\end{acknowledgements}
\bibliographystyle{aa}
<selector xmlns:android="http://schemas.android.com/apk/res/android">
<item android:drawable="@drawable/navigation" android:state_pressed="false"/>
<item android:drawable="@drawable/navigation2" android:state_pressed="true"/>
<item android:drawable="@drawable/navigation2" android:state_focused="true"/>
</selector>
require 'capybara'
require 'capybara/rspec'
# require 'capybara/poltergeist'
# require 'capybara/webkit'
# https://github.com/jnicklas/capybara
# Neither of these headless drivers works, for different reasons; it would
# be great to get them working.
# Poltergeist uses PhantomJS: https://github.com/teampoltergeist/poltergeist
# Capybara.javascript_driver = :poltergeist
# Headless WebKit driver using Qt: https://github.com/thoughtbot/capybara-webkit
# Capybara.javascript_driver = :webkit
Capybara.app_host = 'http://127.0.0.1:8080'
if (ENV['SAUCE'])
puts 'Running tests on Saucelabs...'
require 'sauce_helper'
elsif (ENV['FIREFOX'])
# Nothing to do; Firefox is the default driver.
else
Capybara.register_driver :selenium_chrome do |app|
Capybara::Selenium::Driver.new(app, :browser => :chrome)
end
Capybara.javascript_driver = :selenium_chrome
end
# Capybara.default_wait_time = 10
\section{Introduction}\label{intro}
From the early days of the black hole information paradox it has been suggested that, in order to bring together quantum mechanics and the equivalence principle consistently, we have to give up locality and the semiclassical description in the near horizon region. Key ingredients in the framework, that is likely to replace them, include the principle of holography~\cite{thooftBHQM,preskillbhInfo}, complementarity~\cite{susskindbhComp} and strongly chaotic dynamics~\cite{susskindscramble}.
Recent discussions on the nature and the dynamics of microscopic degrees of freedom in the near horizon region of black holes have also strengthened the conviction that semiclassical physics is inadequate to guarantee the compatibility of the equivalence principle and quantum mechanics. Therefore it seems imperative that a drastic departure from conventional, semiclassical, physics is needed~\cite{AMPSS}.
We should stress here that recent developments in string theory~\cite{Sen12,Sen}, which take into account the correct definition of the black hole entropy at the quantum level, have improved considerably our understanding of the black hole microscopic degrees of freedom. We understand now some important quantum statistical properties, in particular, the exact black hole quantum entropy, for a certain class of extremal black holes.
Important issues have remained open, which dominate the recent literature. They pertain to the description of the nature and dynamics of the near horizon microstates.
In what follows we consider the simplest dynamical context for the discussion of the black hole information paradox \cite{Susskind-Lindesay}.
Consider two observers, one at infinity, $O_a$ and the other in free fall, $O_b$ into the black hole. The question is, if there exists a unitary transformation, connecting the description of a freely falling particle near the horizon by the two observers.
In the seminal paper~\cite{dAFF} the corresponding quantum mechanical evolution operators have been constructed, by two, different, conformal invariant, Hamiltonians.
The ``first'' Hamiltonian, which, in our case, describes the time evolution for observer $O_a$, has continuous spectrum, while the ``second'' Hamiltonian, which describes the time evolution for observer $O_b$, in our case, has discrete spectrum.
There has been extensive discussion in the literature, how these two descriptions can be related to the paradox between the infinite number of states near the horizon thus described and the finite entropy of the black hole, which is proportional to its area~\cite{Kallosh,Strominger,townsendetal}. Though developments in string theory~(reviewed in \cite{Sen12}), around the same time, succeeded in resolving the problem of counting these states and of reproducing the thermodynamical result for the black hole entropy, how to extend this calculation for the case of a ``small'' number of microstates, i.e. for ``small'' black holes, remains open~\cite{Sen12}.
The information paradox, in the case of extremal black holes, which have an AdS$_2$ radial and temporal, near horizon, geometry, is expressed by the putative mismatch between the description accessible to observer, $O_a$, on the boundary, using a CFT$_1$ dynamics and that of the, free--falling, observer, $O_b$, who is using the bulk, AdS$_2$, dynamics. The resolution of this paradox is currently the subject of intense research activity~\cite{paradoxADSCFT}.
In the present and subsequent works we study the above issues using a space-time discretization of the near horizon geometry, $\mathrm{AdS}_2 =SL(2,\mathbb{R})/SO(1,1,\mathbb{R})$, of extremal black holes.
We assume it to be discrete and finite as a consequence of the finite dimension of the Hilbert space of black hole microstates. Indeed, the existence of a finite number of linearly independent wavefunctions as probes implies a finite resolution of its spacetime geometry.
This is achieved by replacing the set of real numbers $\mathbb{R}$ by the set of integers modulo $N$, $\mathbb{Z}_N$, for any positive integer $N$. The discretization thus replaces the continuous spacetime AdS$_2$ by the finite arithmetic geometry, AdS$_2[N] =SL(2,\mathbb{Z}_N)/SO(1,1,\mathbb{Z}_N)$~\cite{terras,finitegeometry}.
This discretization, which we call ``modular discretization'', has the merit of preserving corresponding symmetry properties of the continuous, classical, space-time geometry and provides a means of defining a consistent quantum geometry, through holography. In this discretized setting we will construct the corresponding unitary evolution operators for the two observers.
The discretization defines an infrared cutoff $L$ as well as an ultraviolet one, $\frac{L}{N}$.
We obtain a discretized spacetime by lifting to AdS$_{2}$ the $L\times L$ square light cone lattice by stereographic projection. The continuum limit can be recovered by taking first the $N\to \infty$ limit at fixed $L$ and, afterwards, the limit $L \to \infty$.
It is important to stress at this point the independence of the cutoff, $L$
from the AdS$_2$ radius $R_{\mathrm{AdS}_2}$.
In order to describe the dynamics of probes, at both the classical and quantum level, we use the Arnol'd cat maps ${\sf A}$, which are elements of $SL(2,\mathbb{Z}_N)$.
They are known to possess properties of strong arithmetic chaos, ergodic mixing and non-locality~\cite{Arnold,Vivaldi}.
These maps also satisfy factorization properties in the discretization cutoff $N$, which induce fast quantum information processing between the probe and the near horizon geometry~\cite{fastqmaps,entangledfqm}.
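At the classical level, the action of such a map on the discrete torus $\mathbb{Z}_N\times\mathbb{Z}_N$ is elementary to sketch. The matrix below is the standard Arnol'd cat map, chosen purely for illustration, and the finiteness of the phase space makes every orbit periodic:

```python
def cat_map(point, N, A=((1, 1), (1, 2))):
    """One step of the cat map A in SL(2, Z_N) on the torus Z_N x Z_N.

    The default A is the standard Arnol'd cat map (det A = 1 mod N),
    chosen here purely for illustration.
    """
    (a, b), (c, d) = A
    q, p = point
    return ((a * q + b * p) % N, (c * q + d * p) % N)

def orbit_period(point, N):
    """Length of the periodic orbit of `point`: finite because the
    phase space has N**2 points and the map is invertible."""
    x, steps = cat_map(point, N), 1
    while x != point:
        x, steps = cat_map(x, N), steps + 1
    return steps
```

For example, with $N=5$ the point $(1,0)$ returns to itself after 10 steps.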
Our present work builds on our earlier work on Finite Quantum Mechanics (FQM)~\cite{floratos89}. Therein we introduced the discretized toroidal membrane $\mathbb{Z}_N\times \mathbb{Z}_N$ as a tool for studying the
Matrix model truncation of the membrane dynamics~\cite{BFSS}. It renders the discrete membrane as a quantum phase space of finite quantum mechanics, which possesses the canonical transformation group $SL(2,\mathbb{Z}_N)$~\cite{Berry,Ford,balian_itzykson,athanasiu_floratos,afnholo}.
Interestingly enough, the discretized membrane, in the black hole setting, describes the geometry of the stretched horizon~\cite{Susskind-Lindesay} and the Matrix model describes the M--theoretic dynamics of its microscopic degrees of freedom~\cite{susskindscramble}.
We extend these results to the case of the AdS$_2[N],$
discrete, near horizon, bulk geometry, and the dynamics of the infalling observer, $O_b$,
along with its associated boundary CFT$_1[N]$ and the observer $O_a$ in order to obtain a holographic correspondence. In the discrete case the boundary is constructed as a coset space $SL(2,\mathbb{Z}_N)/\mathfrak{B}_N$, which is identified with the discrete projective line, $\mathbb{RP}_N^1$.
Here $\mathfrak{B}_N$ is the Borel subgroup of $SL(2,\mathbb{Z}_N)$, which fixes the point at infinity.
The group $SL(2,\mathbb{Z}_N)$, in the present context, plays three different roles:
a) as the isometry group of AdS$_2[N]$, b) as the symplectic group of
AdS$_2[N]$, which is considered to be a (stringy) phase space, and
c) as the conformal group of the boundary. Properties (a) and (c) are the basic reasons for the existence of the AdS$_2[N]$/CFT$_1[N]$ correspondence and (b) will be used for the quantization of both the geometry (states) and the dynamics (evolution operator).
We construct the discrete time evolution, quantum unitary maps explicitly and discuss their action
on the common $N-$dimensional Hilbert space of both the bulk and the boundary.
The natural action of these quantum maps is realized on the set of coherent states, appropriate for the bulk and boundary coset geometries.
These states inherit classical chaotic dynamics, define isometric invariant (bulk-bulk, bulk-boundary and boundary-boundary) propagators and are convenient for the discussion of localization problems
in the AdS/CFT correspondence, since they saturate the uncertainty relation
of the UV/IR connection ~\cite{adscft}.
The plan of the present work is as follows:
In section~\ref{LCWeyl} we review
the construction of the smooth AdS$_2$ geometry and its boundary, as a doubly ruled surface by rotating the light
cone lines around the time-like circle. We establish various global coordinate
systems, through appropriate coset parametrizations.
More specifically we show that the light cone coordinates, on the stereographic projection plane,
parametrize holographically both the bulk and its boundary.
In order to describe the high energy dynamics of the radial motion of probes, we employ linear isometry maps, ${\sf A}\in SL(2,\mathbb{R})$, which are appropriate for the description of the infalling (bulk) and static (boundary) observers.
In section~\ref{HorModN} we motivate the introduction of the arithmetic discretization $\mathrm{mod}\,N$. We define the Finite Quantum Mechanics for both the bulk and the boundary on the same Hilbert space.
We shall work in the Hilbert space of the metaplectic representation of $SL(2,\mathbb{Z}_N)$ of dimension $N$, for the simplest case $N=p$ an odd prime. In this case $\mathbb{Z}_{N}=\mathbb{F}_p$ is the simplest Galois field.
The methods to be presented apply also for all other irreps of this group.
In the case $N=p$, an odd prime, the number of irreps is $p+4$. They have been worked out in detail, using the method of induced representations; they correspond to the multiplicative characters of $\mathbb{F}_p$ or $\mathbb{F}_{p^2}$~\cite{Silberger}.
The boundary is also constructed as a coset space $SL(2,\mathbb{F}_p)/\mathfrak{B}_p$, which is identified with the discrete projective line, $\mathbb{RP}_p^1$. Here $\mathfrak{B}_p$ is the Borel subgroup of $SL(2,\mathbb{F}_p)$, which fixes the point at infinity.
In section~\ref{CohStates} we explicitly construct the bulk and the boundary overcomplete set of discrete coherent states. We discuss their basic properties as they are appropriate for the corresponding coset geometries.
The states and the observables are also expanded on the coherent states and their time evolution is defined through the quantum cat maps.
The correlation functions of various observables are defined, as well as the method of their evaluation.
Finally, in section~\ref{Hologcat}, we exhibit the reconstruction (holography) of the bulk coherent states from those of the boundary, via the
bulk-boundary, bulk-bulk, boundary-boundary propagators and the
consequent reconstruction of the scalar bulk observables from the boundary ones. The correlation functions of scalar observables in the bulk and the boundary are connected through this holography which can be explicitly calculated.
In the last section~\ref{Concl} we summarize our results and their role in the context of the problem of black hole information processing. We also comment on future work on the complete description of finite conformal quantum mechanics on the boundary as well as on how the scrambling time bound might be saturated~\cite{susskindscramble}.
\section{Observers, geometry of cosets and Weyl dynamics on AdS$_2$ }\label{LCWeyl}
Consider the dynamics of freely falling bodies, in the near horizon region of spherically symmetric 4d extremal black holes. The geometry is known to be of the form
AdS$_2\times S^2$, where the AdS$_2 =SL(2,\mathbb{R})/SO(1,1,\mathbb{R})$, factor describes the geometry of the radial and time coordinates
and $S^2$ is the horizon surface.
We will compare the descriptions of high energy radial dynamics as seen by radial observers (static or freely falling), for which the transverse and longitudinal motions decouple.
To each of these observers corresponds a global space-time coordinate system and in the following we shall exhibit some of them using
group theory.
The AdS$_2$ spacetime, is a one-sheeted hyperboloid defined through its
global embedding in Minkowski spacetime with one space-- and two time--like
dimensions, ${\mathscr M}^{1,2}$, by the equation~\cite{Gibbons,Bengtsson}
\begin{equation}
\label{AdS2_M21}
x_0^2 + x_1^2 - x_2^2 = 1
\end{equation}
The boundaries of AdS$_2$ consist of two time--like disconnected circles, where
AdS$_2$ approaches asymptotically the light cone of ${\mathscr M}^{1,2}$
\begin{equation}
\label{M21_LC}
x_0^2 + x_1^2 - x_2^2 = 0
\end{equation}
AdS$_2$ is at the same time the homogeneous space
$SO(1,2)/SO(1,1)$. This case
is special in that $SO(1,2)$ has a double cover, $SL(2,\mathbb{R}),$ so we have $\mathrm{AdS}_2 =
SL(2,\mathbb{R})/SO(1,1)$.
In order to establish our notation and conventions, we proceed with the Weyl construction of the
double covering group, $SL(2,\mathbb{R})$.
To every point $x_\mu\in\mathrm{AdS}_2$, $\mu=0,1,2$, we assign the
traceless and real, $2\times 2$ matrix
\begin{equation}
\label{Weyl}
{\sf M}(x)\equiv\left(\begin{array}{cc} x_0 & x_1+x_2 \\x_1-x_2 & -x_0\end{array}\right)
\end{equation}
Its determinant is
$\mathrm{det}\,{\sf M}(x)=-x_0^2-x_1^2+x_2^2=-1$.
The action of any ${\sf A}\in SL(2,\mathbb{R})$ on AdS$_2$ is defined through the non-linear
mapping
\begin{equation}
\label{Weyl_mapping}
{\sf M}(x') = {\sf A}{\sf M}(x){\sf A}^{-1}
\end{equation}
This induces an $SO(1,2)$
transformation on $(x_\mu)_{\mu=0,1,2}$,
\begin{equation}
\label{induced_transf}
x' \equiv {\sf L}({\sf A}) x
\end{equation}
Choosing as the origin of coordinates the base point $\bm{p}\equiv (1,0,0)$, its
stability group $SO(1,1)$ is the group of Lorentz transformations in the
$x_0=0$ plane of ${\mathscr M}^{1,2}$ or equivalently, the ``scaling''
subgroup ${\sf D}$ of $SL(2,\mathbb{R})$
\begin{equation}
\label{scaling}
{\sf D}\ni {\sf S}(\lambda)\equiv \left(\begin{array}{cc} \lambda & 0 \\ 0 & \lambda^{-1}\end{array}\right)
\end{equation}
for $\lambda\in\mathbb{R}^\ast$.
For this choice of the stability point, we define the coset $h_{\sf A}$ by decomposing ${\sf A}$ as
\begin{equation}
\label{Acoset}
{\sf A} = h_{\sf A}{\sf S}(\lambda_{\sf A})
\end{equation}
Thus, we associate uniquely to every point $x\in\mathrm{AdS}_2$
the corresponding coset representative $h_{\sf A}(x)$.
We introduce now the global coordinate system defined by the
straight lines that generate AdS$_2$ and for which it can be checked easily
that they form its complete set of light cones.
Consider the two lines,
$\bm{l}_\pm(\bm{p})$, passing through the point $\bm{p}\in{\mathscr M}^{1,2}$
orthogonal to the $x_0$ axis and at angles $\pm\pi/4$ to the $x_1=0$ plane. They are defined by the intersection of AdS$_2$ and the plane $x_0=1$~cf.~fig.~\ref{LCAdS2}.
The coordinates of any point, $\bm{q}_+\in\bm{l}_+(\bm{p})$ and
$\bm{q}_-\in\bm{l}_-(\bm{p})$ are given by $(1,\mu_\pm,\pm\mu_\pm)$,
$\mu_\pm\in\mathbb{R}$ respectively.
Rotating these lines around the $(x_0,x_1)$ time circle by appropriate
angles $\phi_\pm\in[0,2\pi)$, we can parametrize any point by their
intersection, with coordinates
\begin{equation}
\label{new_points}
\begin{array}{l}
\displaystyle
x_0 = \cos\phi_\pm -\mu_\pm\sin\phi_\pm\\
\displaystyle
x_1 = \sin\phi_\pm +\mu_\pm\cos\phi_\pm\\
\displaystyle
x_2 = \pm\mu_\pm
\end{array}
\end{equation}
The corresponding pair of crossing lines, $\bm{l}_\pm(\bm{x})$, defines the local light cone.
Another form of the previous equation is:
\begin{equation}
\label{inverse_mapping}
\begin{array}{ccc}
\displaystyle
e^{\mathrm{i}\phi_\pm} = \frac{x_0\pm\mathrm{i}x_1}{1\pm x_2} &
\displaystyle &
\displaystyle
\mu_\pm = \pm x_2
\end{array}
\end{equation}
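It is straightforward to check that the parametrization~(\ref{new_points}) lies on the hyperboloid: since $x_0^2+x_1^2 = 1+\mu_\pm^2$, every such point satisfies the embedding equation~(\ref{AdS2_M21}). A short numerical sketch:

```python
import math

def lightcone_point(phi, mu, sign=+1):
    """The point of eq. (new_points) on the rotated line l_+ (sign=+1)
    or l_- (sign=-1)."""
    x0 = math.cos(phi) - mu * math.sin(phi)
    x1 = math.sin(phi) + mu * math.cos(phi)
    x2 = sign * mu
    return x0, x1, x2

# every such point lies on AdS2: x0^2 + x1^2 - x2^2 = 1
for phi in (0.0, 1.1, 2.7):
    for mu in (-3.0, 0.0, 5.5):
        for sign in (+1, -1):
            x0, x1, x2 = lightcone_point(phi, mu, sign)
            assert abs(x0**2 + x1**2 - x2**2 - 1.0) < 1e-12
```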
The corresponding coset parametrization (group coset motion which brings the origin to the point $x$) is:
\begin{equation}
\label{cosets}
h(\mu_\pm,\phi_\pm) = {\sf R}(\phi_\pm){\sf T}_\pm(\mu_\pm)
\end{equation}
where
\begin{equation}
\label{coset_rot}
{\sf R}(\phi) = \left(\begin{array}{cc} \cos\phi/2&
-\sin\phi/2\\\sin\phi/2& \cos\phi/2\end{array}\right)
\end{equation}
and
\begin{equation}
\label{coset_trans}
{\sf T}_+(\mu) = \left[{\sf T}_-(-\mu)\right]^\mathrm{T} =
\left(\begin{array}{cc} 1 & -\mu\\ 0 & 1\end{array}\right)
\end{equation}
We notice that ${\sf T}_\pm(\mu_\pm)$, acting on the base point
$X(\bm{p})$, generate the light cone~$l_\pm(\bm{p})$. Hence we identify these one parameter subgroups with the light cones at $p$.
\begin{figure}[thp]
\begin{center}
\includegraphics[scale=0.8]{LCAdS2.jpeg}
\end{center}
\caption[]{The light cone of AdS$_2$ at $\bm{p}=(1,0,0)$.}
\label{LCAdS2}
\end{figure}
At this point we should like to pause and discuss the physical interpretation of the rotation~(\ref{coset_rot}) and translation~(\ref{coset_trans}) groups.
Consider two observers in the AdS$_2$ background. One of them, $O_a$, is located at infinity and the other one, $O_b$, is in free fall. Their corresponding classical descriptions, for a freely falling particle near the horizon, are given, for $O_a$, by the one-parameter group of translations, with time parameter $\mu$, and, for $O_b$, by the one-parameter group of rotations, with time parameter $\phi$. In the seminal paper~\cite{dAFF} the corresponding quantum mechanical evolution operators have been constructed in order to describe models for confinement and asymptotic freedom, by two different conformal Hamiltonians.
The ``first'' Hamiltonian, which in our case describes the time evolution for observer $O_a$, has continuous spectrum. The ``second'' one which describes the time evolution for observer $O_b$, has discrete spectrum.
After this intermezzo, we proceed with the description of AdS$_2$ as a phase space.
We observe that the variables, $\phi$ and $\mu$, are in fact Darboux coordinates
with respect to the natural $SO(2,1)$ invariant Poisson structure, which promotes AdS$_2$ to a phase space. They are conjugate variables
and they parametrize the time evolution, at the quantum mechanical level, of the two observers, $O_a$ and $O_b$, thereby realizing the complementarity of the physics they describe~\cite{susskindbhComp}.
It is also possible to use the light cone coordinates, $\mu_\pm$, in
order to parametrize AdS$_2$, thereby eliminating the angles $\phi_\pm$. The corresponding cosets are:
\begin{equation}
\label{translcoset}
h(\mu_+,\mu_-) = {\sf T}_-(\mu_-){\sf T}_+(\mu_+)
\end{equation}
which define a global light cone coordinate system. The map between
$(\mu_+,\mu_-)$ and $(x_0,x_1,x_2)$ is easily obtained:
\begin{equation}
\label{trans_map}
\begin{array}{ccc}
\displaystyle
\mu_+ = \frac{x_1 + x_2}{2} & \displaystyle\mathrm{and} &
\displaystyle
\mu_- = \frac{x_1 - x_2}{1+ x_0}
\end{array}
\end{equation}
The light cone cosets establish the causal patches of any observer on AdS$_2$ and thus
the causal diamonds of any pair of observers~\cite{Gibbons_Patricot}.
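The inversion formulas~(\ref{trans_map}) can be verified by applying the coset $h(\mu_+,\mu_-)={\sf T}_-(\mu_-){\sf T}_+(\mu_+)$ to the base point via the Weyl action and reading off the coordinates. A numerical sketch (we stay away from $\mu_+\mu_-=1$, where $1+x_0$ vanishes):

```python
import numpy as np

def T_plus(mu):
    return np.array([[1.0, -mu], [0.0, 1.0]])   # eq. (coset_trans)

def T_minus(nu):
    return np.array([[1.0, 0.0], [nu, 1.0]])    # [T_+(-nu)]^T

def point_from_lightcone(mu, nu):
    """Apply h = T_-(nu) T_+(mu) to the base point p = (1,0,0) via
    M -> h M h^{-1} and read off (x0, x1, x2) from eq. (Weyl)."""
    h = T_minus(nu) @ T_plus(mu)
    M = h @ np.array([[1.0, 0.0], [0.0, -1.0]]) @ np.linalg.inv(h)
    x0 = M[0, 0]
    x1 = (M[0, 1] + M[1, 0]) / 2.0
    x2 = (M[0, 1] - M[1, 0]) / 2.0
    return x0, x1, x2

# eq. (trans_map) recovers the light cone coordinates
for mu, nu in [(0.3, -1.2), (2.0, 0.25)]:
    x0, x1, x2 = point_from_lightcone(mu, nu)
    assert abs((x1 + x2) / 2.0 - mu) < 1e-12
    assert abs((x1 - x2) / (1.0 + x0) - nu) < 1e-12
```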
For completeness, we exhibit also the standard system of hyperbolic global coordinates,
\begin{equation}
\label{hyperbolic_global}
\begin{array}{ccccc}
\displaystyle x_0 = \cosh\psi\cos\chi, & \displaystyle &
\displaystyle x_1 = \cosh\psi\sin\chi, & \displaystyle &
\displaystyle x_2 = \sinh\psi
\end{array}
\end{equation}
and the corresponding coset parametrization,
\begin{equation}
\label{coset_hyp}
h(\psi,\chi)={\sf R}(\chi){\sf H}(\psi)
\end{equation}
with
\begin{equation}
\label{hypcoset}
{\sf H}(\psi) = \left(\begin{array}{cc} \cosh\psi/2& \sinh\psi/2\\
\sinh\psi/2 & \cosh\psi/2\end{array}\right)
\end{equation}
an element of the Lorentz group that acts in the $x_1=0$ plane.
These coset parametrizations induce also specific metrics on AdS$_2$. For the
parametrization~(\ref{cosets}) we obtain
\begin{equation}
\label{AdS2metric_phi_mu}
ds^2 = (1+\mu^2)d\phi^2 + 2d\phi d\mu
\end{equation}
Substituting in this expression $\mu\equiv\tan\sigma$,
$\sigma\in\left(-\pi/2,\pi/2\right)$, we obtain the Einstein strip
\begin{equation}
\label{Einstein_cylinder}
ds^2 = \frac{1}{\cos^2\sigma}\left[-d\tau^2 + d\sigma^2\right]
\end{equation}
with $\tau\equiv\sigma + \phi\in\mathbb{R}$ and the two disconnected boundaries at $\sigma=\pm\pi/2$.
For the standard hyperbolic global coordinate system, $(\psi,\chi )$, we obtain
the metric
\begin{equation}
\label{hyperbolic_metric}
ds^2 = -\cosh^2\psi\ d\chi^2 + d\psi^2
\end{equation}
with $\psi\in(-\infty,\infty)$ and $\chi\in[0,2\pi)$.
Finally, for the light cone coordinates $\mu\equiv\mu_+, \nu\equiv\mu_-$ of
eq.~(\ref{trans_map}) we find the metric
\begin{equation}
\label{trans_metric}
ds^2 = -4\left(\nu^2 d\mu^2 + \mu^2 d\nu^2 -d\mu d\nu\right)
\end{equation}
In the rest of this section we discuss the dynamics of probes, appropriate for the description of the string ground state geometry.
The AdS$_2$ coset geometry inherits a symplectic structure and
a non-degenerate Poisson bracket from the isometry group, given by
\begin{equation}
\label{poisson_bracket}
\begin{array}{ccccc}
\displaystyle \left\{x_0,x_1\right\} = -x_2 & \displaystyle &
\displaystyle \left\{x_1,x_2\right\} = x_0 & \displaystyle &
\left\{x_2,x_0\right\} = x_1
\end{array}
\end{equation}
These relations are realized, for example, in the global coordinate system
$(\phi,\mu)$, where the area element is $d\phi d\mu$ and the coordinates
$\phi$ and $\mu$ are Darboux coordinates, as
\begin{equation}
\left\{f,g\right\} =
\frac{\partial f}{\partial\phi}\frac{\partial g}{\partial\mu} -
\frac{\partial f}{\partial\mu}\frac{\partial g}{\partial\phi}
\end{equation}
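These relations can be checked directly. A minimal sympy sketch, assuming the explicit Darboux parametrization $x_0=\cos\phi-\mu\sin\phi$, $x_1=\sin\phi+\mu\cos\phi$, $x_2=\mu$ (an assumption here; it is the continuum counterpart of the mod-$p$ parametrization~(\ref{LCAdS2N}) used below):

```python
import sympy as sp

phi, mu = sp.symbols('phi mu', real=True)

# Assumed explicit form of the coset parametrization (continuum
# counterpart of the mod-p parametrization used later in the text):
x0 = sp.cos(phi) - mu * sp.sin(phi)
x1 = sp.sin(phi) + mu * sp.cos(phi)
x2 = mu

def pb(f, g):
    # Darboux bracket in the coordinates (phi, mu)
    return sp.simplify(sp.diff(f, phi) * sp.diff(g, mu)
                       - sp.diff(f, mu) * sp.diff(g, phi))

assert sp.simplify(x0**2 + x1**2 - x2**2 - 1) == 0   # embedding constraint
assert sp.simplify(pb(x0, x1) + x2) == 0             # {x0, x1} = -x2
assert sp.simplify(pb(x1, x2) - x0) == 0             # {x1, x2} =  x0
assert sp.simplify(pb(x2, x0) - x1) == 0             # {x2, x0} =  x1
```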
The corresponding Hamilton's equations for incompressible flows on $AdS_2$ are,
\begin{equation}
\label{poissonflow}
\dot{x}_\mu = \left\{x_\mu, H\right\}
\end{equation}
The simplest classical motions of probes of lowest energy are described by the isometric maps ${\sf A}\in
SL(2,\mathbb{R})$.
At the level of discrete time evolution (maps), this isometric motion is parametrized as follows:
If, for instance, $h(\phi,\mu)\in SL(2,\mathbb{R})$ is a coset, describing the probe's position in AdS$_2$,
at proper time $\tau$, then at time $\tau + 1$ it will evolve as
\begin{equation}
\label{one_step_op}
h(\phi_{\tau+1},\mu_{\tau+1})= {\sf A}h(\phi_\tau,\mu_\tau)(\mathrm{mod}\,{\sf D})
\end{equation}
Using the decompositions~(\ref{Acoset},\ref{cosets}),
the parameters $\phi_{\sf A},\mu_{\sf A},\lambda_{\sf A}$ can be given explicitly in terms of the matrix elements of
$$
{\sf A} = \left(\begin{array}{cc} a & b\\ c & d\end{array}\right)
$$
by the expressions
\begin{equation}
\label{coset_params}
\begin{array}{ccccc}
\displaystyle
\cos\frac{\phi_{\sf A}}{2} = \frac{d}{\sqrt{b^2 + d^2}} &
\displaystyle &
\displaystyle
\sin\frac{\phi_{\sf A}}{2} = \frac{b}{\sqrt{b^2 + d^2}} &
\displaystyle &
\displaystyle
\mu_{\sf A} = -\frac{ac + bd}{b^2 + d^2}
\end{array}
\end{equation}
\begin{equation}
\label{lamda}
\lambda_{\sf A}=\frac{1}{\sqrt{c^2 + d^2}}
\end{equation}
Applying the above decomposition to the RHS of eq.~(\ref{one_step_op}), we find the LHS.
The Hamiltonians corresponding to the above discrete maps ${\sf A}\in SL(2,\mathbb{R})$ must be linear in the generators
$x_\mu$ of $SL(2,\mathbb{R})$~(\ref{poisson_bracket}). Moreover, as they
are generators of infinitesimal Lorentz transformations $SO(1,2)$, they respect causality.
In our approach, the AdS$_2$ radial and time directions are treated as ``phase space'' variables, whereas time evolution is
determined by the group action. We may note that the stringy uncertainty relations hold between the energy and the corresponding physical length or, equivalently in our case, between time and the radial extent. As such, the interpretation of AdS$_2$ as a phase space is suitable for strings moving in this background.
It is essential for the AdS/CFT holographic correspondence to define a conformal compactification of the boundary. The most frequently used compactification is the conformal rescaling of the metric, which gives rise to the Poincar\'e patch,
covering half of the AdS$_2$ spacetime.
Another conformal compactification, in Minkowski signature of AdS$_2$, is obtained by stereographic projection to the $x_0=0$ plane. Any point, $(\xi_0,\xi_1,\xi_2)\in\mathrm{AdS}_2$,
is projected through the base point $\bm{p}$ to a point on the
plane $x_0=0$ with coordinates $(x_1,x_2)$:
\begin{equation}
\label{stereogproj}
\begin{array}{l}
\displaystyle
x_1 = \frac{\xi_1}{1-\xi_0}\\
\displaystyle
x_2 = \frac{\xi_2}{1-\xi_0}
\end{array}
\end{equation}
Introducing the light cone coordinates of the projection plane, $x_\pm\equiv x_1\pm x_2$, we can
parametrize AdS$_2$ as follows
\begin{equation}
\label{AdS2parametrize}
\begin{array}{l}
\displaystyle
\xi_1 = \frac{x_+ + x_-}{1+x_+ x_-}\\
\displaystyle
\xi_2 = \frac{x_+ - x_-}{1+x_+ x_-}\\
\displaystyle
\xi_0= \frac{x_+ x_- -1}{1+x_+ x_-}\\
\end{array}
\end{equation}
We observe that the stereographic projection from the point $\bm{p}=(1,0,0)$, maps each of the light cones,
$\bm{l}_\pm({\bf p})=\{(1,\pm\mu,\mu)|\mu\in\mathbb{R}\}$, to two points on the boundaries.
In order to parametrize uniquely the points on $\bm{l}_\pm(\bm{p})$, we must use the stereographic projection from the ``antipode'', $\bm{q}=(-1,0,0)$. If we call the new coordinates on the $x_0=0$ plane, $y_\pm$, we have
\begin{equation}
\label{AdS2parametrizeY}
\begin{array}{l}
\displaystyle
\xi_1 = \frac{y_+ + y_-}{1+y_+ y_-}\\
\displaystyle
\xi_2 = \frac{y_+ - y_-}{1+y_+ y_-}\\
\displaystyle
\xi_0= \frac{1-y_+y_-}{1+y_+ y_-}\\
\end{array}
\end{equation}
This coordinate system has the same problem for the points on the light cones $\bm{l}_\pm(\bm{q})$. We easily check that the stereographic projection from $\bm{p}$ (respectively $\bm{q}$) maps the light cone axes, $x_+=0$ or $x_-=0$ of the projective plane, $x_0=0$ to $\bm{l}_\pm(\bm{q})$ (respectively $\bm{l}_\pm(\bm{p})$). More generally, the curves, on AdS$_2$, defined by
$x_+=\mathrm{const}$ or $x_-=\mathrm{const}$ (correspondingly for $y_\pm$) are the light-cone straight lines, which generate AdS$_2$.
The transition functions between the two coordinate systems are
\begin{equation}
\label{x2y}
x_- y_+ = 1 = x_+y_-
\end{equation}
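The transition functions, the embedding constraint and the induced metric below can all be verified symbolically. The following sympy sketch assumes the ambient signature $(-,-,+)$, the one consistent with the global hyperbolic metric~(\ref{hyperbolic_metric}):

```python
import sympy as sp

xp, xm = sp.symbols('x_plus x_minus', real=True)
den = 1 + xp * xm
xi1 = (xp + xm) / den
xi2 = (xp - xm) / den
xi0 = (xp * xm - 1) / den

# embedding constraint xi0^2 + xi1^2 - xi2^2 = 1
assert sp.simplify(xi0**2 + xi1**2 - xi2**2 - 1) == 0

# transition functions: y_+ = 1/x_-, y_- = 1/x_+ reproduce the same point
yp, ym = 1 / xm, 1 / xp
deny = 1 + yp * ym
assert sp.simplify(xi1 - (yp + ym) / deny) == 0
assert sp.simplify(xi2 - (yp - ym) / deny) == 0
assert sp.simplify(xi0 - (1 - yp * ym) / deny) == 0

# induced metric with ambient signature (-,-,+): only g_{+-} survives,
# the x_+ = const and x_- = const curves are light-like
g = lambda u, v: sp.simplify(-sp.diff(xi0, u) * sp.diff(xi0, v)
                             - sp.diff(xi1, u) * sp.diff(xi1, v)
                             + sp.diff(xi2, u) * sp.diff(xi2, v))
assert g(xp, xp) == 0 and g(xm, xm) == 0
assert sp.simplify(g(xp, xm) * den**2 + 2) == 0
```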
In terms of $x_\pm$, the induced metric takes the form:
\begin{equation}
\label{lightlikemetric}
ds^2 = -4\frac{dx_+ dx_-}{(1+x_+ x_-)^2}
\end{equation}
A similar expression holds in terms of $y_\pm$.
We observe now that the induced metric is invariant under the M\"obius transformations
\begin{equation}
\label{Moebius_boundary}
\begin{array}{l}
\displaystyle
x_+\to {\sf A} x_+\equiv \frac{ax_+ + b}{cx_+ +d}\\
\displaystyle
x_-\to \left[{\sf A}^{-1}\right]^\mathrm{T} x_-\equiv\frac{dx_- -c}{-bx_- +a}
\end{array}
\end{equation}
These transformations result from the Weyl action~(\ref{Weyl_mapping}) through the use of the stereographic light cone parametrizations of AdS$_2$~(\ref{AdS2parametrize}) and~(\ref{AdS2parametrizeY}). In contrast to the other coordinate systems the variables $(x_+,x_-)$ do not mix under the isometry group.
By definition the following identity holds, for any ${\sf A}\in SL(2,\mathbb{R})$:
\begin{equation}
\label{SL2Rident}
\left[{\sf A}^{-1}\right]^\mathrm{T} = \varepsilon{\sf A}\varepsilon^\mathrm{T}
\end{equation}
where
\begin{equation}
\label{epsilon}
\varepsilon\equiv\left(\begin{array}{cc} 0 & 1 \\ -1 & 0\end{array}\right)
\end{equation}
Therefore, eq.~(\ref{Moebius_boundary}) implies that $(x_+,x_-)$ are conjugate variables and the stereographic projection plane is promoted to a phase space. Indeed, the AdS$_2$/CFT$_1$ correspondence is based on the fact that $SL(2,\mathbb{R})$ plays three different roles: (a) as the isometry group of AdS$_2$, (b) as the symplectic group of AdS$_2$ being taken as a phase space and (c) as the conformal, M\"obius group of the boundary CFT$_1$.
The variables, $x_\pm$, are thus appropriate holographic variables,
because the isometry transformation group of AdS$_2$ is reduced on them to two conjugated copies of the
1d M\"obius conformal group.
We come now to the parametrization of the boundary, which is disconnected and consists of two circles at $x_2\to\pm\infty$. In the covering space the boundary is
$\mathbb{R}\times\{1,-1\}$.
Because of their transformation properties, the variables $x_\pm$ are the most suitable to use in order to define the
two disconnected components of the boundary, in terms of the branches of the
hyperbola (cf. eq.~(\ref{AdS2parametrize}))
\begin{equation}
\label{hyperbola_AdS2bound}
1+x_+x_- = 0
\end{equation}
This relation allows us to write $x_+$ ($x_-$) as a M\"obius transformation of the other:
\begin{equation}
\label{FourierDuality}
x_+ = -\frac{1}{x_-}\equiv \varepsilon\cdot x_-
\end{equation}
This relation is invariant under the M\"obius transformations~(\ref{Moebius_boundary}).
Therefore, the two components of the boundary are two copies of the projective line $\mathbb{RP}^1$.
We notice here that the stereographic projection maps each of the two boundary components to one of the two branches of the hyperbola.
The boundary can also be described as the coset space, $SL(2,\mathbb{R})/\mathfrak{B}$, where $\mathfrak{B}$ is the Borel subgroup of dilatations and translations,
\begin{equation}
\label{borelsgRP1}
\mathfrak{B}=\left\{
\left. {\sf B}(b,\lambda)=
\left(\begin{array}{cc}
\lambda & b\\ 0 & \lambda^{-1}
\end{array}\right)
\right|
\lambda\in\mathbb{R}^\ast,b\in\mathbb{R}
\right\}
\end{equation}
which preserves the point at infinity, $(x_+=\infty,x_-=0)$.
For any ${\sf A}\in SL(2,\mathbb{R})$ we have the decomposition
\begin{equation}
\label{SL2Rdecomp}
{\sf A} = {\sf R}(\phi){\sf B}(b,\lambda)
\end{equation}
and the elements ${\sf R}(\phi)\in SO(2,\mathbb{R})$ parametrize the boundary.
It will be useful later to parametrize the bulk coset representatives, $h(\phi,\mu)$ and the boundary representatives, ${\sf R}(\phi)$ by the light cone coordinates $x_\pm$. The map is the following:
\begin{equation}
\label{bulkboundLC}
\begin{array}{l}
\displaystyle
x_+ = \frac{1}{\tan\frac{\phi}{2}}\\
\displaystyle \\
\displaystyle
x_- = \frac{1-\mu\tan\frac{\phi}{2}}{\mu+\tan\frac{\phi}{2}}
\end{array}
\end{equation}
The boundary is reached when $\mu\to\pm\infty$. Indeed, a measure of the distance from the boundary is
\begin{equation}
\label{boundary_dist}
z\equiv 1+x_+x_-=\frac{2}{\sin\phi(\mu+\tan\frac{\phi}{2})}
\end{equation}
The coset representatives of the bulk and the boundary become functions $h(x_+,x_-)$ and
${\sf R}(x_+)$ respectively. So $x_+$ parametrizes motions parallel to the boundary and $x_-$ motions towards the boundary.
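As a quick sanity check of eq.~(\ref{boundary_dist}), one can evaluate both sides numerically with sympy at a few generic points:

```python
import sympy as sp

phi, mu = sp.symbols('phi mu', real=True)
t = sp.tan(phi / 2)
x_plus = 1 / t
x_minus = (1 - mu * t) / (mu + t)

# 1 + x_+ x_-  should equal  2 / [sin(phi) (mu + tan(phi/2))]
expr = 1 + x_plus * x_minus - 2 / (sp.sin(phi) * (mu + t))

# spot checks at generic points (a purely numeric test; symbolic
# simplification of the mixed tan/sin expression is avoided on purpose)
for p_val, m_val in [(0.7, 0.3), (1.2, -2.5), (2.9, 4.0)]:
    assert abs(expr.subs({phi: p_val, mu: m_val}).evalf()) < 1e-12
```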
In order to relate the classical action of $SL(2,\mathbb{R})$ in the bulk~(\ref{one_step_op}) with the corresponding action on the boundary defined as
\begin{equation}
\label{one_step_bd}
{\sf R}(\phi_{\tau+1}) = {\sf A}{\sf R}(\phi_\tau)
\end{equation}
we must compute the RHS through the decomposition
\begin{equation}
\label{RHScoset}
{\sf A}{\sf R}(\phi_\tau) = {\sf R}(\phi_{\tau+1}){\sf B}(b_{\tau+1},\lambda_{\tau+1})
\end{equation}
and mod out the Borel factor ${\sf B}(b_{\tau+1},\lambda_{\tau+1})$.
Closing this section, we explain why we have chosen to parametrize the bulk--boundary geometry and dynamics by the corresponding group cosets.
The basic reason is their role in a new proposal for the holographic correspondence, which we think presents an extension of the standard AdS$_2$/CFT$_1$ one.
By construction the ${\sf R}(\phi)$ coset representatives cannot detect the distance from the boundary, i.e. $x_-$. Only at the quantum level, where an uncertainty relation between $x_+$ and $x_-$ exists, reflecting the UV/IR connection, is it possible, from the distribution of $x_+$ on the boundary, to get information about the distribution of $x_-$ in the bulk.
The quantum mechanical states which maximize the flow of quantum mechanical information between the bulk and the boundary, given the coset structure
of their geometries, are the corresponding coherent states (wavelets).
They form overcomplete sets of states with classical transformation properties but powerful enough to describe quantum dynamics and geometry at the same time~\cite{coherent_states}.
We shall present the construction of these states and their properties in the next section, after we have introduced the modular discretization of the geometry and dynamics on AdS$_2$.
\section{Modular discretization and quantum dynamics on AdS$_2[N]$}\label{HorModN}
Recent discussions of the quantum entropy of extremal black holes and the AdS$_2$/CFT$_1$ correspondence suggest the identification of the black hole entropy with the logarithm of the string ground state degeneracy~\cite{Sen}.
This is an integer, $N$, fixed by the set of the black hole's electric and magnetic charges.
Since in the Hilbert space of the degenerate ground state, we have at most $N$ linearly independent wave functions, the geometry resolved by the probe is fuzzy, with resolution $1/N$.
In order to model the geometry and the dynamics of black hole information processing, we should take into account the following constraints, which have been discussed in the literature on the black hole information paradox:
\begin{itemize}
\item In the vicinity of the black hole horizon, the dynamics is chaotic and strongly mixing. Any additional bit of information that falls into the black hole reaches, in a very short time, (dynamic) equilibrium with the other microscopic degrees of freedom comprising the black hole horizon.
Furthermore, the mixing should be holographic: any subset of horizon qubits has a coarse--grained representation of the total infalling information.
This leads to the following constraints on the geometry and the dynamics:
\item Randomness, non-locality and factorization of the space-time geometry. It implies that the
total Hilbert space factorizes into a tensor product of local, coarse--grained, Hilbert spaces~\cite{Giddings,Bousso,Banks}.
\item The dynamics should provide the fastest possible quantum information processing, saturating the scrambling time bound~\cite{Page,Preskill_Hayden,Susskind_scramble,Avery,Harlow:2013tf}.
\end{itemize}
We propose to model this random, non--local and factorizable geometry by a number--theoretic discretization, that preserves the corresponding group-theoretical structure of AdS$_2$ spacetime.
This is done by replacing AdS$_2$ by the discrete cosets, AdS$_2[N] =SL(2,\mathbb{Z}_N)/SO(1,1,\mathbb{Z}_N)$. We thereby replace the set of real numbers, $\mathbb{R}$, by the set of integers modulo $N$. We call this ``modular discretization''.
This is a finite, random, set of points in the embedding Minkowski spacetime $\mathscr{M}^{2,1}$.
In the mathematical literature, such a set of points is called a {\em finite geometry}~\cite{terras,finitegeometry}.
Introducing appropriate length scales and taking the large $N$ limit we can check that the smooth geometry of AdS$_2$ emerges.
To accommodate the above requirements on the dynamics, we employ discrete time maps, the Arnol'd cat maps ${\sf A}\in SL(2,\mathbb{Z}_N)$. These are known to exhibit strong mixing and ergodicity~\cite{Arnold,Vivaldi,Berry,Ford}, as well as non-locality and factorization in the cutoff discretization parameter, $N$~\cite{fastqmaps,afnholo}.
We restrict our construction to the case $N=p$ prime for the technical simplicity of the presentation of our arguments. In this case, the set of integers modulo $p$ is the simplest Galois field, $\mathbb{F}_p$. The unitary, irreducible, representations of the isometry group of AdS$_2[p]$, $SL(2,\mathbb{F}_p)$, are known~\cite{Silberger}.
The restriction to $N$ prime can be removed by noticing some interesting
factorizations: If $N=N_1 N_2$, with $N_{1,2}$ coprime, then we
have~\cite{fastqmaps}
\begin{equation}
\label{factorizationSL2ZN}
SL(2,\mathbb{Z}_{N_1 N_2}) = SL(2,\mathbb{Z}_{N_1})\otimes
SL(2,\mathbb{Z}_{N_2})
\end{equation}
and
\begin{equation}
\label{factorizationAdS2N}
\mathrm{AdS}_2[N_1 N_2] = \mathrm{AdS}_2[N_1] \otimes \mathrm{AdS}_2[N_2]
\end{equation}
These factorizations imply that all powers of primes, $2^{n_1},
3^{n_2},5^{n_3},\ldots$, are the building blocks of our construction. The physical interpretation of this factorization is that the most coarse--grained Hilbert spaces on the horizon have dimensions powers of primes.
We observe that by taking tensor products over all powers of a fixed prime, $p$, we can model dynamics over the $p-$adic spacetime, AdS$_2[\mathbb{Q}_p]$.
In order to study the finite geometry of AdS$_2[p]$, we recall the following facts about its ``isometry group'' $SL(2,\mathbb{F}_p)$:
The order of $SL(2,\mathbb{F}_p)$ is $p(p^2-1)$. For the subgroups of rotations, ${\sf R}$, translations,
${\sf T}_\pm$, and dilatations, ${\sf D}$, the orders are $p+1$ (for $p=4k-1$; $p-1$ for $p=4k+1$, cf. section~\ref{CohStates}), $p$ and $p-1$ respectively. So the finite geometry of AdS$_2[p]$, the coset of $SL(2,\mathbb{F}_p)$ by the dilatations, has $p(p^2-1)/(p-1)=p(p+1)$ points.
The set of points of the finite geometry of AdS$_2[p]$ is, by definition, the set of all solutions of the equation
\begin{equation}
\label{AdS2_p}
x_0^2 + x_1^2 -x_2^2\equiv\,1\,\mathrm{mod}\,p
\end{equation}
This can be parametrized as follows:
\begin{equation}
\label{LCAdS2N}
\begin{array}{l}
\displaystyle
x_0\equiv (a-b\,\mu)\,\mathrm{mod}\,p\\
\displaystyle
x_1\equiv (b+a\,\mu)\,\mathrm{mod}\,p\\
\displaystyle
x_2\equiv\,\mu\,\mathrm{mod}\,p
\end{array}
\end{equation}
where $a^2 + b^2\equiv\,1\,\mathrm{mod}\,p$ and
$a, b, \mu\in\mathbb{F}_p$.
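The counts above are easy to confirm by brute force; the following Python sketch enumerates the solutions of eq.~(\ref{AdS2_p}), as well as the rotations $a^2+b^2\equiv 1\,\mathrm{mod}\,p$, for small primes:

```python
def ads2_points(p):
    """All solutions of x0^2 + x1^2 - x2^2 = 1 (mod p)."""
    return [(x0, x1, x2)
            for x0 in range(p) for x1 in range(p) for x2 in range(p)
            if (x0 * x0 + x1 * x1 - x2 * x2) % p == 1]

def rotations(p):
    """Solutions of a^2 + b^2 = 1 (mod p), i.e. the rotation subgroup."""
    return [(a, b) for a in range(p) for b in range(p)
            if (a * a + b * b) % p == 1]

# the finite geometry always has p(p+1) points
for p in [3, 5, 7, 11, 13]:
    assert len(ads2_points(p)) == p * (p + 1)

# the rotation count depends on p mod 4
assert len(rotations(7)) == 7 + 1      # p = 4k - 1
assert len(rotations(13)) == 13 - 1    # p = 4k + 1
```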
The points of AdS$_2[p]$ comprise the bulk--we must add the points on the boundary.
The boundary is the ``mod $p$'' projective line, $\mathbb{RP}_p^1$, defined as the set
\begin{equation}
\label{RP1p}
\mathbb{RP}_p^1 = \mathbb{F}_p^\ast\cup\{0,\infty\}
\end{equation}
so the number of boundary points (cosets) is $p+1$.
We shall now define the quantum mechanics of the probes of the bulk AdS$_2[p]$ and its boundary,
as well as the corresponding coherent states~\cite{coherent_states}.
We start with the construction of finite quantum mechanics (FQM) in the bulk. It is obvious that the set of the states and the set of observables should carry a representation of the coset structure of the bulk. We choose the space of states to be the Hilbert space, of dimension $p$, of the metaplectic representation of $SL(2,\mathbb{F}_p)$~\cite{athanasiu_floratos}. This choice is motivated by the fact that the spatial part of AdS$_2[p]$ is the finite field $\mathbb{F}_p$, the set of values of the space--like variable $x_-$. The wavefunctions will be the normalized elements of the complex, projective, space $\mathbb{CP}^{p-1}$.
In the papers~\cite{floratos89,athanasiu_floratos,afnholo} the explicit construction of the metaplectic representation of $SL(2,\mathbb{F}_p)$ has been presented, as well as various sets of coherent states.
The building blocks of the observables of FQM are two $p\times p$,
unitary, matrices, ``clock'', $Q$ and ``shift'', $P$,
representing the ``exponentials'' of the position and
momentum operators (for periodic boundary conditions)~\cite{Schwinger}:
\begin{equation}
\label{PandQ}
\begin{array}{ccc}
\displaystyle Q_{k,l} = \omega^k\delta_{k,l}, &
\displaystyle &
\displaystyle P_{k,l} = \delta_{k-1,l},
\end{array}
\end{equation}
$k,l\in\mathbb{F}_p$ and $\omega=\exp(2\pi\mathrm{i}/p)$ is the $p$th root of
unity.
These matrices satisfy the exponentiated Heisenberg--Weyl commutation
relation
\begin{equation}
\label{HWcomm}
Q P = \omega P Q
\end{equation}
A useful basis is provided by the magnetic translations
\begin{equation}
\label{HWgroup}
J_{r,s} = \omega^{r\cdot s/2} P^r Q^s,
\end{equation}
elements of the (finite) Heisenberg--Weyl group,
where the factor $1/2$ in the exponent denotes the inverse of $2$, computed mod\,$p$.
The $J_{r,s}$ realize a projective representation of the translation group on the discrete torus, $\mathbb{T}_p=\mathbb{F}_p\times\mathbb{F}_p$:
\begin{equation}
\label{HWrep}
J_{r,s} J_{r',s'} = \omega^{(r's-rs')/2}J_{r+r',s+s'}
\end{equation}
they are unitary
\begin{equation}
\label{unitaryJrs}
\left[J_{r,s}\right]^\dagger = J_{-r,-s}
\end{equation}
and periodic
\begin{equation}
\label{periodicityJrs}
\left[J_{r,s}\right]^p = I_{p\times p}
\end{equation}
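The relations~(\ref{HWcomm})--(\ref{periodicityJrs}) can be verified numerically for small $p$; in the sketch below the factor $1/2$ is realized as the inverse of $2$ mod $p$, and the composition law is tested with the symplectic combination $r's-rs'$ in the exponent:

```python
import numpy as np
from numpy.linalg import matrix_power

p = 5
omega = np.exp(2j * np.pi / p)
inv2 = (p + 1) // 2                          # inverse of 2 mod p

Q = np.diag(omega ** np.arange(p))           # clock: Q_{k,l} = omega^k delta_{k,l}
P = np.roll(np.eye(p), 1, axis=0)            # shift: P_{k,l} = delta_{k-1,l}

assert np.allclose(Q @ P, omega * (P @ Q))   # QP = omega PQ

def J(r, s):
    """Magnetic translation J_{r,s} = omega^{rs/2} P^r Q^s (exponents mod p)."""
    return (omega ** ((r * s * inv2) % p)
            * matrix_power(P, r % p) @ matrix_power(Q, s % p))

# composition law, unitarity and periodicity
r, s, r2, s2 = 1, 2, 3, 1
phase = omega ** (((r2 * s - r * s2) * inv2) % p)
assert np.allclose(J(r, s) @ J(r2, s2), phase * J((r + r2) % p, (s + s2) % p))
assert np.allclose(J(r, s).conj().T, J(-r, -s))
assert np.allclose(matrix_power(J(r, s), p), np.eye(p))
```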
The phase factor in eq.~(\ref{HWrep}) is a cocycle and represents the non-commutativity of the quantized torus, $\mathbb{T}_\theta$ ($\theta=2\pi/p$)~\cite{Manin_Marcolli,Connes}.
The exact quantization of Arnol'd cat maps, ${\sf A}\in SL(2,\mathbb{F}_p)$,
is given by unitary matrices, $U({\sf A})$, satisfying
\begin{equation}
\label{metaplectic}
U({\sf A})J_{r,s}U({\sf A})^\dagger = J_{(r,s){\sf A}^{-1}}
\end{equation}
This is the definition of the metaplectic representation of
$SL(2,\mathbb{F}_p)$, which, in general, is projective.
We can find a proper representation of
$SL(2,\mathbb{F}_p)$, which then satisfies
the relation
\begin{equation}
\label{exactirrep}
U({\sf A})U({\sf B}) = U({\sf AB})
\end{equation}
for all ${\sf A},{\sf B}\in SL(2,\mathbb{F}_p)$. This can be done because of the following theorem: Every projective representation of $SL(2,\mathbb{F}_p)$ can be lifted to a proper representation~\cite{Weil}.
The proper representation that corresponds to the metaplectic one is given
by the following expression~\cite{balian_itzykson,athanasiu_floratos}
\begin{equation}
\label{exactirrepA}
\left[U({\sf A})\right]_{k,l} =
\frac{1}{\sqrt{p}}(-2c|p)\left\{\begin{array}{c}
1\\ -\mathrm{i}\end{array}\right\}\omega^{-\frac{ak^2 -2kl+dl^2}{2c}}
\end{equation}
for $c\not\equiv\,0\,\mathrm{mod}\,p$, where $(-2c|p)=\pm 1$ is the Legendre symbol,
equal to $+1$ or $-1$ according to whether $-2c$ is a quadratic residue mod $p$ or not;
the upper term between the brackets pertains if $p=4k+1$, the lower if $p=4k-1$.
In the case $c\equiv\,0\,\mathrm{mod}\,p$ and $a\in\mathbb{F}_p^\ast$, then
\begin{equation}
\label{c=0_A}
{\sf A} = \left(\begin{array}{cc} a & b \\ 0 & a^{-1}\end{array}\right)
\end{equation}
and
\begin{equation}
\label{c=0_U}
U({\sf A})_{k,l} =
\frac{\left(-a|p\right)}{\sqrt{p}}\left\{\begin{array}{c} 1 \\ -1\end{array}\right\}
\omega_p^{-\frac{a b}{2}k^2}\delta_{k,a^{-1}l}
\end{equation}
An important application of eq.~(\ref{exactirrepA}) is for the Quantum Fourier Transform (QFT). For
\begin{equation}
\label{FourierS}
{\sf F}=\left(\begin{array}{cc} 0 & -1 \\ 1 & 0\end{array}\right)
\end{equation}
the corresponding unitary operator is given by
\begin{equation}
\label{FourierU}
U({\sf F})_{k,l} = \frac{1}{\sqrt{p}}(-2|p)\left\{\begin{array}{c} 1 \\ -\mathrm{i}\end{array}\right\}\omega_p^{kl}=
(-2|p)\left\{\begin{array}{c} 1 \\ -\mathrm{i}\end{array}\right\}F_{k,l}
\end{equation}
and
\begin{equation}
\label{FourierFT}
F_{k,l}=\frac{1}{\sqrt{p}}\omega_p^{kl}
\end{equation}
is the QFT matrix.
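A numerical sketch of the $c\not\equiv 0$ formula: only the quadratic phase is kept (the overall Gauss-sum prefactor, an assumption we drop here, cancels in the conjugation), and we verify unitarity, the QFT special case, and the intertwining relation~(\ref{metaplectic}):

```python
import numpy as np
from numpy.linalg import matrix_power

p = 7
omega = np.exp(2j * np.pi / p)
inv2 = (p + 1) // 2
k = np.arange(p)

Q = np.diag(omega ** k)
P = np.roll(np.eye(p), 1, axis=0)

def J(r, s):
    return (omega ** ((r * s * inv2) % p)
            * matrix_power(P, r % p) @ matrix_power(Q, s % p))

def U(A):
    """Quadratic-phase part of the metaplectic matrix for c != 0 mod p,
    up to an overall (Gauss-sum) phase."""
    a, b, c, d = A
    assert (a * d - b * c) % p == 1 and c % p != 0
    ic2 = pow(2 * c, -1, p)                  # (2c)^{-1} mod p
    expo = (-(a * k[:, None] ** 2 - 2 * np.outer(k, k)
              + d * k[None, :] ** 2) * ic2) % p
    return omega ** expo / np.sqrt(p)

# U(F) reproduces the QFT matrix
assert np.allclose(U((0, -1, 1, 0)), omega ** np.outer(k, k) / np.sqrt(p))

# unitarity and the intertwining U(A) J_{r,s} U(A)^+ = J_{(r,s)A^{-1}}
a, b, c, d = 2, 1, 3, 2                      # det = 1 mod 7
UA = U((a, b, c, d))
assert np.allclose(UA @ UA.conj().T, np.eye(p))
for r, s in [(1, 0), (0, 1), (2, 3)]:
    r2, s2 = (r * d - s * c) % p, (s * a - r * b) % p   # (r,s) A^{-1}
    assert np.allclose(UA @ J(r, s) @ UA.conj().T, J(r2, s2))
```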
The representation given in eq.~(\ref{exactirrepA}) is reducible: It is decomposed into two, irreducible, components~\cite{balian_itzykson}:
\begin{equation}
\label{irrepAsibspaces}
U({\sf A})_\mathrm{L,R} = U({\sf A})\frac{I_{p\times p}\pm S}{2}
\end{equation}
where
\begin{equation}
\label{projS}
S=F^2
\end{equation}
From eqs.~(\ref{FourierS}) and~(\ref{FourierU}) we deduce that
\begin{equation}
\label{Fourier4}
U({\sf F}^4) = F^4 =I_{p\times p}
\end{equation}
and, thus, the eigenvalues of $S$ are $\pm 1$, which label the chiralities of the two irreducible components. The dimension of the corresponding eigenspaces is $(p\pm 1)/2$.
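These statements are immediate to check numerically: $S=F^2$ is the parity permutation $S_{k,l}=\delta_{k+l,0\,\mathrm{mod}\,p}$, whose $\pm 1$ eigenspaces have dimensions $(p\pm 1)/2$:

```python
import numpy as np

p = 11
omega = np.exp(2j * np.pi / p)
k = np.arange(p)
F = omega ** np.outer(k, k) / np.sqrt(p)   # QFT matrix

S = F @ F
parity = np.eye(p)[(-k) % p]               # S_{k,l} = delta_{k+l, 0 mod p}
assert np.allclose(S, parity)              # S is the parity operator
assert np.allclose(S @ S, np.eye(p))       # hence F^4 = 1

# chirality eigenspaces of S have dimensions (p +/- 1)/2
evals = np.linalg.eigvalsh(parity)
assert np.isclose(evals, 1).sum() == (p + 1) // 2
assert np.isclose(evals, -1).sum() == (p - 1) // 2
```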
It is possible to generalize the metaplectic representation from the discretization $N=p$ prime to any integer $N$ by noting that, if $N$ is composite, $N=N_1 N_2$, with $N_1,N_2$ coprime, then for
every ${\sf A}\in SL(2,\mathbb{Z}_{N_1N_2})$ we obtain
\begin{equation}
\label{Afactorization}
{\sf A} = {\sf A}_1\cdot{\sf A}_2
\end{equation}
with ${\sf A}_i\in SL(2,\mathbb{Z}_{N_i})$, $i=1,2$.
It can be proved that the unitary matrix $U({\sf
A})$ of eq.~(\ref{exactirrepA}) ``inherits'' this property as follows
\begin{equation}
\label{Ufactorization}
U({\sf A}) = U({\sf A}_1)\otimes U({\sf A}_2)
\end{equation}
The $N_1N_2\times N_1N_2$ matrix $U({\sf A})$ decomposes into a tensor product
of an $N_1\times N_1$ and an $N_2\times N_2$ unitary matrix. This leads to an
acceleration of the computation of the action of the quantum map $U({\sf A})$
on the Hilbert space of states, ${\cal H}_{N_1N_2}$, from $O(N^2)$ to $O(N\ln
N)$ operations~\cite{fastqmaps}.
Thus, the building blocks of FQM are the Hilbert spaces of dimension
$N=p^n$, with $p$ an odd prime and $n\in\mathbb{N}$.
\section{Bulk and boundary coherent states}\label{CohStates}
The coherent state method selects a state invariant under the stability group as the ground state, $|0\rangle_{\sf D}$~\cite{Grosse}.
For the bulk the stability group is the scaling group, ${\sf D}$.
The corresponding quantum map, $U({\sf D}(\lambda))$, for $\lambda\in\mathbb{F}_p^\ast$, is the circulant matrix:
\begin{equation}
\label{circulant_matrix}
U({\sf D}(\lambda))_{k,l} = \left(-\lambda|p\right)\left\{\begin{array}{c} 1 \\ -1\end{array}\right\}
\delta_{k,\lambda^{-1}l}
\end{equation}
We choose as ground state a common eigenvector of $U({\sf D}(\lambda))$ for all $\lambda$, namely
\begin{equation}
\label{groundstate0D}
|0\rangle_{\sf D} = \frac{1}{\sqrt{p}}\left(\underbrace{1,1,\ldots,1}_{p}\right)
\end{equation}
The coherent states for AdS$_2[p]$ are now defined as
\begin{equation}
\label{coherent_states}
|h\rangle = U({\sf h})|0\rangle_{\sf D}
\end{equation}
for all $h\in\mathrm{AdS}_2[p]$,
with $U({\sf h})$ the $p\times p$ unitary matrix constructed in eqs.~(\ref{exactirrepA}) and~(\ref{c=0_U}).
We notice here that the vacuum $|0\rangle_{\sf D}$ is annihilated by the projector $P_-=(I-S)/2$. Consequently, it belongs to the image of the projector $P_+=(I+S)/2$, which has dimension $p_+\equiv (p+1)/2$.
The matrix $S$ commutes with all the matrices $U({\sf h})$, which implies that the coherent states $|h\rangle$ belong
to the eigenspace of $P_+$, the positive chirality eigenspace. It is possible to construct coherent states belonging to the orthogonal eigenspace, of dimension $p_-=(p-1)/2$, by choosing the common eigenstate of the dilatation group among the eigenvectors of $S$ with opposite chirality.
We can use the parametrization of the cosets by rotations and translations in order to obtain explicit expressions for the coherent states, $|h\rangle$:
For
\begin{equation}
\label{hcoset}
h(\phi,\mu)=\left(\begin{array}{cc} a & -b \\ b & a\end{array}\right)
\left(\begin{array}{cc} 1 & -\mu\\ 0 & 1\end{array}\right)
\end{equation}
with $a^2+b^2\equiv 1\,\mathrm{mod}\,p$, using eqs.~(\ref{bulkboundLC}) we find the relation between
$a,b,\mu$ and $x_\pm$, namely
\begin{equation}
\label{bulkboundLCxpm}
\begin{array}{c}
\displaystyle
x_+ = \frac{a}{b}\\
\displaystyle
x_- = \frac{a-b\mu}{a\mu+b}
\end{array}
\end{equation}
In components:
\begin{equation}
\label{coherent_states_h}
\left\langle k|h\right\rangle =
\frac{1}{\sqrt{p}}\left((a-b\mu)|p\right)
\omega_p^{\frac{b+\mu a}{2(a-b\mu)} k^2} =
\frac{1}{\sqrt{p}}\left((a-b\mu)|p\right)
\omega_p^{\frac{k^2}{2x_-}}
\end{equation}
where $k=0,1,2,\ldots,p-1$.
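Equation~(\ref{coherent_states_h}) can be checked numerically. The sketch below builds $|h\rangle$ as $U({\sf R})U({\sf T})|0\rangle_{\sf D}$, up to a global phase (the Legendre-symbol prefactor is dropped, as it cancels in the relative phases), and compares against $\omega_p^{k^2/(2x_-)}$:

```python
import numpy as np

p = 7
omega = np.exp(2j * np.pi / p)
k = np.arange(p)
inv2 = pow(2, -1, p)

a, b, mu = 2, 2, 3                        # a^2 + b^2 = 8 = 1 mod 7
assert (a * a + b * b) % p == 1

# U(T(mu)) from the c = 0 formula (diagonal quadratic phase), up to phase
UT = np.diag(omega ** ((mu * inv2 * k * k) % p))
# U(R(a,b)) from the c != 0 formula (here c = b, d = a), up to phase
ic2 = pow(2 * b, -1, p)
UR = omega ** ((-(a * k[:, None] ** 2 - 2 * np.outer(k, k)
                 + a * k[None, :] ** 2) * ic2) % p) / np.sqrt(p)

vac = np.ones(p) / np.sqrt(p)             # |0>_D, dilatation-invariant
h_state = UR @ (UT @ vac)                 # |h>, up to a global phase

# x_- = (a - b mu)/(a mu + b) mod p; relative phases are omega^{k^2/(2 x_-)}
x_minus = (a - b * mu) * pow(a * mu + b, -1, p) % p
expected = omega ** ((k * k * pow(2 * x_minus % p, -1, p)) % p)
assert np.allclose(h_state / h_state[0], expected / expected[0])
```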
The coherent states $|h\rangle$ of the bulk can therefore be parametrized in terms of $x_\pm$; they will be denoted by $|x_+,x_-\rangle$.
These definitions imply the classical transformation of coherent states under the isometry group, namely
\begin{equation}
\label{classicaltransfcohstates}
U({\sf A})|h\rangle = |{\sf A}h\rangle
\end{equation}
These states form an overcomplete set of normalized states with very useful properties:
\begin{itemize}
\item
Resolution of the identity:
\begin{equation}
\label{resol_ident}
\frac{1}{d}\sum_{h}|h\rangle\langle h| = P_+
\end{equation}
where $d$ is defined by
\begin{equation}
\label{resol_ident_state}
d = \sum_{h} \left| \left\langle h| h_1\right\rangle\right|^2
\end{equation}
for any state $|h_1\rangle$.
The above identity is based on the irreducibility of the metaplectic representation on the subspace of positive chirality.
\item Propagator property for the function
\begin{equation}
\label{propagator}
\Delta(h_1,h_2) = \left\langle h_2|h_1\right\rangle
\end{equation}
This function has the property of a ``reproducing kernel'' (propagator)
\begin{equation}
\label{reprokernel}
\Delta(h_1,h_2)=\frac{1}{d}\sum_h\Delta(h_1,h)\Delta(h,h_2)
\end{equation}
and is invariant under the isometry group.
\item For a general state $|\psi\rangle$ we have
\begin{equation}
\label{genstatedecomp}
|\psi\rangle = \frac{1}{d}\sum_h |h\rangle\langle h|\psi\rangle
\end{equation}
\item The symbol of operators: To an operator $\widehat{A}$ we associate a scalar function
$\widetilde{A}(h_1,h_2)$ with
\begin{equation}
\label{opsymbol}
\widetilde{A}(h_1,h_2)\equiv \langle h_2|\widehat{A}|h_1\rangle
\end{equation}
so that
\begin{equation}
\label{opsymbolinv}
\widehat{A}=\frac{1}{d^2}\sum_{h_1,h_2}\widetilde{A}(h_1,h_2)|h_2\rangle\langle h_1|
\end{equation}
For two operators $\widehat{A},\widehat{B}$ we assign as symbol of their product the expression
\begin{equation}
\label{symbolprod}
\widetilde{AB}(h_1,h_2)=\frac{1}{d}\sum_h \widetilde{A}(h_1,h)\widetilde{B}(h,h_2)
\end{equation}
\end{itemize}
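The resolution of identity can be verified by brute force: summing $|h\rangle\langle h|$ over the orbit of the full group (which overcounts each coset uniformly, by the order of the stability group) must give a multiple of $P_+$, by Schur's lemma. A numpy sketch for $p=7$, with all unitaries defined only up to global phases (these cancel in $|h\rangle\langle h|$):

```python
import numpy as np

p = 7
omega = np.exp(2j * np.pi / p)
k = np.arange(p)
inv2 = pow(2, -1, p)

def U(a, b, c, d):
    """Metaplectic matrix up to a global phase."""
    assert (a * d - b * c) % p == 1
    if c % p != 0:
        ic2 = pow(2 * c % p, -1, p)
        return omega ** ((-(a * k[:, None] ** 2 - 2 * np.outer(k, k)
                            + d * k[None, :] ** 2) * ic2) % p) / np.sqrt(p)
    # c = 0 case: U ~ omega^{-(ab/2)k^2} delta_{k, a^{-1} l}
    Um = np.zeros((p, p), dtype=complex)
    Um[k, (a * k) % p] = omega ** ((-(a * b % p) * inv2 * k * k) % p)
    return Um

vac = np.ones(p) / np.sqrt(p)                   # |0>_D
M = np.zeros((p, p), dtype=complex)
n = 0
for a in range(p):
    for b in range(p):
        for c in range(p):
            for d in range(p):
                if (a * d - b * c) % p != 1:
                    continue
                v = U(a, b, c, d) @ vac
                M += np.outer(v, v.conj())
                n += 1

assert n == p * (p * p - 1)                     # |SL(2,F_p)| = p(p^2 - 1)
S = np.eye(p)[(-k) % p]                         # parity operator
P_plus = (np.eye(p) + S) / 2
coeff = M.trace().real / P_plus.trace()
assert np.allclose(M, coeff * P_plus)           # resolution of identity on V_+
```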
The local quantum observables that have ``nice'' transformation properties are the $p\times p$ hermitian matrices $Q_I(h)$, such that
\begin{equation}
\label{Qvectobs}
U(\textsf{A})Q_I(h) U(\textsf{A})^\dagger=R_{IJ}Q_J(\textsf{A}h),\, I,J=1,\ldots,\mathrm{dim}\,R
\end{equation}
For scalar observables (scalar fields) we must have
\begin{equation}
\label{Qscalobs}
U(\textsf{A})Q(h) U(\textsf{A})^\dagger= Q(\textsf{A}h)
\end{equation}
The simplest ones are the (pure state) density matrices,
\begin{equation}
\label{density_matrix}
\rho(h) = |h\rangle\langle h|
\end{equation}
that can be used as a basis for measurement.
For any scalar function, $f(h)$, on AdS$_2[p]$, we can construct the quantum observable
\begin{equation}
\label{Qobs}
\mathcal{O}(f)\equiv \sum_{h}f(h)|h\rangle\langle h|
\end{equation}
which is hermitian if $f(\cdot)$ is real.
The one--time--step evolution of these observables is given by
\begin{equation}
\label{1tstepevol}
\mathcal{O}_{n+1}(f) = U(\textsf{A})\mathcal{O}_n(f)U(\textsf{A})^\dagger
\end{equation}
with initial condition $\mathcal{O}_{n=0}(f)=\mathcal{O}(f)$.
We may write this relation in the following way:
\begin{equation}
\label{1tstepevol1}
\mathcal{O}_{n+1}(f) = \mathcal{O}_n\left(f\circ \textsf{A}^{-1}\right)
\end{equation}
The set of time correlation functions for these observables defines FQM on AdS$_2[p]$:
\begin{equation}
\label{tcorrfuns}
G(t_1,t_2,\ldots,t_n|f_1,f_2,\ldots,f_n) =
_{\sf D }\hskip-0.16truecm\left\langle 0|
\mathcal{O}_{t_1}(f_1)\mathcal{O}_{t_2}(f_2)\ldots \mathcal{O}_{t_n}(f_n)|0\right\rangle_{\sf D}
\end{equation}
We shall present the bulk/boundary correspondence for the quantum map dynamics using the parametrization of both spaces by the light cone variables, $x_\pm$, of the stereographic projection. The coherent state $\left |h\right\rangle\equiv\left|x_+,x_-\right\rangle$ and the action of an $SL(2,\mathbb{F}_p)$ group element $\textsf{A}$ will be lifted to the action of the unitary operator, $U({\sf A})$ as follows:
\begin{equation}
\label{group_actionbB}
U({\sf A})\left|x_+,x_-\right\rangle = \left|\frac{a x_+ +b}{c x_+ +d},\frac{dx_- -c}{a-bx_-}\right\rangle
\end{equation}
Let us now pass to the construction of the coherent states and the observables on the boundary.
The boundary ${\sf Bd}[p]$ of the discrete space-time $\mathrm{AdS}_2[p]$ is defined, in analogy with the continuum case, by conformal compactification. In light--cone coordinates, $(x_+,x_-)$, of the projective plane, it is described as the set of points
\begin{equation}
\label{BdN}
1+x_+x_-\equiv 0\,\mathrm{mod}\,p
\end{equation}
For every $x_+\in\mathbb{F}_p^\ast$, we have $x_-\equiv -x_+^{-1}\,\mathrm{mod}\,p$. We must also add the points at ``infinity'', $x_+=0,x_-=\infty$ and $x_+=\infty,x_-=0$. So the boundary comprises the $p+1$ points
\begin{equation}
\label{BdNinfty}
{\sf Bd}[p] =\left\{
\left.\left(x_+,-x_+^{-1}\right)\right|
x_+\in\mathbb{F}_p^\ast
\right\}
\cup
\left\{(\infty,0),(0,\infty)\right\}
\end{equation}
The boundary set is invariant under the M\"obius group, $SL(2,\mathbb{F}_p)$. It is the coset space $SL(2,\mathbb{F}_p)/\mathfrak{B}_p$, where $\mathfrak{B}_p$ is the Borel subgroup that preserves the ``point at infinity'', $q=(x_+=\infty,x_-=0)$:
\begin{equation}
\label{borelsg}
\mathfrak{B}_p=\left\{
\left.
\left(\begin{array}{cc}
\lambda & b\\ 0 & \lambda^{-1}
\end{array}\right)
\right|
\lambda\in\mathbb{F}_p^\ast,b\in\mathbb{F}_p
\right\}
\end{equation}
In the $p-$dimensional Hilbert space of the metaplectic representation the quantum maps, corresponding to $\mathfrak{B}_p$, are given as
\begin{equation}
\label{Uborelsg}
U\left[
\left(\begin{array}{cc}
\lambda & b\\ 0 & \lambda^{-1}
\end{array}\right)
\right]_{k,l}=
\left(-\lambda|p\right)\left\{\begin{array}{c} 1 \\ -1\end{array}\right\}
\omega_p^{-\frac{\lambda b}{2}k^2}\delta_{k,\lambda^{-1}l}
\end{equation}
The ``vacuum'' on which we define the coherent states of the boundary must be a common eigenvector of the stability group $\mathfrak{B}_p$~\cite{coherent_states}. We can check from~(\ref{Uborelsg}) that there is only one such eigenvector, $|0\rangle_q$:
\begin{equation}
\label{vacuum}
\left\langle k| 0\right\rangle_q = \delta_{k,0}
\end{equation}
The subgroup of $SL(2,\mathbb{F}_p)$ that acts transitively on the boundary is generated by the rotation subgroup $SO(2,\mathbb{F}_p)$:
\begin{equation}
\label{boundary_transit}
SO(2,\mathbb{F}_p)=\left\{
\left.
\left(\begin{array}{cc} a & -b \\ b & a\\ \end{array}\right)
\right|
a^2+b^2\equiv 1\,\mathrm{mod}\,p
\right\}
\end{equation}
This is an abelian, cyclic subgroup, of order $p+1$ if $p=4k-1$ and of order $p-1$ if $p=4k+1$. The generator of this cyclic group can be found by random search~\cite{athanasiu_floratos}.
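The order of $SO(2,\mathbb{F}_p)$ can be checked by brute force for small primes. A minimal sketch (`so2_order` simply counts solutions of $a^2+b^2\equiv 1\,\mathrm{mod}\,p$, i.e. the group elements):

```python
def so2_order(p):
    """Count matrices ((a,-b),(b,a)) over F_p with a^2 + b^2 = 1 mod p."""
    return sum(1 for a in range(p) for b in range(p)
               if (a * a + b * b) % p == 1)

assert so2_order(7) == 8   # p = 7 = 4*2 - 1  ->  order p + 1
assert so2_order(5) == 4   # p = 5 = 4*1 + 1  ->  order p - 1
```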
The discrete coherent states of the boundary ${\sf Bd}[p]$ can now be defined as
\begin{equation}
\label{discrete_coherent_states}
| x_+\rangle = U({\sf R}(x_+))|0\rangle_q
\end{equation}
From~(\ref{exactirrepA}) we obtain the expression for the ground state wave function $\langle k|x_+\rangle$ as
\begin{equation}
\label{groundstate}
\langle k|x_+\rangle = \frac{1}{\sqrt{p}}(-2b|p)\left\{\begin{array}{c} 1\\ \mathrm{i}\end{array}\right\}
\omega_p^{-\frac{x_+k^2}{2}}
\end{equation}
when $x_+\in\mathbb{F}_p^\ast$. For the two additional points of the boundary, $x_+=\infty,x_-=0$ (then $b=0$) and $x_+=0,x_-=\infty$ (then $a=0$), the corresponding coherent states are:
\begin{itemize}
\item
When $b=0$, we need, in principle, to distinguish two cases: $a=+1$ and $a=-1$--but, since the action of the group is projective, these lead to the same state,
$|x_+=\infty\rangle = |0\rangle_q$, whence $\langle k|x_+\rangle = \delta_{k,0}$.
\item
For $a=0$, we have $b=-1$ and $x_+=0$. The state $|x_+=0\rangle$ leads to the constant wavefunction
\begin{equation}
\label{xplusinfty}
\langle k| x_+\rangle = \frac{1}{\sqrt{p}}\left(2|p\right)\left\{\begin{array}{c} 1\\ \mathrm{i}\end{array}\right\}
\end{equation}
for all $k=0,1,\ldots,p-1$.
\end{itemize}
In total we get $p+1$ states, matching the number of points on the boundary.
In analogy with the bulk coherent states we observe that the ground state $|0\rangle_q$ of the boundary is annihilated by the projector $(I-S)/2$, as are all the coherent states $|x_+\rangle$.
Thus, these coherent states live in the eigenspace of $P_+=(I+S)/2$, which is $(p+1)/2-$dimensional. They form an overcomplete set of states and display all the expected features of coherent states.
Let us now turn our attention to the boundary observables. We construct the following class of operators, which have nice transformation properties under the M\"obius conformal group and form a basis for measurement.
Using the magnetic translations, $J_{r,s}$, of the Heisenberg--Weyl group, where $r,s=0,1,2,\ldots,p-1$, we define the operators
\begin{equation}
\label{Oxplus}
\mathcal{O}(x_+)=\frac{1}{p}\sum_{s=0}^{p-1} J_{s(1,-x_+)}
\end{equation}
with $x_+=0,1,\ldots,p-1$. Their matrix elements are
\begin{equation}
\label{Oxplusmatrixels}
\left[\mathcal{O}(x_+)\right]_{k,l}=\frac{1}{p}\omega_p^{-\frac{x_+(k^2-l^2)}{2}}
\end{equation}
These operators are projectors:
\begin{equation}
\label{Oxplusproj}
\begin{array}{l}
\displaystyle
\mathcal{O}(x_+)^2 = \mathcal{O}(x_+)\\
\displaystyle
\mathcal{O}(x_+)^\dagger = \mathcal{O}(x_+)
\end{array}
\end{equation}
and they transform conformally (for all ${\sf A}\in SL(2,\mathbb{Z}_N)$):
\begin{equation}
\label{conformalOxplus}
U({\sf A})\mathcal{O}(x_+)U({\sf A})^\dagger = \mathcal{O}\left(\frac{ax_+ +b}{cx_+ +d}\right)
\end{equation}
For example, under the Fourier transform,
$$
{\sf S}=\left(\begin{array}{cc} 0 & -1 \\ 1 & 0\\ \end{array}\right)
$$
we find that
\begin{equation}
\label{FourierT}
U({\sf S})\mathcal{O}(x_+)U({\sf S})^\dagger = \mathcal{O}\left(-\frac{1}{x_+}\right)
\end{equation}
We use eq.~(\ref{FourierT}) to {\em define}
\begin{equation}
\label{Oxplusinfty}
{\mathcal O}(x_+=\infty)\equiv U({\sf S}){\mathcal O}(0)U({\sf S})^\dagger = |0\rangle_q { }_q\langle 0|
\end{equation}
In the notation of eq.~(\ref{discrete_coherent_states}) the state $|\infty\rangle$ is the ground state, $|0\rangle_q$.
The operators $\mathcal{O}(x_+)$ have the nice property that they are projectors on the discrete coherent states,
$|x_+\rangle$.
One can, indeed, check that
\begin{equation}
\label{Oxplus1}
\mathcal{O}(x_+)=|x_+\rangle\langle x_+|
\end{equation}
This holds for all $x_+=0,1,2,\ldots,p-1,\infty$.
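The projector properties~(\ref{Oxplusproj}) and the rank-one identity~(\ref{Oxplus1}) are easy to verify numerically from the matrix elements~(\ref{Oxplusmatrixels}). A minimal sketch, assuming the standard convention $\omega_p=e^{2\pi i/p}$ and interpreting the division by $2$ in the exponent as multiplication by the inverse of $2\,\mathrm{mod}\,p$:

```python
import numpy as np

p = 7
inv2 = pow(2, -1, p)              # "1/2" as the inverse of 2 mod p
omega = np.exp(2j * np.pi / p)
x = 3                             # an arbitrary x_+ in F_p^*

# [O(x_+)]_{k,l} = (1/p) omega^{-x_+ (k^2 - l^2)/2}
O = np.array([[omega ** ((-x * inv2 * (k * k - l * l)) % p)
               for l in range(p)] for k in range(p)]) / p

assert np.allclose(O @ O, O)              # idempotent
assert np.allclose(O, O.conj().T)         # Hermitian
assert np.isclose(np.trace(O).real, 1.0)  # rank one: O = |x_+><x_+|
```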
The boundary observables can, therefore, be expressed in the $\mathcal{O}(x_+)$ basis:
To any function, $f:{\sf Bd}[N]\to \mathbb{C}$, we assign the observable
\begin{equation}
\label{Of}
\mathcal{O}(f)=\sum_{x_+}f(x_+)\mathcal{O}(x_+)
\end{equation}
Their transformation properties are
\begin{equation}
\label{Oftransform}
\begin{array}{l}
\displaystyle
U({\sf A}) \mathcal{O}(f)U({\sf A})^\dagger =
\mathcal{O}(f\circ{\sf A}^{-1})
\end{array}
\end{equation}
In this way we may establish contact between modular functions and forms of the finite M\"obius group $SL(2,\mathbb{F}_p)$, and conformal operators of definite conformal weight~\cite{terras,AxFlNic}.
Once we have defined appropriate conformal operators, it is possible to calculate their correlation functions
(and the identities these satisfy) in any state. The two-- and three--point functions can be determined from conformal invariance; the higher--point functions depend strongly on the quantum dynamics~\cite{dAFF,chamonetal}.
In the next section we shall reconstruct bulk observables, when the boundary observables are known~\cite{Hamilton:2006az,Papadodimas:2012aq}.
\section{AdS$_2[N]$/CFT$_1[N]$ coherent state holography}\label{Hologcat}
In this section we present a new AdS$_2$/CFT$_1$ correspondence, AdS$_2[p]$/CFT$_1[p]$,
based on the coherent states of positive chirality, in the bulk and the boundary. A similar method can be applied to the subspace of negative chirality.
By CFT$_1[p]$ we understand the quantum mechanics on the discrete, projective line, $\mathbb{RP}_p^1$, defined by the evolution operator $U({\sf A})$, for ${\sf A}\in SO(2,\mathbb{F}_p)$. In analogy to
the conformal quantum mechanics of ref.~\cite{dAFF}, the generator of this group corresponds to their ``second'' Hamiltonian, which has discrete spectrum. From the point of view of radial observers of the AdS$_2$ near horizon geometry of an extremal black hole, this evolution corresponds to that of freely infalling observers~\cite{Strominger,townsendetal}.
To motivate the use of coherent states for the correspondence we notice the following:
The basic strength of the AdS/CFT correspondence relies on two important facts. First, the conformal boundary completion of the AdS space-time selects precisely those boundary observables that are appropriate for reconstructing the observables in the bulk--and this is holography. The second is more constraining: AdS/CFT holography satisfies a new uncertainty principle, the IR/UV connection, which is a stringy effect. The higher the energy of a boundary probe used to localize a state, the larger the distance from the boundary at which the dual bulk excitation is resolved. In the language of the stringy uncertainty principle, the higher the energy of the closed string state in the bulk, the larger the average length of the string: $\Delta x_+ \Delta x_-\geq 1/\alpha'$.
In the light cone coordinates, $(x_+, x_-)$, on AdS$_2$, $x_+$ is parallel to the boundary and $x_-$ is a measure of the distance from it. Strictly speaking, the appropriate quantity is $z\equiv 1+x_+x_-$, so, for fixed $x_+$, $x_-\to -1/x_+$, when $z\to 0$.
In section~\ref{LCWeyl} we observed that the variables $(x_+,x_-)$ are appropriate holographic coordinates for the bulk, since they transform under the isometry group by M\"obius, conformal, transformations:
\begin{equation}
\label{Moebius_conformal}
\begin{array}{l}
\displaystyle
x_+\to {\sf A} x_+\equiv \frac{ax_+ + b}{cx_+ +d}\\
\displaystyle
x_-\to \left[{\sf A}^{-1}\right]^\mathrm{T} x_-\equiv\frac{dx_- -c}{-bx_- +a}
\end{array}
\end{equation}
Notice that $\left[{\sf A}^{-1}\right]^\mathrm{T}$ is the Fourier transform of ${\sf A}\in SL(2,\mathbb{Z}_N)$,
\begin{equation}
\label{Fourier_group_transform}
\left[{\sf A}^{-1}\right]^\mathrm{T} = \varepsilon{\sf A}\varepsilon^\mathrm{T}
\end{equation}
So $(x_+,x_-)$ are conjugate variables, similar to position and momentum. Indeed, for AdS$_2$, they represent time and length, promoting AdS$_2$ into a stringy phase space. To saturate the stringy uncertainty principle we must employ the corresponding coherent states. In the bulk they have been defined as $|x_+,x_-\rangle$, and on the boundary as $|x_+\rangle$; in both cases the coordinates denote the center of the coherent state.
The bulk coherent states are the discrete analogs of the well--known wavelets, used in signal processing~\cite{Mallat}, which determine the spectrum of scales of a given signal as a function of position.
The boundary coherent states are, also, the discrete analogs of the usual coherent states of the harmonic oscillator (albeit on a different vacuum state)~\cite{balian_itzykson,athanasiu_floratos}.
We shall describe now the reconstruction method of the bulk observables (states) from appropriate boundary ones, using the wavelet representation and its properties.
Let us choose, for any value of the variable $x_+$, an independent variable, $x_-$, which takes values on the projective line, $\mathbb{RP}_N^1$, and define the state
\begin{equation}
\label{xtildeminus}
|\widetilde{x}_-\rangle\equiv F|x_-\rangle
\end{equation}
with $F$ the finite Fourier transform~(\ref{FourierFT}). Since $|x_-\rangle$ is a boundary coherent state, we deduce that
\begin{equation}
\label{conformalFourier}
\widetilde{x}_-=-\frac{1}{x_-}
\end{equation}
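Relation~(\ref{conformalFourier}) can be checked numerically: the finite Fourier transform maps the coherent state centered at $x_-$ to the one centered at $-1/x_-\,\mathrm{mod}\,p$, up to a global (Gauss-sum) phase. A minimal sketch, assuming the DFT convention $F_{l,k}=\omega_p^{lk}/\sqrt{p}$ and dropping the $k$-independent quadratic-residue phases of the wavefunctions (they do not affect the overlap modulus):

```python
import numpy as np

p = 7
inv2 = pow(2, -1, p)
omega = np.exp(2j * np.pi / p)

def coh(x):
    # boundary coherent state <k|x>, up to a global phase
    return omega ** np.array([(-x * inv2 * k * k) % p
                              for k in range(p)]) / np.sqrt(p)

# finite Fourier transform, F_{l,k} = omega^{lk} / sqrt(p)
F = omega ** np.outer(np.arange(p), np.arange(p)) / np.sqrt(p)

x = 3
xt = (-pow(x, -1, p)) % p          # -1/x mod p
overlap = abs(np.vdot(coh(xt), F @ coh(x)))
assert np.isclose(overlap, 1.0)    # F|x_-> = |-1/x_->, up to a phase
```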
In section~\ref{CohStates} we constructed the chiral scalar operators ${\mathcal O}(x_+)$. It is obvious that the scalar operator,
\begin{equation}
\label{Oxminus}
\widetilde{{\mathcal O}}(x_-)\equiv{\mathcal O}(\widetilde{x}_-)
\end{equation}
has conjugated transformation properties, i.e.
\begin{equation}
\label{Oxminuschirality}
U({\sf A})\widetilde{{\mathcal O}}(x_-)U({\sf A})^\dagger = \widetilde{\mathcal O}\left(
\left[{\sf A}^{-1}\right]^{\mathrm{T}}x_-
\right)
\end{equation}
for any ${\sf A}\in SL(2,\mathbb{F}_p)$.
We observe now that the composite operator,
\begin{equation}
\label{compositeO}
{\mathcal O}(x_+,x_-)\equiv {\mathcal O}(x_+)\widetilde {\mathcal O}(x_-)
\end{equation}
is a scalar operator in the bulk. Indeed,
\begin{equation}
\label{bulk_scalar}
U({\sf A}){\mathcal O}(x_+,x_-)U({\sf A})^\dagger =
{\mathcal O}({\sf A}x_+,\left[{\sf A}^{-1}\right]^{\mathrm{T}}x_-)
\end{equation}
We shall use these operators to reconstruct Hermitian bulk scalar operators, so we must symmetrize the
product in~(\ref{compositeO}).
On the boundary, the operators ${\mathcal O}(x_+,x_-=-1/x_+)={\mathcal O}(x_+)$.
The reconstruction of the bulk operators from boundary data will be described next.
The bulk/boundary correspondence in our construction is based on the fact that the Hilbert space of the bulk coincides with the Hilbert space of the boundary and carries the positive chirality component of the metaplectic
representation of $SL(2,\mathbb{F}_p)$. This places constraints on the algebra of observables on both sides of the correspondence.
Since both the bulk and boundary coherent states are overcomplete systems in the eigenspace of $P_+$, we get the relation
\begin{equation}
\label{bulkboundcohstat}
|x_+,x_-\rangle = \frac{1}{d}\sum_{y_+}K(x_+,x_-|y_+)|y_+\rangle
\end{equation}
where the bulk/boundary propagator, $K(x_+,x_-|y_+)$, can be explicitly calculated:
\begin{equation}
\label{bulkboundprop}
K(x_+,x_-|y_+) = \langle y_+|x_+,x_-\rangle
\end{equation}
From eqs.~(\ref{coherent_states_h}) and~(\ref{groundstate}) we find that
\begin{equation}
\label{Kxplusxminusyplus}
K(x_+,x_-|y_+) = \left((a-b\mu)|p\right)(-2b'|p)\left(\left.\frac{1}{2}\left(y_++\frac{1}{x_-}\right)\right|p\right)
\end{equation}
In this expression
\begin{equation}
\label{bulk_data}
\begin{array}{l}
\displaystyle
a=\frac{x_+}{\sqrt{x_+^2+1}} \\
\displaystyle
b = \frac{1}{\sqrt{x_+^2+1}}\\
\displaystyle
\mu=\frac{x_+-x_-}{1+x_+x_-}
\end{array}
\end{equation}
for the bulk coherent states and
\begin{equation}
\label{boundary_data}
\begin{array}{l}
\displaystyle
a' = \frac{2y_+}{1+y_+^2}\\
\displaystyle
b' = \frac{1-y_+^2}{1+y_+^2}
\end{array}
\end{equation}
for the boundary ones.
The normalization constant, $d$, is defined through the overcompleteness relation of the boundary coherent states
\begin{equation}
\label{overcompletebound}
\sum_{y_+}|y_+\rangle\langle y_+| = d P_+
\end{equation}
Using eq.~(\ref{groundstate}) we find that
\begin{equation}
\label{cohstatenorm}
d\langle l |P_+| m\rangle = \sum_{y_+}\langle l | y_+\rangle \langle y_+ | m\rangle \Rightarrow d = 2
\end{equation}
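The value $d=2$ in eq.~(\ref{cohstatenorm}) can be confirmed numerically by summing the $p+1$ boundary projectors and comparing with $2P_+$. A minimal sketch, again assuming $\omega_p=e^{2\pi i/p}$ and dropping the global quadratic-residue phases of each state (they cancel inside $|y_+\rangle\langle y_+|$):

```python
import numpy as np

p = 7
inv2 = pow(2, -1, p)
omega = np.exp(2j * np.pi / p)

# |y_+> for y_+ = 0, ..., p-1 (y_+ = 0 gives the constant wavefunction)
states = [omega ** np.array([(-y * inv2 * k * k) % p for k in range(p)])
          / np.sqrt(p) for y in range(p)]
e0 = np.zeros(p, complex)
e0[0] = 1.0
states.append(e0)                  # |y_+ = infinity> = |0>_q

M = sum(np.outer(s, s.conj()) for s in states)

S = np.eye(p)[[(-k) % p for k in range(p)]]   # parity, S|k> = |-k>
P_plus = (np.eye(p) + S) / 2
assert np.allclose(M, 2 * P_plus)             # hence d = 2
```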
The range of the bulk variables, $x_\pm$, is determined by the light cone parametrization of the bulk, while the range of the boundary variable, $y_+$ runs over the projective line, $\mathbb{RP}_N^1$, i.e.
$y_+\in\{0,1,2,\ldots,p-1\}\cup\{\infty\}$.
The correspondence between bulk/boundary observables can be constructed through the relation
\begin{equation}
\label{bulkboundobs}
|x_+,x_-\rangle\langle x_+,x_-| = \frac{1}{d^2}\sum_{y_+,y_-}G(x_+,x_-|y_+,y_-){\mathcal O}(y_+,y_-)
\end{equation}
The coefficient function, $G(x_+,x_-|y_+,y_-)$, can be determined from the bulk/boundary propagator
\begin{equation}
\label{bulkboundpropG}
G(x_+,x_-|y_+,y_-) = \frac{K(x_+,x_-|y_+)K^\ast(x_+,x_-|-1/y_-)}{\langle y_+|-1/y_-\rangle}
\end{equation}
The denominator is, in fact, a boundary/boundary propagator, whereas the numerator is the product of bulk/boundary, resp. boundary/bulk propagators.
\section{Summary and conclusions }\label{Concl}
Before we summarize our results, let us review the conditions satisfied by our finite, discrete, model of the black hole near horizon geometry for the radial and temporal directions:
\begin{itemize}
\item
We managed to single out the proposed type of modular discretization in the space of all possible finite geometries, by imposing the additional condition of existence of a holographic correspondence. This is to be satisfied through the replacement of AdS$_{2}$ by AdS$_2[N]$ and of its boundary by CFT$_1[N]=\mathbb{RP}_N^1$.
\item
Indeed, the finite geometry inherits the symmetry properties of its continuous counterpart (isometry group, coset structure and its quantum representations as well as bulk-boundary correspondence) albeit in a discretized disguise.
\item
In the framework of the specific finite geometry it is very natural to choose as a model for the dynamics of probes
the isometry group elements which, interestingly, possess strongly chaotic mixing properties. They are the well-known Arnol'd cat maps, defined as M\"obius transformations on the stereographic lightcone plane.
\item
Moreover, special properties of the modular representations guarantee the factorization with respect to the
ultraviolet cut-off $N$. This is important for fast quantum information processing on the near horizon region. As we
plan to show in forthcoming works, the proposed framework is capable of providing an example of a mechanism for the saturation of the fast scrambling conjecture~\cite{susskindscramble,Barbon,Preskill_Hayden}.
\end{itemize}
In the present work we have studied the modular discretization of the AdS$_2$ geometry, AdS$_2[N]$, and the ensuing classical and quantum dynamics of probes, using generalized Arnol'd cat maps.
We have demonstrated that our toy model is successful in realizing all of the properties which are considered key ingredients for a departure from semiclassical and local physics, namely those of non-locality, chaotic dynamics and fast quantum information processing.
With the discretization parameter, $N$, which provides both an ultraviolet and an infrared cutoff, the coset space nature of the AdS$_2$ geometry ``carries over'' to the discretized level. The corresponding effective Planck constant, $\hbar=2\pi/N$, can also be identified with the noncommutativity parameter of the quantum coset geometry~\cite{Grosse}.
The strong arithmetic chaos of the Arnol'd cat map dynamics is inherited in a transparent way by the coset quantum states, which are the coherent states of $SL(2,\mathbb{Z}_N)$. It is rather interesting that there is a correspondence between the bulk and the boundary states and observables of AdS$_2[N]$; the latter belong to the discrete projective line, $\mathbb{RP}_N^1$. In a unique Hilbert space of finite dimension and given chirality, by using the overcompleteness of the corresponding coherent states of the bulk and the boundary, we provided a method to reconstruct the bulk states and observables from the corresponding boundary data. To this end we constructed the bulk--bulk, bulk--boundary and boundary--boundary propagators, which are invariant under the isometries of AdS$_2[N]$. They are given by the overlap amplitudes between the corresponding coherent states.
These propagators realize the UV/IR connection between the bulk and the boundary scales, since the corresponding coherent states saturate the string uncertainty relation
$\Delta x_+ \Delta x_-\geq 1/\alpha'$.
Our present work can be a basis for further extensions:
\begin{enumerate}
\item In the study of the AdS$_2[N]$/CFT$_1[N]$ correspondence for different representations of the discrete isometry group,
$SL(2,\mathbb{Z}_N)$~\cite{AxFlNic}. In particular, it is interesting to study the modular
discretization of the boundary conformal quantum mechanics of ref.~\cite{dAFF, chamonetal}. It requires at the group level the definition of primary operators, their dimensions, as well as their fusion algebra.
\item Since the classical Arnol'd cat maps possess factorization in the parameter $N$ and strong chaotic properties, by choosing $N=p^{n}$, where $p$ is a prime integer, we can construct the corresponding p-adic dynamics at both the classical and quantum levels. Indeed, all of our amplitudes possess factorization properties; therefore, by taking their infinite product over $n$ from 1 to infinity, it is possible to construct the corresponding p-adic amplitude~\cite{p-adics}. In recent works by Barb\'on {\em et al.}~\cite{Barbon} it has been shown that ultrametric or p-adic structures of the Hilbert space of black hole microstates, supported by specific expander graphs, guarantee the saturation of the scrambling time bound for black hole information processing~\cite{susskindscramble}.
\item Since the quantum Arnol'd cat maps possess factorization in the parameter $N$ and strong chaotic properties~\cite{Barbon}, they are also appropriate for the construction of quantum circuit models of deterministic chaos for the qubit information processing in black holes~\cite{susskindbhComp,Giddings,Preskill_Hayden,Avery,Harlow:2013tf}.
In analogy with the quantum circuit realization of Shor's factorization algorithm~\cite{Shor}, it is expected that quantum circuits for the quantum Arnol'd cat maps will provide similar (exponential) improvements over their classical factorization properties and may saturate the scrambling bound~\cite{AxFlNic}, as well.
\end{enumerate}
\vskip1.5truecm
{\bf Acknowledgements:} MA and SN acknowledge the warm hospitality of the CERN Theory Division as well as of the LPTENS. We are thankful to A. Dabholkar, M. Porrati and E. Rabinovici for discussions and their constructive comments. EF is especially grateful to L. Alvarez-Gaum\'e, S. Ferrara and I. Antoniadis for making his stay at CERN such a rewarding experience.
The research of EF is
implemented under the ``ARISTEIA-I'' action (Code no. 1612, D.654) and title ``Holographic Hydrodynamics'' of the ``operational programme
education and lifelong learning'' and is co-funded by the European Social Fund (ESF) and National Resources. The research of MA was
supported in part by the General Secretariat for Research and Technology of Greece and the European Regional Development Fund
MIS-448332-ORASY(NSRF 2007-13 ACTION, KRIPIS).
Q: My Android game cancels the payment callback when the app returns from the background after a payment, while another game completes the payment successfully. If the app is switched to the recent-apps screen and then brought back to the foreground, the payment succeeds, but switching it directly to the background cancels the payment.
How can I handle this situation so that the payment still completes after the app is sent to the background following a payment?
package part2.simple_search;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.util.List;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.en.EnglishAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;
import part2.medlars.MedlarsDocument;
import part2.medlars.MedlarsDocumentReader;
/**
* @author Stamatis Pitsios
*
* This class provides the necessary functionality to query the Lucene
* search engine.
*/
public class SimpleLuceneSearcher
{
/**
* The location of the index.
*/
private String indexLocation;
/**
* The field that we will be searching.
*/
private String searchField;
/**
* The queries that we will use for searching.
*/
private List<MedlarsDocument> queries;
/**
* The reader that will read the queries from the file.
*/
private MedlarsDocumentReader reader;
/**
* The path of the file where the results of the search will be kept.
*/
private static final String resultsPath = "medlars/RESULTS_SIMPLE.txt";
/**
* The number of the documents that we want to retrieve.
*/
private int numOfDocuments;
/**
* Constructor.
*
* @param indexLocation The folder that keeps the indexes.
* @param searchField The field to search in.
* @param queriesPath The path to the file that contains the queries.
* @param numOfDocuments The number of the documents that we want to retrieve.
*/
public SimpleLuceneSearcher(String indexLocation , String searchField , String queriesPath , int numOfDocuments)
{
this.indexLocation = indexLocation;
this.searchField = searchField;
this.reader = new MedlarsDocumentReader(queriesPath);
this.queries = reader.getDocuments();
this.numOfDocuments = numOfDocuments;
}
/**
* Searches the index created with all the queries and saves the results to a file.
*/
public void search()
{
//Useful to count how much time it took to get the answers for all the queries.
long start = System.currentTimeMillis();
try
{
System.out.println("Starting to query Lucene Engine...");
//Access the index using indexReader
IndexReader indexReader = DirectoryReader.open(FSDirectory.open(new File(this.indexLocation)));
//Create an IndexSearcher for searching in indexes.
IndexSearcher indexSearcher = new IndexSearcher(indexReader);
//Define which analyzer to use for the normalization of user's query.
Analyzer analyzer = new EnglishAnalyzer(Version.LUCENE_4_9);
//Create a query parser on the field "TEXT"
QueryParser parser = new QueryParser(Version.LUCENE_4_9 , this.searchField , analyzer);
//A string that will hold the query.
String q = "";
//The query.
Query query;
//A writer to write the results of the query to the file.
BufferedWriter bw = new BufferedWriter(new FileWriter(new File(resultsPath)));
//For each query , get the results and save them to a file.
for(MedlarsDocument doc : queries)
{
q = doc.getText();
query = parser.parse(q);
this.saveResults(bw, query, indexSearcher , doc);
}
//Close the writer.
bw.close();
//Close indexReader.
indexReader.close();
//Print the elapsed time.
long end = System.currentTimeMillis();
System.out.println("Time took to get the top " + this.numOfDocuments + " answers for all the queries is : " + (double)(end-start)/(double)1000 + " seconds.");
}
catch(Exception e)
{
e.printStackTrace();
}
}
/**
* Gets the results for the given query and saves them to a file in Trec Eval's format.
*
* @param bw The writer.
* @param query The query.
* @param indexSearcher The IndexSearcher.
* @param doc The query as a MedlarsDocument object.
*/
private void saveResults(BufferedWriter bw , Query query , IndexSearcher indexSearcher , MedlarsDocument doc)
{
try
{
//Search the index using the indexSearcher.
TopDocs results = indexSearcher.search(query, this.numOfDocuments);
ScoreDoc[] hits = results.scoreDocs;
//Save the results.
for(int i=0; i<hits.length; i++)
{
Document hitDoc = indexSearcher.doc(hits[i].doc);
bw.write(String.valueOf(doc.getId()) + " 0 " + String.valueOf(hitDoc.get("id")) + " " +String.valueOf(i+1)+ " " + String.valueOf(hits[i].score) + " STANDARD\n");
}
}
catch(Exception e)
{
e.printStackTrace();
}
}
}
{"url":"https:\/\/kiwifarms.net\/threads\/meme-technology.84839\/#post-8325485","text":"# Meme Technology\n\n#### stares at error messages\n\nkiwifarms.net\nI want the meme technology. You must give it to me. How to write stories that become the NPC. Where the NPC is my neighbour I and want free shit. For they started calling me your high-Bess now. I don't like pot it makes me schizophrenic. THE TECHNOLOGY is how to tell stories that are simile enough that people already believe them before they have heard them. Like all memes the have a Evolutionary structure. Memes that give no children will not survive and their disciple will die off in numbers times two. The power of the Vulk in what is at stake. This is now and forever the ZODIAC.\n\n\\section{Pepe is my meme.}\nWhen I think of meme I think of harmless images on the internet that are more like comics than the Scientologean. But the commic is not really meme. Whay do we copy-paste certain memes more then others, what are the mechanics that govern this behaviour is the question? Many a Pepe is ignored. But I rememebre Wagi-Kagi and many others. So it's not just \\textit{liking} that make it a meme. \\textbf{Buy GME} for instance. Would it be really called a conspiracy if it wasn't social? If everyone just did the same thing, not because it was popularly condoned, but because they wanted to. And if it's were popularly condoned, then they must have wanted to but also wanted to be safe for ridicule simultaneously. Hug-boxs is what breed these things. How to get a fresh heard of nigger caddle to go you bidding while the shit. I do not know but I want to find out. I suspect it has much to do with social clout, where clout is the quantisation of statis in social group. Commission clout metric.\n\nIf you created deep face CIA leak it would be belived. Must study this need to be tested. GIMP is free and easy to use. Those evil niggers should shut up and move back to africa where they can die in peace and squealer. 
Begin document. Classifed top secrete. With real seal you can get on the Net. Take the paper and write on them: Operation BUY IT NOWONOWONOW''. Plan to create international incedent regarding pro-isrealy palistian realtions. Plan leverages the weaponization of SSDI recipients in market manipulation to influance current UN negotiations on currany and the West Bank. \\quote{\\section{Background} It is believe by special intelance corespondent that Isreal is going to be making consssions to West Bank under assurances of increased US involmeant in the region. The US is trying to pull out per current and past admi. policy. Operations in the region have been unideal for US stand point and going forward a getter deal should be arranged. Isreal had been very clear in UN hearings that increased US involmeant would lead to lessening of West Bank Tentions. Isreal has nuclear capabilities and has been very liberal about their belief in their use. From the Isrealy perspective, there is no reason for the US to change its commitment to the regoin. \\textit{Operation BUY IT NOWONOWONOW} has been keyly tasked by the secratary general to implement new West Bank while maintaining status-quo US-Isrealy relations. \\section{Jewish Matriarchal Family Structure And Economics} ... the report continues ...} Just like that it is more easy then ever to create the stuff of memes. So it is understood to thoroughly not be a question of technology, but a question of stickyness'' of the message.\n\nPerhaps not that many of you care that the Jewes are being tricked by the CIA to who cased GME so that the Jews would police there own backyard. But the same time these stories need not be so hokey that they seem like mere conspiracies. An investigation of the Flat Earth phenomena is neeeded. To the outside observer it the investment of the mentally ill to obsess over the veracity of extra pixelated images about outter space objects. 
To anyone who can fathum taking a rocket plane to space and seeing it for themselves, the systems and ability of technology to do so is easily in the scope of plausable and likely. But those brave soles how take up the call of truth for truth's sake, it is a life long battle of epic proportions to secure the justic that is certain already in their minds. A degree of being self-removed is necessary. Your acceptance of facts makes you a liability as much as also you acceptance of mal-truth. \\textit{Trust but verify} I lawys keep in my pocket. Though could you. You alone personally afford to visit the moon. Or, at the very least a space jont though ISS like Musk and Branson. So in honestly the Flat Earther is more correct. You should not trust in the expertise that cannot be your own. Pitty noone has tried to take Flat Earthers on a Jont to ISS.\n\n\\section{Reality-isum}\nAge 60. Wage-cuck. Healthcare Education. Voted Democrat since '95. Hording tendencies in old age. Arthritis and Alzheimer's inherited. To this, \\textit{the voting public}, any rationalisation about the nation that fits with the NPC world is right on the money. Take Global Warming from two decades ago. I ember on day a few years about it being the anniversary of being told that in 20 years the damage to the environment would be irreversible. It was a cool spring (early summer) day and I was dog walking. It had been this weather about all week. Last year in this month it had been this same weather and since moving there it had been like this season every year. But even in this natural pattern the will of the people the culture and the clout it generates was to state that this cool air was actually even more confirmation of Global Warming. It is very unlikely that a '07 Buick town-can is capable of the same carbon-dioxide output that a T-rex or Brontosaurus or pterodactyl with a wing spans of two of these Buick end to end would ever accomplish. 
In nature it's unbelievable but in the memes it's correctness.\n\nHow to tell people what they already know but need works to say it?\n\n... a study of the target's culture may be required.\n\n#### Andy Bandy Man\n\nkiwifarms.net\nI didn't know you worked with NLP\n\n#### The Jumping Dwarf\n\n##### Aye, me bottle o' scrumpy\nkiwifarms.net\nOkay, here's what you need to do, OP.\n\nStep 1: Cover yourself in oil.\n\n#### Red Hood\n\n##### Vote for me\nkiwifarms.net\nThe virgin meme technology vs the Chad Meme Bean Machine\n\n#### StreetGangsta\n\n##### Hell's Gate Arrested - And Shine Heaven Now\nkiwifarms.net\nbruh do you need stroke medicine\n\n#### Spooky Bones\n\n##### I am a deceitful person. Be wary of my intentions!\nTrue & Honest Fan\nkiwifarms.net\nBased schizophrenic LaTeX tags\n\nkiwifarms.net"}
| null | null |
Skip to main content Skip to local navigation Skip to global navigation Skip to footer
Ontario Human Rights Commission
OHRC Social Links
The Ontario Human Rights Code
The Human Rights System
The Ontario Human Rights Commission
The Human Rights Legal Support Centre
The Human Rights Tribunal of Ontario
Code Grounds
Family and marital status
Gender identity and gender expression
Race and related grounds
Receipt of public assistance
Record of offences
Goods, services and facilities
Membership in vocational associations and trade unions
Teaching human rights in Ontario
Public education requests
Backgrounders and research
Brochures, fact sheets and guides
Home » "Next Stop, Accessibility" Report on the public transit stop announcements in Ontario » Transit provider responses
"Next Stop, Accessibility" Report on the public transit stop announcements in Ontario
Background: Accessible transit and stop announcements
The transit stop announcements inquiry
Transit provider responses
Appendix A: Summary of transit provider responses
Appendix B: Stop announcement practices and commitments
RE: Transit Stop Announcements
Human rights cases settled as transit providers offer more accessible services
Three Quarters of Ontario Transit Providers Commit to Announce all Stops
Ontario Human Rights Commission files complaints against three public transit providers
Page controls
+ show tags
Book Prev / Next Navigation
Over the fall and winter of 2007-2008, the Commission communicated with 41 transit providers. Of these, 40 provided the requested information.[7] On the whole, most of the organizations expressed broad commitment to accessibility within the transit sector.[8] This commitment was further demonstrated by the willingness of many transit providers to develop or speed up stop announcement plans in response to the Commission's inquiry.
The Commission is encouraged by the overall movement toward implementing stop announcements. However, there is significant variation among transit providers in terms of the methods, processes and timelines they have put forward. While many have responded by taking concrete steps to improve the breadth and timeliness of accessibility through stop announcements, some have limited or delayed their commitments, and some have not indicated any clear commitment to address the specific issues raised in this inquiry.
At the outset of the inquiry, only the TTC was announcing all transit stops, in accordance with HRTO decisions. As of the end of March 2008, of the 38 provincially regulated transit providers: [9]
25 committed to begin announcing all stops by June 30, 2008
2 more appeared likely to do so.
Of the remaining 11 provincially regulated organizations:
4 indicated that they would begin announcing all stops in the fall of 2008
2 described plans for longer term compliance over the next 2-4 years
2 made no commitments to our inquiry, referring instead to commitments to meet any future AODA standard
3 did not provide sufficient information about their intentions or timelines for implementation.
Two of the three federally regulated agencies contacted indicated plans for partial or long-term compliance; the other did not reply.
The Commission is very pleased that a number of organizations took immediate and positive steps in response to its request, and that they committed to fully implement universal stop announcement plans within short time frames. In particular, in addition to the TTC, four transit providers – Brampton, Durham, Owen Sound, and Sault Ste. Marie – indicated that they would announce all stops before or by the end of the first quarter of 2008.
In all, more than two-thirds of provincially regulated transit operators committed to announce all stops by the end of the second quarter of 2008. These transit providers span the province, and range in size from small to very large systems: their efforts represent a considerable advancement in transit accessibility. The result is that over a relatively short period of time, persons with visual impairments and other transit riders will enjoy a significant improvement in services that are so essential to their daily lives.
The means by which organizations choose to announce stops has been the subject of some discussion over the course of this inquiry. Broadly speaking, there are two ways to deliver stop announcements:
Manual stop announcement:
A simple verbal call-out is made by the driver or vehicle operator, often with amplification using a PA system, to ensure greater audibility. Transit provider responses showed that plans for putting these systems into place may include steps such as: policy development; developing and implementing training for drivers on accessibility principles, policies and announcement procedures; naming mid-block or rural stops; providing cards or other tools listing the stops; and in some cases, acquiring and installing public address systems.
Of the many organizations that committed to announce all stops by June 30, 2008, all but one indicate that they will do so, at least initially, through manual call-out.[10] Overall, manual call-out offers a fairly quick and inexpensive solution, and has been the means by which many organizations, of a range of sizes, have been able to begin stop announcements within short time frames.
Automated systems:
These systems vary in complexity and cost. Because they can include both audio and visual announcement, they can provide broader accessibility than manual call-out, in that they benefit persons with either hearing or vision impairments, as well as other transit riders. A number of transit providers cited broader accessibility, convenience for the drivers, or alleged safety concerns (addressed and found not to have amounted to undue hardship in Lepofsky) as their rationale for pursuing this option. However, transit providers also indicated that these systems are expensive, and due to cost, availability and complexity, may require several months to several years to acquire, develop and install.
A number of organizations chose to focus solely on implementing electronic systems in the long term, or simply restated their commitment to meet any transit standard eventually finalized under the AODA, without addressing the immediate need for an interim solution. These responses do not sufficiently address the specific barriers and remedy described in the Lepofsky decision, relating to persons with visual impairment.
It should be noted that the Commission position, as set out in its Policy and Guidelines on Disability and the Duty to Accommodate (the "Policy"), is that organizations should actively identify and remove barriers to inclusion. Furthermore, the Policy states that service providers have a duty to:
provide timely solutions to accommodation requests
provide interim solutions where a broader or more appropriate accommodation may take some time to implement
provide accommodation to the point of undue hardship. This excludes business inconvenience, employee morale and third-party preference, and involves a detailed analysis of any alleged cost or health and safety concerns.[11]
A few transit providers felt that their current accommodation practices for persons with visual impairment, such as access to separate specialized transit systems, or stop call-out on request, were sufficient. The Lepofsky decision has already established that "on request" call-out is unreliable and insufficient. These responses also fail to take into account the human rights principles of integration, inclusive design and the dignity of the person. [12]
The question of dignity arises where persons with visual disabilities have to request and depend upon assistance from others every time they use transit, as a condition of gaining access, when other people do not have to do so. There is also a practical impact based in regular human error, in that, where call-out is not routine, drivers sometimes forget to call out the requested stop. These issues can all be addressed through inclusive design, that is, through consistently and audibly announcing all stops.
Several transit providers indicated that, while working toward long-term implementation of automated audio-visual announcement systems, or completing a phase-in of such systems, they will implement manual call-out systems. The Commission is pleased that these organizations are incorporating the concept of interim accommodation in their planning.
As several transit providers have noted, some steps taken toward manual call-out, such as the naming of stops, will facilitate eventual transition to automated systems. Others have pointed out that manual call-out remains an important back-up for automated systems. Organizations that have provided drivers with tools and training for manual call-out will be able to ensure seamless delivery of stop announcements, even if there are delays in implementing automated systems, or when there are system failures or malfunctions.
[7] Coach Canada, which is federally regulated, did not respond to the Commission's inquiry. Two other federally regulated transit providers, Windsor and Ottawa (OC Transpo), did respond. See appendices for details.
[8] Many transit services provided detailed information about their programs and plans to improve access for transit users with disabilities, such as staff training, fare-free use of public transit for persons with visual impairment, and accessibility improvements to bus stops and fleet vehicles, among others.
[9] See Appendix A for a summary breakdown of responses, and Appendix B for details of responses, arranged alphabetically by municipality or organization.
[10] Guelph indicated that it will have an automated system in place in Spring 2008. See appendices for more detailed information.
[11] See section 4 of the Policy.
[12] See section 3.1 of the Policy.
Expense Disclosure
© Queen's Printer for Ontario
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 9,381
|
\section{Schrieffer Wolff Transformation}
The Hamiltonian describing a multi-dots linear chain system is given by
\begin{eqnarray*}
\hat{H} = \hat{H}_{\mu} + \hat{H}_t + \hat{H}_U + \hat{H}_J,
\label{eqn:H2}
\end{eqnarray*}
where $\hat{H}_\mu$ is the chemical potential, $\hat{H}_t$ is the hopping term, $\hat{H}_U$ is the intradot and interdot Coulomb potential and $\hat{H}_J$ is the spin exchange interaction term. Each component of the Hamiltonian is given by
\begin{eqnarray*}
\hat{H}_\mu&=&\sum_{i,\sigma} -(\mu_i + \varepsilon_i\left( \tau \right)) n_{i, \sigma}\\
\hat{H}_t&=& \sum_{<i,j>,\sigma} \left( -t c^\dag_{i\sigma} c_{j \sigma} + \mathrm{H.c.} \right)\\
\hat{H}_U&=& \sum_{i} U_i n_{i,\uparrow}n_{i,\downarrow} +\frac{1}{2}\sum_{<i,j>} U_{ij}( n_{i,\uparrow}n_{j,\downarrow}+ n_{j,\downarrow}n_{i,\uparrow})\\
\hat{H}_J&=& -\frac{1}{2}\sum_{<i,j>} J_e c^\dag_{i\downarrow}c^\dag_{j\uparrow} c_{j \downarrow} c_{i \uparrow}.
\end{eqnarray*}
The Hamiltonian can be split into the Coulomb part, consisting of the intra-dot and inter-dot Coulomb interactions, with the tunneling, the exchange interaction and the detunings treated as perturbations. For the singlet states, the basis states are
\begin{eqnarray*}
\Ket{S}_{3,3} &=& \Ket{\uparrow_3 \downarrow_3} \\
\Ket{S}_{2,3} &=& \frac{1}{\sqrt{2}}\left(\Ket{\uparrow_2 \downarrow_3}-\Ket{\downarrow_2 \uparrow_3}\right)\\
\Ket{S}_{1,3} &=& \frac{1}{\sqrt{2}}\left(\Ket{\uparrow_1 \downarrow_3}-\Ket{\downarrow_1 \uparrow_3}\right)\\
\Ket{S}_{2,2} &=&\Ket{\uparrow_2 \downarrow_2}\\
\Ket{S}_{1,2} &=& \frac{1}{\sqrt{2}}\left(\Ket{\uparrow_1 \downarrow_2}-\Ket{\downarrow_1 \uparrow_2}\right)\\
\Ket{S}_{1,1} &=&\Ket{\uparrow_1 \downarrow_1},
\end{eqnarray*}
while the basis states for the triplet states are
\begin{eqnarray*}
\Ket{T_0}_{1,2} &=& \frac{1}{\sqrt{2}}\left(\Ket{\uparrow_1 \downarrow_2}+\Ket{\downarrow_1 \uparrow_2}\right)\\
\Ket{T_0}_{1,3} &=& \frac{1}{\sqrt{2}}\left(\Ket{\uparrow_1 \downarrow_3}+\Ket{\downarrow_1 \uparrow_3}\right)\\
\Ket{T_0}_{2,3} &=& \frac{1}{\sqrt{2}}\left(\Ket{\uparrow_2 \downarrow_3}+\Ket{\downarrow_2 \uparrow_3}\right).
\end{eqnarray*}
Accordingly, the Hamiltonian can then be written in matrix form as
\begin{widetext}
\begin{eqnarray*}
\hat{H}_{S}&=&
\begin{pmatrix}
U-2(\varepsilon_3+\mu) & -\sqrt{2}t &0 &0 &0 &0\\
-\sqrt{2}t & J_e+U_{12}-\varepsilon_2-\varepsilon_3-2\mu &-t &-\sqrt{2}t &0 &0 \\
0 & -t &-(\varepsilon_1+\varepsilon_3+2\mu) &0 &-t &0\\
0 & -\sqrt{2}t &0 &U-2(\varepsilon_2+\mu) &-\sqrt{2}t &0\\
0 &0 &-t &-\sqrt{2}t &J_e+U_{12}-\varepsilon_1-\varepsilon_2-2\mu &-\sqrt{2}t \\
0 &0 &0 &0 &-\sqrt{2}t &U-2(\varepsilon_1+\mu)\\
\end{pmatrix}\\
\hat{H}_{T}&=&
\begin{pmatrix}
-J_e+U_{12}-\varepsilon_1-\varepsilon_2-2\mu &-t &0\\
-t &-(\varepsilon_1+\varepsilon_3+2\mu) &-t\\
0& -t &-J_e+U_{12}-\varepsilon_2-\varepsilon_3-2\mu\\
\end{pmatrix},
\end{eqnarray*}
\end{widetext}
where $\hat{H}_{S}$ is the Hamiltonian for the singlet states and $\hat{H}_{T}$ is the Hamiltonian for the triplet states. Together, they form a block diagonal Hamiltonian $\hat{H}$, since the singlet and triplet states do not mix in the absence of spin-orbit coupling and a magnetic field.
For Schrieffer-Wolff transformation \cite{SWTransform, SWTransform2}, we have
\begin{eqnarray*}
\hat{H}_{\mathrm{eff}}&=&e^{i S} \hat{H} e^{-i S}\\
&=& \hat{H} + \left[ iS, \hat{H} \right]+\ldots\\
&=& \hat{H}_U + \hat{H}_T + \left[ iS, \hat{H}_U \right] + \left[ iS, \hat{H}_t + \hat{H}_J \right]+... ,
\end{eqnarray*}
where $\hat{H}_T=\hat{H}_t + \hat{H}_J$ and we require $\hat{H}_T + \left[ iS, \hat{H}_U \right]=0$. This can be done by choosing the operator $i S = \sum_{n,m} \Bra{\psi_n}\hat{H}_T\Ket{\psi_m}/\left(\Bra{\psi_n}U\Ket{\psi_n}-\Bra{\psi_m}U\Ket{\psi_m}\right) \Ket{\psi_n} \Bra{\psi_m}$, where $\Ket{\psi_n}$ are the eigenstates of the diagonal Hamiltonian of the Coulomb interaction, $\hat{H}_U$ \cite{SWTransform2}. As a result, the effective Hamiltonian obtained is block diagonal. With the given choice of the operator $S$, we have
\begin{eqnarray*}
\Bra{\psi_k}\left[ iS, \hat{H}_t + \hat{H}_J \right] \Ket{\psi_l} &=& \sum_m \frac{\Bra{\psi_k}\hat{H}_T\Ket{\psi_m}\Bra{\psi_m}\hat{H}_T\Ket{\psi_l}}{U_k-U_m} \\
&&- \sum_m \frac{\Bra{\psi_k}\hat{H}_T\Ket{\psi_m}\Bra{\psi_m}\hat{H}_T\Ket{\psi_l}}{U_m-U_l} .
\end{eqnarray*}
The effective Hamiltonian is then given by $\hat{H}_{\mathrm{eff}}=\hat{H}_U + \left[ i S, \hat{H}_T \right]$. From $\hat{H}_{\mathrm{eff}}$, the effective coupling between the initial state and the final state, i.e. second order transition, for both the singlet and triplet states can be obtained.
\begin{eqnarray}
J_S= -t^2 \left(\frac{-4}{U-K+\varepsilon}+\frac{2}{K+\varepsilon}\right)\\
J_T= -t^2 \left(\frac{2}{K+\varepsilon}\right)
\end{eqnarray}
where $J_S$ is the effective coupling between $\Ket{S}_{1,2}$ and $\Ket{S}_{2,3}$ while $J_T$ is the effective coupling between $\Ket{T_0}_{1,2}$ and $\Ket{T_0}_{2,3}$.
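As a sanity check, the couplings above can be evaluated numerically. The following Python sketch uses illustrative parameter values from the later pulse-gated section, under the assumption that $K$ denotes the interdot Coulomb energy $U_{12}$; it also shows that $J_S = -J_T$ when $U = 2K$, which is the condition imposed in the pulse-gated scheme.

```python
# Second-order effective couplings from the Schrieffer-Wolff result.
# Parameter values are illustrative (meV), taken from the pulse-gated
# section; K is assumed to be the interdot Coulomb energy U_12.

def J_singlet(t, U, K, eps):
    """Effective coupling between |S>_{1,2} and |S>_{2,3}."""
    return -t**2 * (-4.0 / (U - K + eps) + 2.0 / (K + eps))

def J_triplet(t, K, eps):
    """Effective coupling between |T0>_{1,2} and |T0>_{2,3}."""
    return -t**2 * (2.0 / (K + eps))

t, U, K, eps = 0.12, 4.6, 2.3, 5.0
print(J_singlet(t, U, K, eps), J_triplet(t, K, eps))
# For U = 2K the two couplings are equal in magnitude and opposite
# in sign, consistent with the pulse-gate condition U = 2 U_12.
```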
\section{State Evolution without Detuning Control}
In the absence of control over the detunings, state transfer can be implemented by initialising the singlet-triplet qubit at one end and letting the system evolve in time. The evolution is then stopped, and quantum gate operations or readout measurements can be performed immediately. The initial state is prepared in dots 1 and 2 as
\begin{eqnarray}
\Ket{\psi_0} = \cos \theta \Ket{S}_{1,2} +\sin \theta e^{i \phi} \Ket{T_0}_{1,2}
\label{initialState}
\end{eqnarray}
where $\Ket{S}_{i,j} = \frac{1}{\sqrt{2}} (\Ket{\uparrow \downarrow}_{i,j}-\Ket{\downarrow \uparrow}_{i,j})$ and $\Ket{T_0}_{i,j}=\frac{1}{\sqrt{2}} (\Ket{\uparrow \downarrow}_{i,j}+\Ket{\downarrow \uparrow}_{i,j})$, with the indices \textit{i}(\textit{j}) referring to quantum dots \textit{i}(\textit{j}), and $\theta$ determines the singlet and triplet components of the initial state. Here, we assume $\phi=0$ without loss of generality. State transfer for initial states with a relative phase between the singlet and triplet states, namely $\phi=\pi/4$ and $\phi=\pi/6$, is demonstrated in Sec. \ref{Sec:Phase} for the pulse-gated scheme. The state is then allowed to evolve in time, and the evolution is stopped upon achieving coherent state transfer with a fidelity higher than 0.7 for all initial states. The fidelity and the average fidelity are defined as
\begin{eqnarray*}
\mathcal{F} = \left| \Braket{\psi_t|\psi_f} \right|^2, \quad
\mathcal{F}_\mathrm{ave} = \frac{1}{2}\int^\pi_0 \mathrm{d}\theta \mathcal{F(\theta, \tau)} \sin \theta ,
\end{eqnarray*}
where $\tau$ is the instantaneous time, $\theta$ determines the initial state and $\Ket{\psi_f}$ is the final state of the singlet-triplet qubit.
Hence, the average fidelity which is averaged over all initial states can be plotted against evolution time.
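As an illustration of how the average over initial states is carried out, the Python sketch below evaluates $\mathcal{F}_\mathrm{ave}$ with the trapezoidal rule for a given fidelity profile $\mathcal{F}(\theta)$. The constant profile used here is only a sanity check; in practice $\mathcal{F}(\theta)$ comes from the simulation.

```python
import math

def average_fidelity(F, n=2000):
    """Approximate F_ave = (1/2) * int_0^pi F(theta) sin(theta) dtheta
    with the trapezoidal rule; F is a function of the mixing angle."""
    h = math.pi / n
    vals = [F(k * h) * math.sin(k * h) for k in range(n + 1)]
    return 0.5 * h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

# Sanity check: a theta-independent fidelity F0 averages to F0,
# since (1/2) * int_0^pi sin(theta) dtheta = 1.
print(average_fidelity(lambda theta: 0.96))
```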
\begin{figure} [h!]
\includegraphics[width=3.4in,keepaspectratio]{CJ_Kwong_SuppFig1}
\caption{ (Colour Online) (a) Average fidelity for $t=0.12$ meV. The blue arrow indicates the highest peak within an interval of 1.2 ns, at 0.47 ns, while the red arrow gives the earliest coherent state transfer achievable via free evolution, at 0.45 ns. (b) The fidelity for both times, indicating that the state transfer exceeds the classical limit for the efficiency of quantum transfer. The tunnel coupling is $t=0.12$ meV. (c) Average fidelity for $t=0.3$ meV. The blue arrow indicates the highest average fidelity at 81 ps and the red arrow the earliest coherent state transfer achievable, at 77 ps. (d) The fidelity calculated for the system with tunneling $t=0.3$ meV, for comparison.}\label{fig:TimeEvolveCombined}
\end{figure}
Assuming that the quantum dots are on resonance, Fig. \ref{fig:TimeEvolveCombined} shows the average fidelity plotted from 0 to 1.2 ns for (a) $t=0.12$ meV and (c) $t=0.30$ meV. The highest peak in Fig. \ref{fig:TimeEvolveCombined}~(a) is at 0.47 ns (blue arrow) with an average fidelity of 0.96 while the shortest evolution time is 0.45 ns (red arrow). Corresponding to these two times, the fidelity for all $\theta$ is plotted in Fig. \ref{fig:TimeEvolveCombined}~(b), demonstrating that the fidelity is above the threshold of 0.7 for all initial states. Hence, with free evolution, we can already obtain a state transfer that is better than the classical limit for the efficiency.
In comparison, the envelope in Fig. \ref{fig:TimeEvolveCombined}~(a) is absent from Fig. \ref{fig:TimeEvolveCombined}~(c) due to its faster state evolution. The higher tunnel coupling gives a higher probability for the electrons to tunnel between adjacent quantum dots, leading to a faster state evolution. This in turn leads to a shorter time required for coherent state transfer, as seen in the highest peak at 81 ps (blue arrow) in Fig. \ref{fig:TimeEvolveCombined}~(c), with an average fidelity of 0.91 and a shortest evolution time of 77 ps (red arrow). In addition, the faster state evolution also leads to a lower fidelity. Fig. \ref{fig:TimeEvolveCombined}~(d) shows the fidelity corresponding to the highest average fidelity as well as the earliest instance of coherent state transfer.
From the numerical simulation of the $N = 3$ dot linear chain, we have found the evolution time required to achieve coherent state transfer. However, as seen from the results, the fidelity obtained is less than desirable. Hence, we require the adiabatic and pulse-gated state transfer schemes in order to achieve high-fidelity state transfer.
\section{Pulse-gated scheme with different parameters }
A different set of parameters for the Coulomb interactions, i.e. $U=4.6$~meV and $K=2.3$~meV, is used, while the resonant detuning of dots 1 and 3 is $\varepsilon = 5$~meV. Due to the change in parameters, the transfer time is slightly longer, at 0.74 ns. In addition, the change in parameters shifts the energy levels; together with the difference in transfer time, this leads to a different dynamical phase accumulated during the state transfer. As a result, the required wait time is found to be $\tau_{\mathrm{wait}}\approx 12 \hbar/\varepsilon$.
\begin{figure} [h!]
\includegraphics[width=3.2in,keepaspectratio]{CJ_Kwong_SuppFig2}
\caption{ (Colour Online) Plot of infidelity, $1-\mathcal{F}$ against mixing angle $\theta$. The results for pulse-gated scheme (red circles) are compared with the simulation results of the adiabatic scheme (blue squares). We observed that similar to the results in the main text, the pulse-gated scheme generally has a lower state transfer fidelity. However, the state transfer time is shorter with approximately 0.74 ns transfer time. }\label{fig:diffPara}
\end{figure}
\section{Initial States with Relative Phase} \label{Sec:Phase}
In this section, we investigate the state transfer for initial states with $\phi=\pi/6$ and $\phi=\pi/4$. The wait time, $\tau_\mathrm{wait}$ is found numerically to be approximately $3 \hbar/\varepsilon$ for all values of $\phi$. Fig. \ref{fig:pulsegatePhase} shows the plot of infidelities for both cases as well as the case of $\phi=0$. There is no visible difference between the plot of infidelities obtained for arbitrary values of $\phi$. This is due to the identical fidelities obtained with the same system parameters and wait time.
\begin{figure} [h!]
\includegraphics[width=3.2in,keepaspectratio]{CJ_Kwong_SuppFig3}
\caption{ (Colour Online) Plot of infidelity, $1-\mathcal{F}$ for the initial states with $\phi=0$ (brown triangle), $\phi=\pi/6$ (blue squares) and $\phi=\pi/4$ (red circles). The infidelity obtained suggests that state transfer for any arbitrary initial state of $\Ket{S}_{1,2}$ and $\Ket{T_0}_{1,2}$ with relative phase, $\phi$ can be achieved with high fidelity via the pulse-gated state transfer. With the same wait time, $\tau_\mathrm{wait}$ and system parameters, the fidelities obtained are identical for any arbitrary values of $\phi$. }\label{fig:pulsegatePhase}
\end{figure}
Similarly, the simulation for adiabatic scheme for the state transfer for $\phi=\pi/6$ and $\phi=\pi/4$ are carried out. The plot of infidelities for all three cases including $\phi=0$ are shown in Fig. \ref{fig:adiabaticPhase}.
\begin{figure} [h!]
\includegraphics[width=3.2in,keepaspectratio]{CJ_Kwong_SuppFig4}
\caption{ (Colour Online) Plot of infidelity, $1-\mathcal{F}$ for the initial states with $\phi=0$ (brown triangle), $\phi=\pi/6$ (blue squares) and $\phi=\pi/4$ (red circles). The infidelity obtained suggests that state transfer for any arbitrary initial state of $\Ket{S}_{1,2}$ and $\Ket{T_0}_{1,2}$ with relative phase, $\phi$ can be achieved with high fidelity via the adiabatic state transfer. With the same wait time, $\tau_\mathrm{wait}$ and system parameters, the fidelities obtained are identical for any arbitrary values of $\phi$. }\label{fig:adiabaticPhase}
\end{figure}
Here, we show only the results for $\phi=\pi/6$ and $\phi=\pi/4$ as an example to illustrate that both the adiabatic and pulse-gated state transfer schemes can be implemented irrespective of the initial relative phase of the singlet-triplet qubit. In addition, with the same $\tau_\mathrm{wait}$, the fidelities obtained are identical to each other for arbitrary values of $\phi$.
\section{Leading Order Error in Pulse Gate Scheme}
In the pulse-gated scheme, we impose the condition $U=2U_{12}$. To find the leading error term, consider a perturbation of the intradot Coulomb interaction, $U=2U_{12}+\delta U$. The singlet-state effective coupling is then given by
\begin{eqnarray*}
J_s &=& -t^2 \left( \frac{2}{U_{12}+\varepsilon} - \frac{4}{U_{12}+\varepsilon+\delta U} \right) \\
&=& -t^2 \left( \frac{2}{U_{12}+\varepsilon} - \frac{4}{U_{12}+\varepsilon} \left( 1 + \frac{\delta U}{U_{12}+\varepsilon} \right)^{-1} \right).
\end{eqnarray*}
Using a Taylor expansion, we have
\begin{eqnarray*}
\left( 1 + \frac{\delta U}{U_{12}+\varepsilon} \right)^{-1} \approx 1 - \frac{\delta U}{U_{12}+\varepsilon}.
\end{eqnarray*}
Hence,
\begin{eqnarray*}
J_s &=& -t^2 \left( \frac{2}{U_{12}+\varepsilon} - \frac{4}{U_{12}+\varepsilon} + \frac{4 \delta U}{\left(U_{12}+\varepsilon\right)^2} - \ldots \right) \\
&\approx& -J_t - \frac{4 t^2 \delta U}{\left( U_{12}+\varepsilon \right)^2}.
\end{eqnarray*}
With the leading-order error in the effective coupling strength given by $4 t^2 \delta U /(U_{12}+\varepsilon)^2$, the leading-order error in the zenith angle on the Bloch sphere is then $4 t^2 \delta U \tau /(U_{12}+\varepsilon)^2$. Hence, for sufficiently large $U_{12}+\varepsilon$, the error in the tuning of the quantum dot potentials can be minimised.
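A minimal numerical sanity check of this Taylor expansion (Python, with illustrative parameter values) confirms that the residual of the linearised coupling is second order in $\delta U$: halving $\delta U$ reduces the residual by roughly a factor of four.

```python
# Check that the difference between the exact J_s and its
# linear-in-dU approximation scales quadratically with dU.
# Parameter values (meV) are illustrative.

def J_s_exact(t, U12, eps, dU):
    return -t**2 * (2.0 / (U12 + eps) - 4.0 / (U12 + eps + dU))

def J_s_linear(t, U12, eps, dU):
    J_t = -2.0 * t**2 / (U12 + eps)          # triplet coupling, dU-independent
    return -J_t - 4.0 * t**2 * dU / (U12 + eps)**2

t, U12, eps = 0.12, 2.3, 5.0
r1 = abs(J_s_exact(t, U12, eps, 0.01) - J_s_linear(t, U12, eps, 0.01))
r2 = abs(J_s_exact(t, U12, eps, 0.02) - J_s_linear(t, U12, eps, 0.02))
print(r2 / r1)  # close to 4: the residual is O(dU^2)
```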
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 4,534
|
Q: Only allowing ints and floats to be entered into a field with JavaScript and MooTools I need some direction on how to allow only an int or a float to be entered into a form input field. The validation must happen on the keyup event. The problem I am having is that when entering, for example, 1.2, the check within the keyup event handler sees "1.", which is not a number.
Here is the code I have:
document.id('inputheight').addEvent('keyup', function(e) {
this.value = this.value.toFloat();
if (this.value == 'NaN') {
this.value = 0;
}
});
Any help is appreciated!
A: You could simply clean up the value of the field on keyup. Something like this should do the trick:
this.value = this.value.replace(/([^\d.]+)?((\d*\.?\d*)(.*)?$)/, "$3");
The regular expression instantly replaces the value with the first numeric string it encounters.
([^\d.]+)? // optionally matches anything which is not
// a number or decimal point at the beginning
(\d*\.?\d*) // tentatively match any integer or float number
(.*)?$ // optionally match any character following
// the decimal number until the end of the string
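For quick testing outside the browser, the same pattern can be exercised with Python's re module (a translation of the JavaScript replace; count=1 mirrors the non-global String.replace):

```python
import re

# Python translation of the answer's regular expression, handy for
# testing the cleanup logic outside the browser. Group 3 holds the
# first numeric run; count=1 mirrors a non-global JS replace.
_NUM = re.compile(r'([^\d.]+)?((\d*\.?\d*)(.*)?$)')

def clean(value):
    """Keep only the first numeric (int/float) run in the string."""
    return _NUM.sub(r'\3', value, count=1)

print(clean("abc1.2x3"))  # extracts the first numeric run
print(clean("1."))        # intermediate input like "1." survives keyup
```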
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 623
|
Frieden von Hamburg (Peace of Hamburg) may refer to the following peace treaties:
Frieden von Hamburg (1536), peace treaty between the Hanseatic League and Denmark
Frieden von Hamburg (1762), peace treaty of 5 May 1762 between Prussia and Sweden during the Seven Years' War
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 1,575
|
With this material, candidates will have the confidence to take the exam. The EMC Certification E20-393 exam questions from Passtcert are the best training materials for candidates. With Passtcert's E20-393 Unity Solutions Specialist Exam for Implementation Engineers, you will pass the exam easily. Passtcert not only provides high-quality products to each candidate, but also offers a comprehensive after-sales service. If you use our EMC Certification E20-393 exam questions, you will enjoy one year of free updates, so that you can receive the latest exam information in time.
The EMC Certification E20-393 exam questions provided by Passtcert are very practical, and they are absolutely right for you. Passtcert provides you with a realistic exam environment to help you through the real EMC E20-393 exam preparation process. Whether you are a beginner or want to improve your professional skills, Passtcert's EMC Certification E20-393 exam questions will help you approach your goal step by step. If you have any questions about the exam questions and answers, we will help you solve them.
This material consists of EMC Certification E20-393 exam questions and answers. When we started offering EMC Certification E20-393 exam questions, we did not expect to earn such a strong reputation. What we offer now amounts to a guarantee: Passtcert's EMC Certification E20-393 exam questions not only help customers pass the E20-393 exam on their first attempt, but also include a one-year free online update service that delivers the latest exam materials to customers as soon as they are available.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 3,154
|
package com.msgilligan.bitcoinj.spring.config;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.converter.DefaultContentTypeResolver;
import org.springframework.messaging.converter.MappingJackson2MessageConverter;
import org.springframework.messaging.converter.MessageConverter;
import org.springframework.messaging.handler.invocation.HandlerMethodArgumentResolver;
import org.springframework.messaging.handler.invocation.HandlerMethodReturnValueHandler;
import org.springframework.messaging.simp.config.ChannelRegistration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.util.MimeTypeUtils;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.WebSocketTransportRegistration;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;
import java.util.List;
/**
*/
@Configuration
@EnableWebSocketMessageBroker
@ComponentScan(basePackages="com.msgilligan.bitcoinj.spring")
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {
@Autowired ObjectMapper objectMapper; // Seems to get injected with objectMapper with our added Module
@Override
public void registerStompEndpoints(StompEndpointRegistry registry) {
registry.addEndpoint("/stomp").withSockJS();
}
@Override
public void configureWebSocketTransport(WebSocketTransportRegistration registration) {
registration.setSendTimeLimit(15 * 1000).setSendBufferSizeLimit(512 * 1024);
}
@Override
public void configureClientInboundChannel(ChannelRegistration registration) {
}
@Override
public void configureClientOutboundChannel(ChannelRegistration registration) {
registration.taskExecutor().corePoolSize(4).maxPoolSize(10);
}
@Override
public void addArgumentResolvers(List<HandlerMethodArgumentResolver> argumentResolvers) {
// TODO: ??
}
@Override
public void addReturnValueHandlers(List<HandlerMethodReturnValueHandler> returnValueHandlers) {
// TODO: ??
}
@Override
public boolean configureMessageConverters(List<MessageConverter> messageConverters) {
// Workaround for issue 2445: https://github.com/spring-projects/spring-boot/issues/2445
DefaultContentTypeResolver resolver = new DefaultContentTypeResolver();
resolver.setDefaultMimeType(MimeTypeUtils.APPLICATION_JSON);
MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
converter.setObjectMapper(objectMapper);
converter.setContentTypeResolver(resolver);
messageConverters.add(converter);
return false;
}
@Override
public void configureMessageBroker(MessageBrokerRegistry configurer) {
configurer.enableSimpleBroker("/queue/", "/topic/");
configurer.setApplicationDestinationPrefixes("/app");
}
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 2,592
|
The Kriegsdenkmünze (War Commemorative Medal) was a military decoration of the Kingdom of Hanover, founded by King Ernst August.
It was instituted on 11 May 1841, on the same date as a decoration of the same name, the Kriegsdenkmünze 1813. The two awards differed in the grounds for conferral and in the design of the medal. This decoration was intended for the volunteer soldiers who joined the Königlich-Großbritannisch-Deutsche Legion (King's German Legion) between 1803 and the First Peace of Paris in 1814, whereas the other decoration was reserved for the volunteers who joined the regular Hanoverian army from 1813 onwards.
The decoration was open to soldiers of all ranks as well as to military physicians, with "Hanoverians and foreigners equally eligible". After the holder's death, the medal remained with his heirs.
Decoration
The decoration was made from the metal of a captured gun. The obverse of the bronze medal shows, within a cross beneath the royal crown, the cipher of the founder. The reverse bears the motto "TAPFER UND TREU" ("Brave and Loyal"), enclosed by a laurel wreath, together with a reference to the grounds of the award: "KÖNIGLICH DEUTSCHE LEGION".
Ribbon
The ribbon was white with two yellow stripes.
Literature
Hof- und Staats-Handbuch für das Königreich Hannover 1856, Verlag der Berenberg'schen Buchdruckerei, p. 30
\section{Introduction}\label{Sec:intro}
General relativity (GR) accurately describes all known gravitational
phenomena. Still, it has a theoretical flaw: it is not
renormalizable~\cite{Goroff_Sagnotti1986} and thus cannot be a
complete theory of quantum gravity.
One way to address this problem is to
introduce terms with higher powers of the curvature tensor which make the theory
renormalizable~\cite{Stelle1977}. However, if Lorentz invariant, these
higher curvature terms lead to loss of unitarity. This motivated
P. Ho\v{r}ava to propose a framework to render gravity power-counting
renormalizable by
abandoning Lorentz invariance~\cite{Horava:2009uw}.
By breaking Lorentz invariance, we can introduce higher spatial
derivative terms, while avoiding higher time derivative terms and
thus making the theory compatible with unitarity.
A key role in the power-counting argument is played by an approximate
invariance of the theory at high energies and momenta with respect to
the so-called Lifshitz scaling transformations. These stretch space
and time by different amounts, so they are also often referred to as
anisotropic scaling. The dispersion relations of various degrees of
freedom at high energies, compatible with anisotropic scaling, have
the form $\omega\propto p^z/M_*^{z-1}$, where $\omega$ and $p$ are
the particle's energy and momentum, $z$ is the Lifshitz exponent ($z$
equals the number of spatial dimensions in Ho\v rava's proposal)
and $M_*$ is the energy threshold, above which the
anisotropic scaling sets in.
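For definiteness, recall the form of the anisotropic scaling with Lifshitz exponent $z$,
\begin{align}
t \to b^{z}\, t\,, \qquad x^i \to b\, x^i\,,
\end{align}
under which a free scalar field with the UV dispersion relation quoted above carries scaling dimension $(3-z)/2$ in three spatial dimensions; in particular, the scaling dimension vanishes for $z=3$.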
This framework has received the name of Ho\v{r}ava--Lifshitz (HL)
gravity and the so-called projectable subclass of the resulting
theories has been rigorously demonstrated to be perturbatively
renormalizable~\cite{Barvinsky:2015kil,Barvinsky:2017zlx}. Moreover,
in 2 spatial and 1 time dimensions the theory
exhibits asymptotic freedom~\cite{Barvinsky:2017kob} which strongly
suggests that it is ultraviolet (UV) complete.
Deviations from Lorentz invariance are tightly constrained in the
Standard Model sector~\cite{Mattingly:2005re,Kostelecky:2008ts,
Liberati:2013xla}. In the gravity sector constraints come from observations at low
energies such as Solar
System tests~\cite{Liberati:2013xla, Will:2005va}, pulsar
timing~\cite{Yagi:2013qpa,Yagi:2013ava, Shao:2013wga, Jimenez:2015bwa},
cosmology~\cite{Saridakis:2009bv,Dutta:2009jn,Kobayashi:2010eh,APSG,
Blas:2012vn,Audren:2013dwa,Audren:2014hza}
and direct detection of the gravitational waves~\cite{Blas:2016qmn,
Yunes:2016jcc,Monitor:2017mdv,Gumrukcuoglu:2017ijh}.
By contrast, Lorentz violation (LV) in the gravity sector is poorly
constrained at high energies
where it is motivated by renormalization
of gravity.
In order to examine the consequences of LV at high energies we
study in this paper its effect on
cosmic inflation in the early universe.
One may expect the breaking of Lorentz invariance during inflation to
leave an imprint on the primordial perturbations generated at that epoch.
This possibility has been explored in a number of
works~\cite{ArkaniHamed:2003uz,TakahashiSoda,
Calcagni:2009ar,Kiritsis:2009sh,Mukohyama:2009gg,Kobayashi:2009hh,
Donnelly:2010cr,Creminelli:2012xb,Solomon:2013iza,Ivanov:2014yla}.
In HL gravity, where the 4D diffeomorphism (Diff) invariance is reduced
to foliation-preserving Diff, an additional scalar degree of
freedom appears in the gravity sector, the so-called khronon.
It is tempting to speculate that this additional degree of freedom can
play the role of the inflaton. However, at the moment this seems to be
forbidden due to the restrictive symmetry structure of the theory.
Therefore, to drive inflation, we
need to introduce a scalar field,
as usual. Then, in general, generation of
the primordial scalar perturbation is described by a coupled system
of two fields, the inflaton and khronon perturbations.
To provide a prediction of the observable quantities, we need to
solve consistently the two field system of the inflaton and khronon,
which are coupled with each other during inflation. When 4D Diff is
preserved and the universe is dominated by a single component, it
is well-known that the adiabatic curvature perturbation
$\zeta$ stays constant in
time after the Hubble crossing (see, e.g.,
Refs.~\cite{Wands:2000dp, Weinberg:2003sw}). On the other hand, in HL
gravity the number of scalar degrees of freedom is
always greater than one due to
the presence of khronon and it is not clear a priori
if there exists a conserved variable or not.
The inflaton and khronon are gravitationally coupled even in the
absence of a direct interaction between them. In this paper we compute
the primordial power spectra by consistently solving the two field
models with the inflaton and
khronon. The previous studies mostly focused on the regime where the
Hubble scale of inflation is low, $H<M_*$, so that the higher derivative terms
in the action are unimportant and the theory is described by its
infrared (IR) limit. By contrast, in this paper we are interested in
the high-energy regime of Lifshitz scaling relevant for the
case\footnote{Recent observation of gravitational waves from neutron
star merger in coincidence with the electromagnetic signal
\cite{Monitor:2017mdv} points towards an upper bound on the scale $M_*$ in
non-projectable HL gravity, $M_*\lesssim 10^{11}$GeV
\cite{Gumrukcuoglu:2017ijh}. Hence, in this theory
the Lifshitz regime is relevant whenever the
inflationary Hubble rate exceeds $10^{11}$GeV.
}
$H>M_*$.
We consider both projectable and non-projectable versions of
HL gravity.
As discussed in Refs.~\cite{Blas:2009yd,Blas:2010hb},
the khronon sector of the projectable HL gravity
suffers from either the gradient instability or the strong coupling in the
IR limit. This means that it cannot describe the physics all the way
down to low energies, unless the inflationary epoch is separated from the
later hot universe by a phase transition that eliminates khronon from
the spectrum. Still,
the projectable version is perfectly well-behaved in the high-energy
regime and its study is instructive to make comparison with
the non-projectable version.
When the fluctuations are deep inside the Hubble radius, the gravitational
interaction is suppressed and we simply have two decoupled Lifshitz
scalars. On the other hand, at super-Hubble scales, the gravitational
interaction couples the inflaton and khronon. One may then naively
expect that the primordial spectrum depends on the time evolution of
these two fields, so that the evolution has to be followed all the way
after the Hubble crossing time. Indeed, this is the case for the
projectable version. On the other hand, in the non-projectable version,
we will find that khronon gets decoupled from the adiabatic curvature
perturbation $\zeta$. As a result, $\zeta$ is conserved at large scales
and the power spectrum of $\zeta$ is solely determined by the
inflaton. Thanks to the presence of the conserved quantity, we can
easily calculate the spectrum of the fluctuation at the end of
inflation. Then the only consequence of LV for the spectrum of $\zeta$
stems from the modification of the dispersion relation.
The spectrum of primordial gravitational waves in HL gravity was
computed in Ref.~\cite{TakahashiSoda}. Once the scalar perturbation is
obtained, we can also compute the tensor to scalar ratio $r$. In a 4D
Diff invariant theory, there exists a universal relation between $r$ and
the tensor spectral tilt $n_t$, the so-called consistency relation. We
will show that this consistency relation can be broken if the primordial
perturbations are generated in the anisotropic scaling regime. The violation of
the consistency relation provides a signal of LV
in the gravity sector in the high energy regime.
This paper is organized as follows. In Sec.~\ref{Sec:Lif} we
describe our setup and review the computation of the power
spectrum of the Lifshitz scalar and the gravitational waves generated in
the anisotropic scaling regime. In Sec.~\ref{Sec:Khronon} we discuss the
behaviour of the khronon perturbation. We show that khronon stays
gapless in the projectable version, while it is gapped in the
non-projectable version, which leads to the decoupling from the
adiabatic mode. In Sec.~\ref{Sec:LV} we discuss violation of
the consistency relation by inflationary perturbations with Lifshitz scaling.
We conclude in Sec.~\ref{Sec:concl}.
Appendices summarize some technical details.
\section{Primordial perturbations with anisotropic scaling}
\label{Sec:Lif}
In this section we describe our setup and briefly summarize the
computation of the primordial
spectra of the Lifshitz scalar and gravitational
waves.
\subsection{Projectable and non-projectable Ho\v{r}ava gravity}
\label{ssec:HL}
\subsubsection{Lagrangian densities}
First, we consider the non-projectable version of HL gravity
\cite{Horava:2009uw} with the extension introduced in
\cite{Blas:2009qj}. Due to the complexity of the most general Lagrangian in this framework, we restrict to
the terms that contribute to the action at quadratic order in the
perturbations around spatially flat backgrounds and that preserve the
parity invariance. This restriction is sufficient to capture the
qualitative features of the theory.
The complete list of
these terms is given in \cite{Blas:2009qj} and leads to the following
Lagrangian density,
\begin{align}
{\cal L}_{HG} =N\sqrt{h}\bigg\{&\frac{M_*^2}{2}
\bigg[\frac{1}{\a_1}K_{ij}K^{ij}-\frac{1}{\a_2}K^2+\frac{1}{\a_3}R+a_i
a^i\bigg] \cr
& -\frac{1}{2}\bigg[\frac{R_{ij}R^{ij}}{\b_1}+\frac{R^2}{\b_2}
-\frac{R\nabla_ia^i}{\b_3}+\frac{a_i\Delta a^i}{\b_4}\bigg] \cr
& -\frac{1}{2M_*^2}\bigg[\frac{(\nabla_iR_{jk})^2}{\gamma_1}
+\frac{(\nabla_i R)^2}{\gamma_2}+\frac{\Delta R\nabla_ia^i}{\gamma_3}
-\frac{a_i\Delta^2 a^i}{\gamma_4}\bigg]\bigg\},
\label{Lagrfull}
\end{align}
where we used the ADM line element, given by
\begin{equation}
{\rm d} s^2=(N^2-N_iN^i){\rm d} t^2-2N_i{\rm d} t {\rm d} x^i-h_{ij}{\rm d} x^i {\rm d} x^j\,.
\end{equation}
Here $R_{ij}$, $\nabla_i$ and $\Delta$ denote the 3-dimensional Ricci
tensor,
the covariant derivative with respect to $h_{ij}$ and the covariant
Laplacian,
\begin{equation}
K_{ij}=\frac{\dot h_{ij}-\nabla_i N_j-\nabla_j N_i}{2N}
\end{equation}
is the extrinsic curvature and we have defined $a_i$ as
\begin{align}
a_i \equiv \frac{ \partial_iN}{N}\,.
\end{align}
Note that we included the integration measure in the definition of the
Lagrangian density. The terms in the first line of
Eq.~(\ref{Lagrfull}) describe the low energy part of the action, and
the
parameters entering it are constrained by the present-day
observations\footnote{We assume that during inflation these parameters
have the same values as nowadays. This assumption can
be relaxed in a more general setup.}. The relation between
these parameters and
the parameters $\a,\lambda,\xi$
introduced in \cite{Blas:2010hb} is
\begin{align}
& \label{relpar}
M_*^2=M_P^2\a\;,~~ \a_1=\a\;,~~\a_2=\a/\lambda\;,~~\a_3=\a/\xi\;,
\end{align}
where $M_P$ is the Planck mass. In what follows, we will write
\begin{align}
& \alpha_1 - \alpha_2 = 2 \alpha_1 \bar{\alpha} \,.
\end{align}
We also discuss the projectable version, where the lapse function is
postulated to be space-independent,
\begin{align}
& N = N(t)\,.
\end{align}
The action for the projectable version can be obtained simply by dropping the
perturbation of the lapse function in the action for the non-projectable
version. Then the parameters $\beta_3$, $\beta_4$, $\gamma_3$, and
$\gamma_4$ are irrelevant in the projectable theory.
For both the non-projectable and projectable versions, we add as the inflaton
a Lifshitz scalar field whose Lagrangian density is given by:
\begin{align}
& \label{LagrInf}
{\cal L}_{inf}=N\sqrt{h}\biggl\{\frac{(\dot\Phi-N^i
\partial_i\Phi)^2}{2N^2}
- \frac{\varkappa_1}{2} \nabla_i \Phi \nabla^{i} \Phi -
\frac{\varkappa_2}{2 M_*^2} \nabla_i\nabla_j \Phi \nabla^i\nabla^j \Phi \cr
& \qquad \qquad \qquad \qquad
- \frac{\varkappa_3}{2M_*^4} \nabla_i\nabla_j \nabla_k \Phi \nabla^i \nabla^j\nabla^k \Phi-V(\Phi)\biggr\}\;.
\end{align}
In principle, the coefficients $\varkappa_{1,2,3}$ here can be functions of
the field $\Phi$ which has zero scaling dimension. We concentrate on
the case of constant coefficients for simplicity.
We assume that the inflaton is minimally coupled to the gravity
sector. We will briefly discuss a non-minimally coupled case in
Sec.~\ref{Sec:concl}.
\subsubsection{Parameter hierarchy}
The Lagrangian density (\ref{Lagrfull}) contains a number of
parameters. Here we discuss the hierarchy between them. Stability and
constraints on deviations from Lorentz invariance
at low energies require~\cite{Blas:2010hb},
\begin{align}
& 0 < \alpha_1 \ll 1\,.
\end{align}
Consider now
the propagation of gravitational waves in flat spacetime where their
dispersion relation is given by
\begin{align}
& \omega^2(p) = p^2\,
\sum_{z=1}^3 \varkappa_{\gamma,z} \left( \frac{p}{M_*} \right)^{2(z-1)}
\, ,
\end{align}
with
\begin{align}
& \varkappa_{\gamma,1} \equiv \frac{\alpha_1}{\alpha_3}\,, \qquad \varkappa_{\gamma,2} \equiv
\frac{\alpha_1}{\beta_1}\,, \qquad \varkappa_{\gamma,3} \equiv \frac{\alpha_1}{
\gamma_1}\,. \label{Def:vk}
\end{align}
The coefficient $\varkappa_{\gamma, 1}$ determines (the square
of) the propagation speed of the gravitational waves at low energies.
According to the constraints from the observation of the Hulse-Taylor
pulsar~\cite{Jimenez:2015bwa} and more directly from the detections of
the gravitational waves at the two detector sites~\cite{Blas:2016qmn},
the propagation speed of the gravitational
waves in the IR should be of order of the speed of light, which imposes
$\alpha_1 \simeq \alpha_3$. The recent detections of GW170817 and
GRB170817A give a tight constraint
$|\varkappa_{\gamma,1}-1|<10^{-15}$ \cite{Monitor:2017mdv}. (See also
Ref.~\cite{Moore:2001bv} for the constraint on the subluminal
propagation of the gravitational waves from the absence of
the gravitational Cherenkov radiation.)
Next, requiring that the transition from the linear dispersion relation to
Lifshitz scaling happens at $p\sim M_*$, we obtain the requirements
$\varkappa_{\gamma,2}, \varkappa_{\gamma,3}\simeq 1$.
By combining these
two conditions, we obtain
\begin{align}
& \alpha_1 \simeq \alpha_3 \simeq \beta_1 \simeq \gamma_1 \ll 1
\,. \label{parameterGW}
\end{align}
Let us now turn to khronon.
In the projectable version its dispersion relation reads,
\begin{align}
& \omega_{pr}^2(p) = \frac{\alpha_1\bar{\alpha}
}{1+ \bar{\alpha}} p^2 \left[ -\frac{1}{\alpha_3} + \biggl( \frac{3}{\beta_1} + \frac{8}{\beta_2}
\biggr) \biggl( \frac{p}{M_*} \biggr)^2+ \biggl( \frac{3}{\gamma_1} + \frac{8}{\gamma_2}
\biggr) \biggl( \frac{p}{M_*} \biggr)^4 \right]\,. \label{Exp:DRP}
\end{align}
The first term in the square brackets is negative and is responsible
for gradient instability in IR. On the other hand, the remaining
terms in (\ref{Exp:DRP}) can be chosen positive, so that at $p>M_*$
the dispersion relation is well-behaved.
Again,
setting the transition to Lifshitz scaling
at around $p \simeq M_*$ and taking into account (\ref{parameterGW}) we
obtain
\begin{align}
& \alpha_{1,3} \simeq \beta_{1,2} \simeq \gamma_{1,2} \ll 1\,. \label{parameterP}
\end{align}
Further requiring that the overall magnitude of the
frequency $\omega_{pr}(p)$
in UV is
${\cal O}(p^z/M_*^{z-1})$ we set $\bar{\alpha} \simeq {\cal O}(1)$.
To sum up, in the projectable case we will work under the assumptions,
\begin{equation}
\label{parameterProj}
\a_{1,2,3}\simeq\b_{1,2}\simeq\gamma_{1,2}\ll 1\,,\qquad\bar\a={\cal O}(1)
\qquad\qquad \text{(projectable).}
\end{equation}
In the non-projectable version, the dispersion relation for khronon
becomes more complicated and is given by
\begin{align}
& \omega_{npr}^2(p) = \omega_{pr}^2(p)+ \frac{2\alpha_1 \bar{\alpha}}{1+ \bar{\alpha}}\, p^2 \, \frac{\left[ -
\frac{1}{\alpha_3} + \frac{1}{\beta_3} (
\frac{p}{M_*})^2 + \frac{1}{\gamma_3} (
\frac{p}{M_*})^4 \right]^2 }{1+ \frac{1}{\beta_4} (
\frac{p}{M_*})^2 + \frac{1}{\gamma_4} (
\frac{p}{M_*} )^4 }\;, \label{Exp:DRNP}
\end{align}
where the second piece comes from integrating out the lapse function
$N$ which enters into the action without time derivatives.
Setting the transition scale at $p \simeq M_*$ and using
Eq.~(\ref{parameterGW}), we obtain
\begin{align}
& \alpha_{1,3} \simeq \beta_{1,2,3} \simeq \gamma_{1,2,3} \ll 1
\,,\qquad \beta_4 \simeq \gamma_4 = {\cal O}(1)\,. \label{parameterNP0}
\end{align}
Similarly to the discussion of the projectable version, we assume that
$\omega(p)$ becomes ${\cal O}(p^z/M_*^{z-1})$ in UV and obtain
\begin{align}
& \bar{\alpha} \simeq \gamma_3^2/\a_1 \ll 1 \,.
\end{align}
Notice that the order of $\bar{\alpha}$ in the
non-projectable version is different from the one in the projectable
version, cf. Eq.~(\ref{parameterProj}).
Combining all
conditions together, we obtain
\begin{align}
& \alpha_{1,2,3} \simeq \beta_{1,2,3} \simeq \gamma_{1,2,3} \simeq
\bar{\alpha} \ll 1 \,,\qquad \b_4\simeq\gamma_4={\cal O}(1)
\qquad\qquad \text{(non-projectable).}
\label{parameterNP}
\end{align}
The parameters which satisfy these
conditions are consistent with the experimental data in
IR\footnote{We leave aside the question of stability of the parameter
hierarchy under radiative corrections.}~\cite{Blas:2010hb}.
\subsection{Background equations}
\label{ssec:bg}
Equations for the inflationary background read,
\begin{align}
&3M_P^2\frac{1+\bar\a}{1-2\bar\a} H^2
= \frac{\dot\phi^2}{2}+V\,, \label{FRWeqs} \\
& \ddot\phi+ 3 H\dot\phi+V_\phi=0\,, \label{FRWeq3}
\end{align}
where $\phi$ is the background value of the inflaton and
$V_\phi$ denotes the derivative of $V$ with respect to $\phi$.
Positivity of the l.h.s. in the Friedmann equation (\ref{FRWeqs})
requires $\bar{\alpha}$ to be in the range $- 1 < \bar{\alpha} < 1/2$.
We define the slow-roll parameters,
\begin{align}
& \varepsilon_1 \equiv - \frac{\dot{H}}{H^2}
= \frac{1- 2 \bar{\alpha}}{2(1+\bar{\alpha})} \left(
\frac{\dot{\phi}}{M_P H} \right)^2 \,, \label{epsilon1}
\end{align}
and
\begin{align}
& \varepsilon_n = \frac{{\rm d} \ln \varepsilon_{n-1}}{{\rm d} \ln a}\,,
\end{align}
for $n\geq 2$. The expressions for the slow-roll
parameters agree with the standard ones up to ${\cal O}(\bar{\alpha})$
corrections. Using $\varepsilon_2$ we can express the second
derivative of $\phi$ as
\begin{align}
& \frac{\ddot{\phi}}{H \dot{\phi}} = \frac{\varepsilon_2}{2}
- \varepsilon_1\,.
\end{align}
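Eq.~(\ref{epsilon1}) itself follows directly from the background equations: differentiating Eq.~(\ref{FRWeqs}) with respect to time and using Eq.~(\ref{FRWeq3}) gives
\begin{align}
6 M_P^2\, \frac{1+\bar{\alpha}}{1-2\bar{\alpha}}\, H \dot{H}
= \dot{\phi}\,\bigl(\ddot{\phi} + V_\phi\bigr) = - 3 H \dot{\phi}^2\,,
\end{align}
so that $\dot{H} = - \frac{1-2\bar{\alpha}}{2(1+\bar{\alpha})}\, \frac{\dot{\phi}^2}{M_P^2}$, which reproduces Eq.~(\ref{epsilon1}).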
We also define the slow-roll parameters $\varepsilon_V$ and $\eta_V$ as
\begin{align}
&\varepsilon_V \equiv \frac{M^2_P}{2}\left(\frac{V_\phi}{V}\right)^2 = \frac{1-2\bar{\alpha}}{1+\bar{\alpha}} \varepsilon_1 + {\cal O}(\varepsilon^2)\,, \label{epsilonV}\\
&\eta_V \equiv M^2_P \frac{V_{\phi \phi}}{V} =
\frac{1-2\bar{\alpha}}{1+\bar{\alpha}}\left( 2\varepsilon_1-
\frac{\varepsilon_2}{2} \right )+{\cal O}(\varepsilon^2)\,,
\label{etaV}
\end{align}
where $V_{\phi \phi} \equiv {\rm d}^2 V/{\rm d} \phi^2$.
In the limit $\bar{\alpha} \rightarrow 0$ the relations between
$(\varepsilon_1,\, \varepsilon_2)$ and $(\varepsilon_V,\, \eta_V)$
agree with those in GR.
\subsection{Lifshitz scalar in a fixed background}
\label{ssec:Lif}
As a warm-up exercise, in this subsection we briefly review the
computation of the spectrum of a probe massless scalar
field $\varphi$ in a fixed inflationary
background. From now on we will work in conformal time $t$ and denote
derivatives with respect to it by primes. The action for Fourier modes
of the field reads,
\begin{align}
& S_{scalar} = \frac{1}{2} \int {\rm d} t \int {\rm d}^3 \bm{p}\,a^2 \left[
\varphi'_{\sbm{p}} \varphi'_{- \sbm{p}} -
{\omega^2_\varphi(t,\, p)} \varphi_{\sbm{p}} \varphi_{-\sbm{p}} \right]\,.
\end{align}
Anisotropic scaling in UV implies modified dispersion
relation~\cite{Mukohyama:2009gg},
\begin{align}
& \frac{\omega_\varphi^2 (t, p)}{{\cal H}^2} =\frac{p^2}{{\cal
H}^2}\left[ \varkappa_1 + \varkappa_2 \left( \frac{p}{a M_*} \right)^2+
\varkappa_3 \left( \frac{p}{a M_*} \right)^4 \right]
\,,
\label{LSdisp}
\end{align}
where ${\cal{H}} =a'/a = aH$. The mode equation is given by
\begin{align}
& \varphi_p'' + 2{\cal{H}}\varphi_p' + \omega^2_{\varphi}\varphi_p = 0\,.
\end{align}
During inflation, we have
\begin{align}
& {\cal{H}} =-1/t\,, \label{Eq:cH}
\end{align}
where we have neglected the corrections suppressed by the slow-roll
parameters.
When the contribution from either of $z=1,\, 2,\, 3$ dominates the others,
using Eq.~(\ref{Eq:cH}) and imposing the adiabatic initial condition:
\begin{align}
& \varphi_p(t) \to \frac{1}{a} \frac{1}{\sqrt{2
\omega_\varphi}} e^{- i \int {\rm d} t\, \omega_\varphi}\,,
\end{align}
we can solve the mode equation as
\begin{align}
\varphi_p = \frac{1}{2a}\sqrt{\frac{-\pi t}{z}}
e^{i\frac{\pi(2\nu+1)}{4}}H^{(1)}_\nu
\left[\frac{\sqrt{\varkappa_z}}{z} \frac{p}{{\cal H}} \left(
\frac{p}{a M_*} \right)^{z-1} \right]\,,\label{phip}
\end{align}
where the index of the Hankel function is given by
\begin{align}
\label{nu}
\nu= \frac{3}{2z} \;.
\end{align}
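Note that, using $a = -1/(Ht)$ with $H$ approximately constant, the argument of the Hankel function in Eq.~(\ref{phip}) can be written as
\begin{align}
\frac{\sqrt{\varkappa_z}}{z}\, \frac{p}{{\cal H}} \left( \frac{p}{a M_*} \right)^{z-1}
= \frac{\sqrt{\varkappa_z}}{z}\, p^{z} \left( \frac{H}{M_*} \right)^{z-1} (-t)^{z}\,,
\end{align}
so that it grows towards the past, where the WKB form of the solution is recovered, and decreases after the Hubble crossing.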
At the Hubble crossing, $\omega_\varphi/{\cal H} \simeq z$, we obtain
the power spectrum
of Lifshitz scalar $\varphi$ as
\begin{align}
& {\cal P}_{LS}(p) \equiv \frac{p^3}{2\pi^2} \left| \varphi_p \right|^2 =
\frac{\a_1^{\nu(z-1)}}{\varkappa_z^\nu}\frac{(2^\nu \Gamma[\nu])^2}{8\pi^3} z^{\frac{3}{z}-1}
M_P^2\left( \frac{H_{p}}{M_P} \right)^{\frac{3}{z} - 1} \,, \label{Exp:Pvarphi}
\end{align}
where $H_p$ denotes the Hubble parameter at this time. In order to
obtain the power spectrum at the end of inflation, we need to solve the
time evolution also after $\omega_\varphi/{\cal H} \simeq z$.
In the massless case
$\varphi$ stops evolving in time soon after the Hubble crossing.
Then Eq.~(\ref{Exp:Pvarphi}) gives the spectrum of $\varphi$ at the end
of inflation. Notice that, as discussed in
Ref.~\cite{Mukohyama:2009gg},
for $z=3$ the spectrum of Lifshitz scalar is exactly flat. This is a
consequence of the fact that for $z=3$ the scaling dimension of the
scalar $\varphi$ vanishes.
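Indeed, setting $z=3$ in Eq.~(\ref{Exp:Pvarphi}) gives $\nu = 1/2$, and using $(2^{1/2}\Gamma[1/2])^2 = 2\pi$ one finds
\begin{align}
{\cal P}_{LS}(p)\big|_{z=3} = \frac{\alpha_1}{4 \pi^2 \sqrt{\varkappa_3}}\, M_P^2\,,
\end{align}
which is manifestly independent of both $p$ and $H_p$.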
If the Lifshitz scalar has a small mass, its evolution must also be
traced after the Hubble crossing and the final spectrum in general
depends on the details of this evolution.
\subsection{Gravitational waves}
\label{ssec:gw}
In this subsection we compute the spectrum of the gravitational waves
generated during inflation in HL gravity. We consider the metric,
\begin{align}
& N= 1, \quad N_i = 0, \quad h_{ij} = a^2 \left( \delta_{ij} + \gamma_{ij} \right)
\end{align}
with the transverse traceless condition on the perturbations:
\begin{align}
& \partial_i \gamma_{ij} =0 \,, \qquad \gamma_{ii}=0\,.
\end{align}
The quadratic Lagrangian density for the gravitational waves is given by
\begin{align}
& {\cal L}_{GW} = \frac{M_*^2}{8 \alpha_1}a^2 \biggl[
{\gamma '}^i\!_j {\gamma '}^j\!_i - \frac{\alpha_1}{\alpha_3}
\partial_k \gamma^i\!_j \partial^k \gamma^j\!_i
-\frac{\alpha_1}{\beta_1 M_*^2} a^{-2} \partial^2 \gamma^i\!_j
\partial^2 \gamma^j\!_i - \frac{\alpha_1}{\gamma_1 M_*^4}a^{-4}
\partial^2 \partial_k \gamma^i\!_j \partial^2 \partial^k \gamma^j\!_i
\biggr] . \label{actionGW}
\end{align}
This is the most general form of the Lagrangian for
linear tensor perturbations in HL gravity in the absence
of parity violation and non-minimal coupling to the inflaton.
(See Ref.~\cite{TakahashiSoda} for the
computation of the polarized gravitational wave spectrum in the presence
of the parity violation.)
Taking variation with respect to $ \gamma_{ij}$, we obtain the mode
equation for $\gamma_{ij}$ as usual,
\begin{align}
& \gamma_{ij\, p}'' + 2{\cal{H}} \gamma_{ij\, p}' +
\omega^2_\gamma \, \gamma_{ij\, p}=0\,,
\end{align}
where the frequency $\omega_{\gamma}$ is given by
\begin{align}
& \frac{\omega^2_{\gamma}(t, p)}{{\cal{H}}^2} = \left(
\frac{p}{{\cal{H}}} \right)^{\!2}\,
\sum_{z=1}^3 \varkappa_{\gamma,z} \left( \frac{p}{a M_*} \right)^{2(z-1)}
\, ,
\end{align}
with $\varkappa_{\gamma,z}$ given in Eq.~(\ref{Def:vk}). We quantize the
gravitational waves as
\begin{align}
& \gamma_{ij}(x) = \sum_{\lambda=\pm} \int \frac{{\rm d}^3
\bm{p}}{(2\pi)^{3/2}} \gamma_p (t)
e^{(\lambda)i}\!_j(\bm{p}) e^{i \sbm{p} \cdot \sbm{x}}
a_{\sbm{p}}^{(\lambda)} + ({\rm h.c.})\,, \label{Exp:gI}
\end{align}
where $\lambda$ is the helicity of the gravitational waves,
$e^{(\lambda)}_{ij}$ are the standard transverse and traceless
polarization tensors, and $a_{\sbm{k}}^{(\lambda)}$ are the annihilation operators which satisfy
\begin{eqnarray}
\left[ a_{\sbm{k}}^{(\lambda)},\, a_{\sbm{p}}^{(\lambda')\dagger}
\right] = \delta_{\lambda \lambda'} \delta^{(3)} (\bm{k} - \bm{p})\,.
\end{eqnarray}
The number of polarizations in HL gravity is the same as in GR.
Imposing the adiabatic initial condition:
\begin{align}
& \gamma_p(t) \to \frac{2}{a M_P} \frac{1}{\sqrt{2
\omega_\gamma}} e^{- i \int {\rm d} t\, \omega_\gamma}\,,
\end{align}
we obtain the mode functions $\gamma_p$ as
\begin{align}
& \gamma_p(t) = \frac{1}{M_P a}\sqrt{\frac{-\pi t}{z}}
e^{i \frac{\pi (2\nu +1)}{4}} H_{\nu}^{(1)} \left[
\frac{\sqrt{\varkappa_{\gamma,z}}}{z} \frac{p}{{\cal H}} \left(
\frac{p}{a M_*} \right)^{z-1} \right]\,,
\label{gammap}
\end{align}
where the Hankel index $\nu$ is given in Eq.~(\ref{nu}).
As in GR, $\gamma_p$ is conserved in time for
$\omega_\gamma/{\cal H} < z$. Using Eq.~(\ref{gammap}) we obtain the power spectrum of the gravitational waves as
\begin{align}
& {\cal P}_\gamma \equiv \frac{p^3}{\pi^2} \left| \gamma_p \right|^2 =
\frac{\alpha_1^{\nu (z-1)}}{\varkappa_{\gamma,z}^{\nu}}
\frac{(2^{\nu} \Gamma[\nu])^2}{\pi^3} z^{\frac{3}{z}-1}
\left( \frac{H_{p, \gamma}}{M_P} \right)^{\frac{3}{z} - 1} \,,\label{Exp:powertens}
\end{align}
where $H_{p, \gamma}$ denotes the Hubble parameter when
$\omega_\gamma/{\cal H} \simeq z$.
The spectral index for the gravitational waves is given by
\begin{align}
& n_t \equiv \frac{{\rm d} \ln {\cal{P}}_{\gamma}}{{\rm d} \ln p} \simeq -
\frac{3-z}{z} \varepsilon_1 \,. \label{Exp:nt}
\end{align}
In a 4D Diff invariant theory, the spectrum of the primordial gravitational waves
is generically red-tilted in an inflationary universe with
$\varepsilon_1 > 0$~\cite{CGNV}. By contrast, in HL gravity, for
$z=3$, the spectral index $n_t$ vanishes even if $\varepsilon_1 \neq 0$.
This serves as a distinctive feature of the anisotropic scaling
regime of gravity.
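To make this explicit, note that for $z=1$ Eq.~(\ref{Exp:nt}) reduces to the GR result $n_t = -2\varepsilon_1$, which, combined with the standard single-field slow-roll result $r = 16\varepsilon_1$ for the tensor to scalar ratio, yields the consistency relation
\begin{align}
r = - 8\, n_t \qquad \text{(GR)}\,,
\end{align}
whereas for $z=3$ one has $n_t = 0$ at $\varepsilon_1 \neq 0$, so that this relation is necessarily violated.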
Since the lapse function is irrelevant to the
gravitational waves at linear order in perturbations, the results
of this section apply both to the projectable
and non-projectable versions of HL gravity.
\section{Decoupling and non-decoupling of khronon}
\label{Sec:Khronon}
In this section, we consider the scalar linear perturbations including
the inflaton and metric perturbations. We express the fields as,
\begin{align}
& \Phi(t,\, \bm{x}) = \phi(t) + \varphi(t,\, \bm{x})\,,
\quad N=a(1+\delta N)\,,\quad N_i=a^2\partial_iB\;,
\quad h_{ij} = a^2e^{2{\cal R}} \delta_{ij}\,. \label{Exp:ADMmetric}
\end{align}
In general relativity, the
metric perturbation ${\cal R}$ and the fluctuation of
the inflaton $\varphi$ are not independent. By contrast,
in HL gravity ${\cal R}$ serves as an
additional scalar degree of freedom, the khronon, as a consequence of the
lack of 4D Diff invariance. In this section we discuss the evolution of
khronon both in the projectable
and non-projectable versions of HL gravity. We will find that the khronon
behaviour differs qualitatively in these two cases.
\subsection{Projectable HL gravity}
\label{ssec:P}
First we consider the projectable version of HL gravity. A review of this
version can be found in
Ref.~\cite{Mukohyama:2010xz}.
In this case the lapse function is constrained to be homogeneous and
does not affect local physics. Setting $\delta N=0$
and integrating out the non-dynamical field $B$ we find the action,
\begin{align}
\label{genLagRf}
S = \int {\rm d} t \int {\rm d}^3 \bm{p}
\left[ {\cal{L}}_{{\cal R}} + {\cal{L}}_{\varphi} + {\cal{L}}_{{\cal R}\varphi}\right]\,,
\end{align}
with
\begin{align}
&{\cal L}_{{\cal R}} = a^2M_*^2 \frac{1+
\bar{\alpha}}{\alpha_1\bar{\alpha}} \left[{{\cal R} '}_{\sbm{p}}
{{\cal R} '}_{- \sbm{p}} - \omega_{{\cal R}}^2(t,\,p) {\cal R}_{\sbm{p}}
{\cal R}_{- \sbm{p}} \right]\,, \\
%
& {\cal L}_{\varphi} = \frac{a^2}{2} \left[{\varphi '}_{\sbm{p}}
{\varphi '}_{- \sbm{p}} - \omega_{\varphi}^2(t,\,p) \varphi_{\sbm{p}}
\varphi_{- \sbm{p}} \right]\,, \\
%
& {\cal L}_{{\cal R}\varphi} =a^2
\frac{1-2\bar{\alpha}}{\bar{\alpha}} {\phi '}\varphi_{\sbm{p}}{{\cal R} '}_{-\sbm{p}} \,. \label{mix_pro}
\end{align}
The frequencies $\omega^2_{\cal R}$ and $\omega^2_\varphi$ are given by
\begin{align}
\label{omegaR_pro}
& \frac{\omega^2_{{\cal R}}(t,\,p)}{{\cal{H}}^2} = \frac{\alpha_1\bar{\alpha}
}{1+ \bar{\alpha}} \biggl(\frac{p}{{\cal{H}}}\biggr)^2 \left[ -
\frac{1}{\alpha_3} + \biggl( \frac{3}{\beta_1} + \frac{8}{\beta_2}
\biggr) \biggl( \frac{p}{a M_*} \biggr)^2+ \biggl( \frac{3}{\gamma_1} + \frac{8}{\gamma_2}
\biggr) \biggl( \frac{p}{a M_*} \biggr)^4 \right]\,,\\
&\frac{\omega^2_{\varphi}(t,\,p)}{{\cal{H}}^2} =
\biggl(\frac{p}{{\cal{H}}}\biggr)^2 \left[ \varkappa_1 + \varkappa_2 \biggl(
\frac{p}{a M_*} \biggr)^2 + \varkappa_3 \biggl( \frac{p}{a M_*} \biggr)^4 \right]
-\frac{1+\bar{\alpha}}{\bar{\alpha}}\varepsilon_1+ \frac{3(1+\bar{\alpha})}{1-2\bar{\alpha}}\eta_V\,.
\end{align}
We observe that in a de Sitter universe, where the inflaton is absent,
khronon ${\cal R}$ behaves as a massless Lifshitz scalar and thus is conserved at
super Hubble scales. However, mixing with the inflaton (\ref{mix_pro})
essentially modifies the dynamics.
Positivity of khronon kinetic energy requires,
\begin{align}
& \frac{1+ \bar{\alpha}}{\alpha_1\bar{\alpha}} > 0\,,
\end{align}
which implies that ${\cal R}$ suffers from a gradient instability in the IR
limit, since $\alpha_3 > 0$.
An attempt to suppress this instability by taking the coefficient
$\alpha_1 \bar{\alpha}/\alpha_3(1+ \bar{\alpha})$ to be small leads to
strong coupling and invalidates the perturbative description
(see a detailed discussion in Ref.~\cite{Blas:2010hb}).
Thus, projectable HL gravity cannot provide a viable low-energy
phenomenology in the regime of weak coupling. By analogy with
non-Abelian gauge theories, one might envision a scenario where strong
coupling occurs only in IR and leads to confinement of khronon at low
energies. However, currently there exist no controllable
realizations of this scenario. Here we restrict to the anisotropic
scaling regime where the second and third terms in the brackets in
(\ref{omegaR_pro}) dominate and the theory is stable and weakly coupled.
Estimates of various terms in the Lagrangian show that at
$\omega_{{\cal R}},\omega_{\varphi}\gg {\cal H}\sqrt{\varepsilon}$
the mixing term between ${\cal R}$ and $\varphi$ is negligible and these two
fields evolve independently.
Assuming that either $z=2$ or $z=3$ contribution is
dominant and imposing the standard WKB initial condition we find the
mode functions for ${\cal R}$ and $\varphi$,
\begin{align}
&{\cal R}_p(t) = \frac{1}{M_P a}\,\sqrt{\frac{\bar{\alpha}}{1+ \bar{\alpha}}}\,
\sqrt{\frac{-\pi t}{8z}}
e^{i\frac{\pi(2\nu+1)}{4}}
H^{(1)}_{\nu}\left[ \frac{\omega_{{\cal R}}}{z{\cal H}} \right]\,,
\label{Khronon_pro}\\
&\varphi_p(t) =\frac{1}{2a}\,
\sqrt{\frac{-\pi t}{z}}e^{i\frac{\pi(2\nu+1)}{4}}
H^{(1)}_{\nu}\left[\frac{\omega_{\varphi}}{z{\cal H}}
\right]\,,
\end{align}
where the Hankel index $\nu$ is given by Eq.~(\ref{nu}).
We did not write out the arguments of the Hankel functions explicitly;
they have the same dependence on $p$ and $t$ as those
in Eqs.~(\ref{phip}) and (\ref{gammap}). We have also
neglected the slow-roll correction in
$\omega_\varphi/{\cal H}$, since the momentum dependent contribution
dominates it in this regime.
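The WKB initial condition corresponds to the large-argument asymptotics $|H^{(1)}_\nu(x)|\to\sqrt{2/\pi x}$ of the Hankel functions, so that deep inside the horizon the mode functions reduce to plane waves. A quick numerical sketch (not part of the derivation; the values $\nu=3/2z$ correspond to $z=3,2,1$):

```python
# Numerical sketch: the Hankel functions in the mode functions approach
# WKB plane waves deep inside the horizon, |H^(1)_nu(x)| -> sqrt(2/(pi x)).
import math
from scipy.special import hankel1

for nu in (0.5, 0.75, 1.5):        # nu = 3/(2z) for z = 3, 2, 1
    x = 200.0                      # omega/(z H) >> 1: sub-horizon limit
    wkb = math.sqrt(2.0 / (math.pi * x))
    assert abs(abs(hankel1(nu, x)) - wkb) / wkb < 1e-3
print("WKB asymptotics recovered")
```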
The above solutions cannot be extended to super Hubble evolution where
$\omega_{{\cal R}},\omega_{\varphi}\lesssim {\cal H}\sqrt{\varepsilon}$
and the mixing between ${\cal R}$ and $\varphi$ becomes important.
In this regime we have two light Lifshitz scalars, the inflaton and
khronon, which are mixed with each other. As in a 4D Diff invariant
theory with more than one light scalar field, in this case we do not
find an adiabatic mode which is conserved in time at large
scales (see, e.g., Ref.~\cite{Gordon:2000hv}). Then, in order to
compute the observed fluctuations, we need to
solve the time evolution which can depend on concrete models of the
reheating, the transition to the isotropic scaling regime, and so on.
As discussed above, this would also require a controllable description
of the mechanism that suppresses the IR instability of the theory, which is
currently missing.
Therefore it appears problematic to provide a robust prediction for
the primordial scalar power spectrum
in the projectable version of HL gravity.
\subsection{Non-projectable HL gravity}
\label{ssec:NP}
In this subsection we will find that the time evolution of khronon in
the non-projectable version is qualitatively different from the one in
the projectable version discussed above. In
particular, we will show that khronon is decoupled from the adiabatic
curvature perturbation $\zeta$ at large
scales. Because of that, $\zeta$ is conserved in time as in the single field
model with 4D Diff invariance. Therefore, we can derive a
robust prediction for the power spectrum of $\zeta$ without solving
the detailed evolution after the Hubble crossing.
\subsubsection{Mass gap of khronon and anti-friction}
\label{sssec:mgap}
In the non-projectable version of HL gravity, upon eliminating the
non-dynamical fields $B$ and $\delta N$, we obtain the action for
${\cal R}$ and $\varphi$ in the form (\ref{genLagRf})
with
\begin{align}
& {\cal L}_{{\cal R}} = a^2 M_*^2 \frac{1+
\bar{\alpha}}{\alpha_1\bar\a} \left[(1- \Omega_1(t,\, p)) {\cal R}'_{\sbm{p}}
{\cal R}'_{- \sbm{p}} - \omega_{{\cal R}}^2(t,\,p) {\cal R}_{\sbm{p}}
{\cal R}_{- \sbm{p}} \right]\, \label{mixLagMB}\;.
\end{align}
The expressions for
${\cal L}_\varphi, {\cal L}_{{\cal R}\varphi}$ are given in
Appendix \ref{app:A}. We introduce the functions
$\Omega_i(t,\, p)$ with $i=1,\,2$ as
\begin{align}
& \Omega_1(t,\, p) = \left\{ 1 + \frac{\bar{\alpha}\varepsilon_1}{1- 2
\bar{\alpha}} + \frac{\alpha_1 \bar{\alpha}}{2(1+ \bar{\alpha})}
\biggl(\frac{p}{{\cal{H}}} \biggr)^2
\biggl[1+ \frac{1}{\beta_4} \biggl( \frac{p}{a M_*} \biggr)^2 +
\frac{1}{\gamma_4} \biggl( \frac{p}{a M_*} \biggr)^4 \biggr]
\right\}^{-1}, \label{Def:Omega1} \\
& \Omega_2(t,\,p) = \frac{\alpha_1 \bar{\alpha}}{1+ \bar{\alpha}}
\biggl(\frac{p}{{\cal{H}}} \biggr)^2 \left[ - \frac{1}{\alpha_3} + \frac{1}{\beta_3} \biggl(
\frac{p}{a M_*} \biggr)^2 + \frac{1}{\gamma_3} \biggl(
\frac{p}{a M_*} \biggr)^4 \right] \Omega_1(t,\, p)\,.
\label{Def:Omega2}
\end{align}
In terms of these quantities the frequency $\omega_{{\cal R}}$ is expressed as,
\begin{align}
\frac{\omega^2_{{\cal R}}(t,\,p)}{{\cal{H}}^2} =& \frac{\alpha_1\bar\a
}{1+ \bar{\alpha}} \biggl(\frac{p}{{\cal{H}}}\biggr)^2 \left[ -
\frac{1}{\alpha_3} + \biggl( \frac{3}{\beta_1} + \frac{8}{\beta_2}
\biggr) \biggl( \frac{p}{a M_*} \biggr)^2+ \biggl( \frac{3}{\gamma_1} + \frac{8}{\gamma_2}
\biggr) \biggl( \frac{p}{a M_*} \biggr)^4 \right] \cr
& +
\frac{\Omega_2^2(t,\,p)}{\Omega_1(t,\, p)}
- \frac{( a^2 {\cal H} \Omega_2(t,\,p))'}{ a^2
{\cal H}^2} . \label{Exp:omegaRMB}
\end{align}
We observe that the khronon Lagrangian is now much more complicated
than in the projectable case. A crucial new feature is the dependence
of the coefficient in front of the term with time derivatives in
(\ref{mixLagMB}) on the mode momentum. This leads to a peculiar
behavior of khronon in the inflationary universe, as we now discuss.
It is convenient to introduce the notation,
\begin{align}
X(t,p) &\equiv \biggl(\frac{p}{{\cal{H}}} \biggr)^2
\biggl\{1+ \frac{1}{\beta_4} \biggl( \frac{p}{a M_*} \biggr)^2 +
\frac{1}{\gamma_4} \biggl( \frac{p}{a M_*} \biggr)^4 \biggr\} \,.
\label{Xdef}
\end{align}
Roughly speaking, this quantity characterizes the (square of the)
ratio between the frequencies of the perturbations and the Hubble rate
(the precise expressions will be given below). In the course of
cosmological evolution it goes through four different regimes:
\begin{align}
\label{Xreg1}
&\text{{\it (a) }} \qquad X\gg 1/\a^2\;,\\
\label{Xreg2}
&\text{{\it (b) }} \qquad \varepsilon_1/\a\ll X\ll 1/\a^2\;,\\
\label{Xreg3}
&\text{{\it (c) }} \qquad 1\ll X\lesssim\varepsilon_1/\a\;,\\
\label{Xreg4}
&\text{{\it (d) }} \qquad X\ll 1\;.
\end{align}
Recall that in the non-projectable case the parameters
are assumed to satisfy the hierarchy (\ref{parameterNP}). Here and
below we symbolically denote the small quantities in
(\ref{parameterNP}) by $\a$. Additionally, we will assume for the
moment that
\begin{equation}
\label{epsalhier}
\varepsilon_1/\a\gg 1\;.
\end{equation}
The opposite case will be commented on at the end of the section. Let us
consider the above regimes one by one.
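For bookkeeping, the four regimes (a)--(d) can be summarized in a small classifier (an illustrative sketch, not from the paper; here `alpha` stands for the common order of magnitude of the small parameters in (\ref{parameterNP}), and the hierarchy (\ref{epsalhier}) is assumed):

```python
# Bookkeeping sketch: classify a mode into the regimes (a)-(d) given
# X(t,p), the symbolic small parameter alpha, and the slow-roll
# parameter eps1, assuming the hierarchy eps1/alpha >> 1.
def regime(X, alpha, eps1):
    assert eps1 / alpha > 1.0, "hierarchy eps1/alpha >> 1 assumed"
    if X > 1.0 / alpha**2:
        return "a"        # sub-horizon, flat-space dispersion relation
    if X > eps1 / alpha:
        return "b"        # mass gap opens, anti-friction growth
    if X > 1.0:
        return "c"        # khronon-inflaton mixing becomes essential
    return "d"            # super-Hubble regime

print([regime(X, 1e-2, 0.5) for X in (1e6, 1e3, 10.0, 0.1)])
# -> ['a', 'b', 'c', 'd']
```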
{\it (a)} $X\gg 1/\a^2$. In this case
\begin{gather}
\Omega_1 \simeq \frac{1}{\a_1\bar\a X}\ll 1\;,\\
\frac{\omega_{\cal R}^2}{{\cal H}^2}\simeq 2\a_1\bar\a
\bigg(\frac{p}{\cal H}\bigg)^2\frac{\left[ -
\frac{1}{\alpha_3} + \frac{1}{\beta_3} (
\frac{p}{aM_*})^2 + \frac{1}{\gamma_3} (
\frac{p}{aM_*})^4 \right]^2 }{1+ \frac{1}{\beta_4} (
\frac{p}{aM_*})^2 + \frac{1}{\gamma_4} (
\frac{p}{aM_*} )^4 }\;.\label{omegaRUV}
\end{gather}
In the latter expression we recognize the dispersion relation of
khronon in flat spacetime (\ref{Exp:DRNP}) (up to suppressed
corrections).
In the UV regime,
$p>aM_*$, it behaves as a Lifshitz scalar with $z=3$ and
\[
\omega_{\cal R}^2 \simeq \frac{2\gamma_4\a_1\bar\a}{\gamma_3^2}p^2\Big(\frac{p}{aM_*}\Big)^4\;,
\]
whereas if $p<aM_*$ (with (\ref{Xreg1}) still satisfied) it obeys
the $z=1$ scaling,
\[
\omega_{\cal R}^2\simeq \frac{2\a_1\bar\a}{\a_3^2}p^2\;.
\]
Note that in both limiting cases the ratio $\omega_{\cal R}^2/{\cal H}^2$
is of order $X(t,p)$. From the expressions given in Appendix~\ref{app:A}
one can infer that the inflaton $\varphi$ also behaves in this regime as a
Lifshitz scalar with dispersion relation (\ref{LSdisp}). Further, by
estimating various terms in the Lagrangian ${\cal L}_{{\cal R}\varphi}$,
Eq.~(\ref{mixLag}), it is straightforward to check that mixing between
modes ${\cal R}$ and $\varphi$ is negligible\footnote{Strictly speaking,
the mixing can be resonantly enhanced if the frequencies
$\omega_{\cal R}(t,p)$ and $\omega_\varphi(t,p)$ happen to cross at some
specific time. In our analysis, we do not consider this
possibility. However, even if the crossing takes place, the mode
functions stay in the WKB form and the time evolution remains
essentially unchanged after the crossing.}.
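As a numerical sketch (with illustrative, assumed values of the couplings), one can check that the frequency (\ref{omegaRUV}) indeed interpolates between the quoted $z=1$ and $z=3$ asymptotics:

```python
# Sketch with illustrative coupling values (not from the paper): the
# khronon frequency (omegaRUV) reduces to
#   omega^2 ~ 2 a1 ab / al3^2 * p^2                 for p << a M_*  (z=1)
#   omega^2 ~ 2 g4 a1 ab / g3^2 * p^2 (p/a M_*)^4   for p >> a M_*  (z=3)
a1, ab = 0.1, 0.05                          # alpha_1, bar alpha (assumed)
al3, b3, g3, b4, g4 = 0.5, 1.0, 1.0, 1.0, 1.0

def omegaR2(p, aM):
    q2 = (p / aM) ** 2
    num = (-1.0 / al3 + q2 / b3 + q2**2 / g3) ** 2
    den = 1.0 + q2 / b4 + q2**2 / g4
    return 2.0 * a1 * ab * p**2 * num / den

aM = 1.0
p_ir, p_uv = 1e-3 * aM, 1e2 * aM
ir = omegaR2(p_ir, aM) / (2 * a1 * ab / al3**2 * p_ir**2)
uv = omegaR2(p_uv, aM) / (2 * g4 * a1 * ab / g3**2 * p_uv**2 * (p_uv / aM) ** 4)
print(ir, uv)   # both close to 1
```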
The IR limit $p\ll aM_*$ of non-projectable HL gravity is closely
related to Einstein-aether theory~\cite{Jacobson:2010mx}.
Evolution of cosmological perturbations in the latter theory was
analyzed in~\cite{APSG} and
it was shown that in the short wavelength limit the khronon ${\cal R}$ and the
fluctuation of the inflaton $\varphi$ are decoupled from each
other, which allows one to impose the WKB initial condition as usual.
Our analysis provides a generalization of this result to the UV modes
of HL gravity where terms with Lifshitz scaling $z=2$ and $3$ are
important.
{\it (b)} $\varepsilon_1/\a\ll X\ll 1/\a^2$. In this regime we have,
\[
\Omega_1\approx 1-\frac{\a_1\bar\a}{2}X\;,
\]
and the khronon Lagrangian takes the form,
\[
{\cal L}_{\cal R}=a^2\frac{M^2_*X(t,p)}{2}\big({\cal R}'_{\sbm p}{\cal R}'_{-{\sbm p}}
-\bar\omega_{\cal R}^2(t,p){\cal R}_{\sbm p}{\cal R}_{-{\sbm p}}\big)\;.
\]
The khronon frequency $\bar\omega_{\cal R}$ now reads,
\begin{equation}
\frac{\bar\omega_{{\cal R}}^2}{{\cal H}^2}=\!
\frac{a^2\Big[m_{k,1}^2\!
+\!\frac{m_{k,2}^2}{\b_4}(\frac{p}{aM_*})^2
\!+\!\frac{m_{k,3}^2}{\gamma_4}(\frac{p}{aM_*})^4\Big]}{
{\cal H}^2\left[1+\frac{1}{\b_4}(\frac{p}{aM_*})^2
+\frac{1}{\gamma_4}(\frac{p}{aM_*})^4\right]}
+2\a_1\bar\a
\bigg(\frac{p}{\cal H}\bigg)^{\!\!2}\frac{\left[ -
\frac{1}{\alpha_3} + \frac{1}{\beta_3} (
\frac{p}{aM_*})^2 + \frac{1}{\gamma_3} (
\frac{p}{aM_*})^4 \right]^2 }{1+ \frac{1}{\beta_4} (
\frac{p}{aM_*})^2 + \frac{1}{\gamma_4} (
\frac{p}{aM_*} )^4 },
\label{omegaRbar}
\end{equation}
where
\begin{align}
\label{khmass1}
& m_{k,1}^2 \equiv 2 \frac{\varepsilon_1}{\alpha_3} H^2 \,, \\
\label{khmass2}
& m_{k,2}^2 \equiv 2 \beta_4 \left( \frac{3}{\beta_1} +
\frac{8}{\beta_2} + \frac{1}{\beta_3} \right) H^2 \,, \\
\label{khmass3}
& m_{k,3}^2 \equiv 2 \gamma_4 \left( \frac{3}{\gamma_1} +
\frac{8}{\gamma_2} + \frac{3}{\gamma_3} \right) H^2\,.
\end{align}
The second term in (\ref{omegaRbar}) is the same as
(\ref{omegaRUV}). However, we observe that a new contribution appears
which gives the khronon a mass gap. In the regime where the terms with a
given $z=1,2,3$ dominate, the mass is given by $m_{k,z}$. Notice that
$m_{k,1}$ is suppressed
by the slow-roll parameter
compared to $m_{k,2}$ and $m_{k,3}$. Still, within our assumption
(\ref{epsalhier}) all the masses are parametrically larger than the
Hubble rate.
Inspection of the terms ${\cal L}_{\varphi}$ and ${\cal L}_{{\cal R}\varphi}$
in the Lagrangian (see Appendix~\ref{app:A}) again shows
that the inflaton fluctuation $\varphi$ behaves as a Lifshitz scalar
with dispersion relation (\ref{LSdisp}) and decouples from
${\cal R}$. Thus we can study evolution of ${\cal R}$ separately.
Due to the mass gap, the khronon rapidly oscillates. However, contrary
to what one could naively expect, the
amplitude of these oscillations does not decay in the Lifshitz
regime. When either the
$z=2$ or the $z=3$ contribution is dominant and the khronon frequency
$\bar\omega_{\cal R}$ is dominated by the mass term, as happens for
$X\ll 1/\a$,
the equation for ${\cal R}$ reads,
\begin{align}
& {\cal R}'' - 2 (z-1) {\cal H} {\cal R}' + a^2 m^2_{k,\,z} {\cal R} =0\,. \label{Eq:KhronondS}
\end{align}
The second term here produces an `anti-friction'. The
canonically normalized mode functions have the form in the WKB approximation,
\begin{align}
& {\cal R}(t) =\frac{H M_*^{z-2}}{p^z\sqrt{2m_{k,z}}}\,
(a(t))^{z-3/2} \, e^{- i \int {\rm d} t a(t) m_{k,z}}\,, \label{Exp:cRWKB}
\end{align}
and describe oscillations with a growing amplitude, $|{\cal R}_p|\propto
a^{z-3/2}$. We are going to see in the next subsection that the growth
of khronon perturbations persists also at $X<\varepsilon_1/\a$ as long as the
modes remain in the Lifshitz regime and stops only when they
pass into the isotropic scaling $z=1$. To stay within the validity
of perturbation theory, we will impose the requirement that the
amplitude of khronon perturbations remains small throughout the
cosmological evolution, $p^{3/2}|{\cal R}_p|<1$. This translates into certain
conditions on the inflationary parameters that will be discussed below.
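The anti-friction growth can be checked directly: in de Sitter space, where ${\cal H}=-1/\tau$, Eq.~(\ref{Eq:KhronondS}) becomes an Euler equation in $x=-\tau$, and a numerical integration reproduces $|{\cal R}_p|\propto a^{z-3/2}$. A standalone sketch with illustrative values of $z$, $H$ and $m_{k,z}$:

```python
# Numerical sketch of the anti-friction growth: integrate
#   R'' - 2(z-1) {cal H} R' + a^2 m^2 R = 0,  de Sitter: a = 1/(H x), x = -tau,
# with complex WKB initial data R = x^((3-2z)/2) e^{i mu ln x}, and check
# that the amplitude scales as a^(z-3/2).  Parameters are illustrative.
import math
from scipy.integrate import solve_ivp

z, H, m = 2, 1.0, 50.0                       # z=2 Lifshitz regime, m >> H
mu = math.sqrt((m / H) ** 2 - (2 * z - 3) ** 2 / 4.0)

def rhs(x, y):                                # y = (R, dR/dx)
    R, dR = y
    return [dR, -2 * (z - 1) / x * dR - (m / H) ** 2 / x**2 * R]

y0 = [1.0 + 0.0j, (3 - 2 * z) / 2.0 + 1j * mu]
sol = solve_ivp(rhs, [1.0, 0.1], y0, rtol=1e-10, atol=1e-12)

growth = abs(sol.y[0, -1])                    # a grows by 10 as x: 1 -> 0.1
print(growth, 10 ** (z - 1.5))                # expect growth ~ a^(z-3/2)
```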
\subsubsection{Khronon-inflaton mixing}
As the modes are further redshifted, the fields ${\cal R}$ and $\varphi$
get mixed and no longer provide a convenient basis for
perturbations. To find the appropriate basis, we study the Lagrangian
for ${\cal R}$ and $\varphi$ in the regime:
{\it (c)} $1\ll X\lesssim \varepsilon_1/\a$. We will focus in this subsection
on the case when the terms with Lifshitz
scaling $z=2$ or $3$ dominate in the dispersion relation. This is true
if the inflationary Hubble rate $H$ is bigger than $M_*$, which is
the scenario of primary interest to us. For completeness we consider
the case of isotropic scaling
in Appendix~\ref{app:iso}.
In the Lifshitz regime the leading mixing term is the first
contribution in (\ref{mixLag}). Simplifying the expressions using the
assumed parameter hierarchy and introducing the canonically normalized
field
\[
\hat {\cal R}\equiv \sqrt{2\varepsilon_1+\a_1X}\,M_P {\cal R}\;,
\]
we obtain the relevant part of the Lagrangian,
\begin{equation}
\label{Lagmixed}
{\cal L}=\frac{a^2}{2}\big(\hat {\cal R}'_{\sbm{p}}\hat{\cal R}'_{-\sbm{p}} -
\hat\omega_{{\cal R}}^2 \hat{\cal R}_{\sbm{p}}\hat{\cal R}_{-\sbm{p}}\big)
+\frac{a^2}{2} \big( \varphi'_{\sbm{p}} \varphi'_{- \sbm{p}} -
\omega^2_{\varphi} \varphi_{\sbm{p}} \varphi_{-\sbm{p}}\big)
-\frac{a^2}{\sqrt{1+\frac{\a_1 X}{2\varepsilon_1}}}
{\varphi '}_{\sbm{p}} {\hat{\cal R} '}_{- \sbm{p}}\;,
\end{equation}
where $\omega_\varphi^2$ is given by (\ref{LSdisp}) and
\[
\hat\omega_{\cal R}^2=\frac{\omega_{\cal R}^2}{\bar\a(\varepsilon_1+\a_1X/2)}\;.
\]
In deriving these expressions we have neglected contributions of order
${\cal H}$ to the frequencies. Note that $\hat\omega_{\cal R}$ is much
higher than $\omega_\varphi$. Indeed, we have
\begin{equation}
\label{omcromvarphi}
\hat\omega_{\cal R}^2\simeq a^2\frac{\a_1}{\varepsilon_1}m_{k,z}^2 X \simeq {\cal
H}^2\frac{X}{\varepsilon_1} \gg {\cal H}^2 X \simeq \omega_\varphi^2\;.
\end{equation}
The Lagrangian (\ref{Lagmixed}) confirms explicitly our previous
assertion that at $X\gg \varepsilon_1/\a$ the mixing between ${\cal R}$ and
$\varphi$ is negligible. On the other hand, we see that at $X\ll
\varepsilon_1/\a$ it becomes essential. To identify the independent modes we use
the substitution,
\begin{align}
&\chi_+ = \varphi\cos{\theta} -\hat{\cal R} \frac{\hat\omega_{\cal R}}{\omega_\varphi}
\sin{\theta} \,, \label{Def:chi+} \\
&\chi_- = \varphi\frac{\omega_\varphi}{\hat\omega_{\cal R}}\sin{\theta} +
\hat{\cal R} \cos{\theta}\, , \label{Def:chi-}
\end{align}
and find that the mixing term between $\chi_{\pm}$ disappears provided that\footnote{We
again neglect contributions proportional to ${\cal H}$ that come from
time variation of $\hat\omega_{\cal R}$, $\omega_\varphi$, $X$. These are
irrelevant as long as the frequencies of the fields are higher than the
Hubble rate.}
\begin{align}
\tan{2\theta}
=\frac{2\omega_\varphi\hat\omega_{\cal R}}{\hat\omega_{\cal R}^2-\omega_\varphi^2}
\frac{1}{\sqrt{1 + \frac{\alpha_1X}{2\varepsilon_1}}}\,. \label{theta}
\end{align}
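The diagonalization can be cross-checked symbolically (a sketch with the frequencies treated as constants, i.e. neglecting the ${\cal H}$-suppressed terms): the rotation (\ref{Def:chi+}), (\ref{Def:chi-}) removes the potential cross term for any $\theta$, and the kinetic cross term precisely for the angle (\ref{theta}):

```python
# Symbolic cross-check (illustrative sketch): substitute the inverse of
# the rotation (Def:chi+), (Def:chi-) into the Lagrangian (Lagmixed) and
# verify that both cross terms vanish for the mixing angle (theta).
import sympy as sp

th, w, wh, s = sp.symbols('theta omega_phi omega_R s', positive=True)
cp, cm, dcp, dcm = sp.symbols('chi_p chi_m dchi_p dchi_m')

# inverse rotation: (chi_+, chi_-) -> (varphi, hat R) and their velocities
phi = cp * sp.cos(th) + cm * (wh / w) * sp.sin(th)
R = -cp * (w / wh) * sp.sin(th) + cm * sp.cos(th)
dphi = dcp * sp.cos(th) + dcm * (wh / w) * sp.sin(th)
dR = -dcp * (w / wh) * sp.sin(th) + dcm * sp.cos(th)

L = sp.expand((dphi**2 - w**2 * phi**2) / 2
              + (dR**2 - wh**2 * R**2) / 2 - s * dphi * dR)

pot_cross = L.coeff(cp).coeff(cm)    # vanishes identically
kin_cross = L.coeff(dcp).coeff(dcm)  # vanishes for tan(2 theta) of Eq. (theta)
assert sp.simplify(pot_cross) == 0

vals = {w: 1, wh: 3, s: sp.Rational(2, 5)}           # sample values
theta_num = sp.atan(2 * sp.Rational(2, 5) * 1 * 3 / (3**2 - 1**2)) / 2
assert abs(float(kin_cross.subs(vals).subs(th, theta_num))) < 1e-12
print("cross terms removed by the rotation")
```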
Due to (\ref{omcromvarphi})
the mixing angle $\theta$ is always small and the expressions for
the new variables $\chi_\pm$ simplify.
At $X\ll \varepsilon_1/\a$ they become,
\begin{align}
\label{chi+simp}
&\chi_+=\varphi-\sqrt{2\varepsilon_1}M_P{\cal R}\;,\\
\label{chi-simp}
&\chi_-=\sqrt{2\varepsilon_1}M_P{\cal R}
+\Big(\frac{\omega_\varphi}{\hat\omega_{\cal R}}\Big)^2\varphi\;,
\end{align}
where we have switched back to the original metric perturbation
${\cal R}$. In the expression (\ref{chi+simp}) we recognize the standard
gauge invariant variable
\begin{align}
& \zeta \equiv {\cal R} - \frac{\cal H}{\phi'}\,\varphi=
-\frac{\cal H}{\phi'}\chi_+
\label{Def:zeta}
\end{align}
describing curvature perturbation on the slices of constant inflaton
field. The Lagrangian for $\chi_\pm$ reads,
\begin{align}
& {\cal L} = \frac{a^2}{2}
\big( {\chi_{+}'}^2 - \omega_\varphi^2(t,p) \chi_{+}^2\big)
+a^2\,\frac{\a_1X(t,p)}{4\varepsilon_1} \big( {\chi_{-}'}^2
- \bar\omega_{\cal R}^2(t,p) \chi_{-}^2\big)\;,
\label{diagLag}
\end{align}
where $\bar\omega_{\cal R}^2$ is the same as in (\ref{omegaRbar}). We see that
$\chi_+$ (or equivalently $\zeta$) inherits the dispersion relation of
the inflaton, whereas the second mode $\chi_-$ --- that of khronon. In
other words, in the regime (\ref{Xreg3}) we still have two independent
physical excitations, inflaton and khronon, with their respective
dispersion relations (\ref{LSdisp}), (\ref{omegaRbar}). The
corresponding eigenfunctions are connected to the original variables
by (\ref{chi+simp}), (\ref{chi-simp}). This is illustrated in
Fig.~\ref{fig_HL}.
\begin{figure}[ht]
\centering
\includegraphics[width=12.0cm, bb=0 0 750 850]{fig_HL_v4.png}
\caption{Summary of the time evolution of the fluctuations. The
central axis denotes the quantity $X(t,p)$ introduced in Eq.~(\ref{Xdef}). }
\label{fig_HL}
\end{figure}
\subsubsection{Long wavelength evolution and power spectrum}
We have shown that the variables $\chi_\pm$ are independent as long as
the frequencies of the modes remain higher than the Hubble rate. As
the modes redshift and approach the `horizon crossing', $X\simeq 1$, the
situation gets more complicated due to the terms proportional to
${\cal H}$ in the Lagrangian that can no longer be neglected. However,
the situation simplifies again for `super Hubble' modes corresponding
to the regime:
{\it (d)} $X\ll 1$. In the standard relativistic single field
inflation the curvature perturbation $\zeta$ is conserved at these
scales. In Appendix~\ref{app:A} we show that this also holds for
non-projectable HL gravity, despite the presence of khronon, by
explicitly writing the quadratic Lagrangian in terms of $\zeta$ and
$\varphi$. All non-derivative terms in the $\zeta$-equation turn out
to be suppressed by $X$, so that we obtain the solution,
\begin{equation}
\label{zetasuper}
\zeta={\rm const}\;.
\end{equation}
This allows us to immediately write down the power spectrum for $\zeta$
by matching to the amplitude of $\chi_+$ fluctuations at the Hubble
crossing, see Eq.~(\ref{Def:zeta}),
\begin{align}
& {\cal P}_\zeta (p) = \frac{1}{2 \varepsilon_{1, p} M_P^2}\, {\cal
P}_{LS}(p)\,, \label{Exp:PSzetaH}
\end{align}
where ${\cal P}_{LS}(p)$ is the power spectrum of the Lifshitz scalar
and $\varepsilon_{1,p}$ is the value of the slow-roll parameter
at the Hubble crossing time of the mode $p$. Explicitly we have,
\begin{align}
& {\cal P}_\zeta (p) =
\frac{\alpha_1^{\nu (z-1)}}{\varepsilon_{1, p}\varkappa_z^\nu}
\frac{(2^\nu \Gamma[\nu])^2}{16 \pi^3} z^{\frac{3}{z}-1}
\left( \frac{H_{p}}{M_P} \right)^{\frac{3}{z} - 1}\,,
\qquad\qquad
\nu=\frac{3}{2z}\;.
\label{Exp:powerzeta0}
\end{align}
Note that for $z=3$ the spectrum is independent of the Hubble rate at inflation,
\begin{equation}
\label{Pzeta3}
{\cal P}_\zeta(p)=\frac{1}{8\pi^2}\frac{\a_1}{\varepsilon_{1,p}\sqrt{\varkappa_3}}\;,
\qquad\qquad z=3\;.
\end{equation}
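As an arithmetic cross-check, at $z=3$ (so $\nu=1/2$) the general expression (\ref{Exp:powerzeta0}) indeed collapses to (\ref{Pzeta3}) and the $H$-dependence drops out, since $(2^{1/2}\Gamma[1/2])^2=2\pi$. A sketch with illustrative parameter values:

```python
# Arithmetic check (sketch): the general spectrum Exp:powerzeta0 at z=3
# (nu = 1/2) reduces to Pzeta3 and is independent of the Hubble rate.
import math

def P_zeta(z, alpha1, eps1, kappa, H_over_MP):
    nu = 3.0 / (2.0 * z)
    return (alpha1 ** (nu * (z - 1)) / (eps1 * kappa ** nu)
            * (2 ** nu * math.gamma(nu)) ** 2 / (16 * math.pi ** 3)
            * z ** (3.0 / z - 1) * H_over_MP ** (3.0 / z - 1))

alpha1, eps1, kappa3 = 0.1, 0.01, 2.0        # illustrative values
P3 = alpha1 / (8 * math.pi ** 2 * eps1 * math.sqrt(kappa3))
for H in (1e-3, 1.0, 1e3):                   # no dependence on H at z=3
    assert abs(P_zeta(3, alpha1, eps1, kappa3, H) - P3) < 1e-12 * P3
print("z=3 spectrum matches Pzeta3 and is H-independent")
```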
The spectral index is given by
\begin{align}
& n_s -1 \equiv \frac{{\rm d} \ln {\cal{P}}_{\zeta}}{{\rm d} \ln p} = -
\frac{3-z}{z} \varepsilon_1 -\varepsilon_2 \,,
\end{align}
or alternatively,
\begin{align}
& n_s - 1 = - \frac{3(1+z)}{z} \varepsilon_V + 2 \eta_V\,. \label{Exp:nsm1}
\end{align}
For $z=1$ we recover the standard expressions.\\
We now analyze the super Hubble behavior of khronon, or `isocurvature'
mode. Despite the fact that the frequency term for $\varphi$ in the
Lagrangian (\ref{tL}), as well as its mixing with $\zeta$, is
suppressed by $X$, it still evolves non-trivially, because its time
derivative term is also proportional to $X$. When the contributions
with Lifshitz scaling $z=2,3$ dominate, the equation for $\varphi$
following from (\ref{tL}), (\ref{Lagzeta2}) simplifies,
\begin{equation}
\varphi''-2{\cal H}(z-1)\varphi'+a^2m_{k,z}^2
\big(\varphi+\sqrt{2\varepsilon_1}M_P\zeta\big)=0\;.
\label{varphisup1}
\end{equation}
The combination in brackets in the last term is nothing but
$\sqrt{2\varepsilon_1}M_P{\cal R}$, which also coincides with $\chi_-$, up to
slow-roll suppressed corrections. Also Eq.~(\ref{varphisup1}) is the
same as the khronon equation (\ref{Eq:KhronondS}). We conclude that
khronon preserves its identity through Hubble crossing. Despite very
long wavelength of the modes, they continue to rapidly oscillate with
growing amplitude
due to anti-friction. The decoupling of $\zeta$ and ${\cal R}$ now
receives an intuitive explanation: these excitations have very
different frequencies and therefore cannot mix.
The amplitude of khronon oscillations ceases to grow when the
momentum redshifts down to $p/aM_*\simeq 1$. For
$\sqrt{\varepsilon_1}\ll p/aM_*\ll 1$ the equation for $\varphi$ reads,
\begin{equation}
\varphi''+\b_4\frac{p^2m_{k,2}^2}{M_*^2}
\big(\varphi+\sqrt{2\varepsilon_1}M_P\zeta\big)=0\;,
\label{varphisup2}
\end{equation}
and describes pure oscillations of ${\cal R}$ with constant amplitude. This
is illustrated in Fig.~\ref{Fig:khronon}. Finally, for
$p/aM_*\ll\sqrt{\varepsilon_1}$ the $\varphi$-equation becomes,
\begin{equation}
\varphi''+\frac{2{\cal H}^2\varepsilon_1}{\a_1}
\Big(\varkappa_1-\frac{\a_1}{\a_3}\Big)\varphi=0\;.
\label{varphisup3}
\end{equation}
First, we notice that $\varphi$ has completely decoupled from
$\zeta$. This is consistent with the result of \cite{APSG} which
studied inflation in the $z=1$ limit of HL gravity and identified the
independent modes in the super Hubble regime as $\zeta$ and
$\delta {\cal N}=({\cal H}/\phi') \varphi$. The latter has geometric
interpretation of the difference in the number of $e$-foldings between
the surfaces of constant inflaton (i.e. constant density) and constant
khronon. Second, the nature of solutions to (\ref{varphisup3}) depends
on the sign of the combination in brackets, which has the physical
meaning of the difference between (the squares of) the low-energy
velocities of the inflaton, $c_\varphi^2\equiv \varkappa_1$, and graviton
$c_\gamma^2\equiv\a_1/\a_3$ (see Eqs.~(\ref{Def:vk}), (\ref{LSdisp})). If
it is positive, the mode $\varphi$ performs rapid oscillations with
the physical frequency $\omega_\varphi/ a \simeq H\sqrt{\varepsilon_1/\a_1}\gg H$
and the amplitude decaying as $a^{-1/2}$. On the other hand, if
$\varkappa_1<\a_1/\a_3$, the solutions to (\ref{varphisup3}) exhibit an
exponential runaway behavior, signaling an instability. These two
cases are illustrated in Fig.~\ref{Fig:khronon}. To avoid instability,
we will assume that $\varkappa_1>\a_1/\a_3$.
Equation (\ref{varphisup3}) has been derived under the assumption that
the low-energy velocities of inflaton and graviton differ by a factor
of order one. Alternatively, one can impose the requirement that this
difference should be small, $c_\varphi^2-c_\gamma^2={\cal O}(\a)$, which
corresponds to an emergence of approximate Lorentz invariance at low
energies. In this case one must retain additional contributions of the
same order in the expression (\ref{tomegaphi}) for the frequency of
$\varphi$, so that the $\varphi$-Lagrangian becomes,
\begin{equation}
\tilde{\cal L}_\varphi=a^2\frac{\a_1}{4\varepsilon_1}\bigg(\frac{p}{\cal
H}\bigg)^2
\bigg[{\varphi'}^2-{\cal H}^2\,\varepsilon_1\bigg(1-\frac{\varepsilon_2}{2\varepsilon_1}
+\frac{2(c^2_\varphi-c^2_\gamma)}{\a_1}
+\frac{3c_{k}^2}{c_\gamma^2}\bigg)\,\varphi^2\bigg]\;,
\label{varphisup4}
\end{equation}
where $c_k^2\equiv 2\a_1\bar\a/\a_3^2$ is the low-energy velocity of
the khronon. Upon proper translation of notations, this coincides with the
Lagrangian for the isocurvature mode obtained in~\cite{APSG}. From
(\ref{varphisup4}) we see that the isocurvature mode evolves slowly
with the rate suppressed by the slow-roll parameter $\varepsilon_1$. This
behavior is also illustrated in Fig.~\ref{Fig:khronon}.
\begin{figure}[t]
\vspace{-6cm}
\centering
\includegraphics[width=10.0cm, bb=0 10 700 750]{mode_fig2.png}
\caption{The amplitude of a khronon mode with conformal momentum $p$
as a function of the scale
factor. It grows in the Lifshitz regime and reaches the value
$\sqrt{H/M_P}$. Then it remains constant till
$a\sim p/(M_*\sqrt{\varepsilon_1})$, where the mode enters the $z=1$
scaling. The subsequent evolution depends on the relation between
velocities of the inflaton and graviton characterized by the
parameters $\varkappa_1$ and $\a_1/\a_3$, as explained in the main
text. }
\label{Fig:khronon}
\end{figure}
Even if the isocurvature mode does not develop instability at late
times, it initially grows due to anti-friction, see
Eq.~(\ref{varphisup1}) and Fig.~\ref{Fig:khronon}. By the time the
growth terminates, the power spectrum of ${\cal R}$ reaches
\begin{equation}
{\cal P}_{\cal R}\Big|_{a= p/M_*}=\frac{H^2}{4\pi^2m_{k,z}M_*}
\simeq \frac{H}{4\pi^2 M_P}\;.
\end{equation}
For the validity of the linearized theory developed above we must
require that the perturbations of ${\cal R}$ do not exceed unity. Then we
obtain an upper bound on the inflationary Hubble scale,
\begin{equation}
H<4\pi^2 M_P\;.
\label{Hbound}
\end{equation}
This constraint is somewhat unexpected, as a priori HL gravity should
be applicable also at trans-Planckian energies\footnote{Recall, in
particular, that for $z=3$ the power spectra of the curvature
perturbation $\zeta$ and the gravitational waves $\gamma$ do not depend
on the Hubble scale, so their perturbative calculation does not
require sub-Planckian energies.}.
In fact, the requirement (\ref{Hbound}) may be too restrictive. It
follows from the consideration of metric perturbations
with very long wavelengths. Unlike
in GR, we cannot use a space-dependent reparameterization of
time to remove such a perturbation completely. However, {\it
space-independent} time reparameterizations are still a symmetry of
HL gravity and can be used to remove the fluctuation ${\cal R}$ at any
given point. This suggests that coupling of khronon to other physical
degrees of freedom should involve spatial derivatives and its almost
homogeneous fluctuation, even with a large amplitude, should
not have any effect locally. This property is indeed satisfied by the
Lagrangian (\ref{Lagzeta2}) describing dynamics of $\zeta$ and khronon
at super Hubble scales. We have also verified
that in a pure de Sitter universe the growth of ${\cal R}$ does not lead to
divergence of local gauge invariant observables constructed out of the
metric $h_{ij}$, such as
the extrinsic curvature $K$,
$K^i\!_j K^j\!_i$, and the spatial Ricci scalar $R$.
For instance, the linear perturbation of the
trace part of the extrinsic curvature is given by
\begin{align}
& \delta K = K - 3H =
-\frac{1- 2 \bar{\alpha}}{\bar\a} \left[ (1- \Omega_1)
\dot{{\cal R}} - \Omega_2 H {\cal R} \right] \propto a^{-(z+ \frac{3}{2})}\,.
\end{align}
These arguments indicate that the constraint (\ref{Hbound}) may be
avoided by a more careful treatment where the growth of super Hubble
khronon fluctuations is absorbed by an appropriate field
redefinition. This study is, however, beyond the scope of the present
paper.
Before concluding this section, let us describe what happens if the
slow-roll parameter $\varepsilon_1$ is the smallest quantity in the setup,
\begin{equation}
\varepsilon_1/\a_1\ll 1\;.
\label{epsalhier1}
\end{equation}
In this case the perturbations ${\cal R}$ and $\varphi$ are decoupled all
the way through the Hubble crossing down to $X \simeq \varepsilon_1/\alpha$. After that, the good
variables are $\zeta$ and $\varphi$. As before, $\zeta$ is conserved,
whereas the evolution of $\varphi$ is described by
Eqs.~(\ref{varphisup1}), (\ref{varphisup2}), (\ref{varphisup3}),
(\ref{varphisup4}). The power spectrum of $\zeta$ is determined by
matching it to the fluctuations of the inflaton and khronon at $X \simeq \varepsilon_1/\alpha$. It is easy to see that the inflaton fluctuations
dominate, so the spectrum is still given by
(\ref{Exp:powerzeta0}) (leaving aside a small correction due to the damping of the inflaton perturbations between the Hubble crossing and $X \simeq \varepsilon_1/\alpha$). Note that for $z=3$ the hierarchy
(\ref{epsalhier1}) is actually not viable, as it would imply that the
power spectrum is larger than unity, see Eq.~(\ref{Pzeta3}).
\section{Violation of consistency relation}
\label{Sec:LV}
In the previous section, we computed the power spectra of the adiabatic curvature
perturbation $\zeta$ and the primordial gravitational waves in the anisotropic
scaling regime of HL
gravity. In particular, we have shown that in the non-projectable case
$\zeta$ is conserved at super Hubble scales during inflation, despite
the presence of an isocurvature scalar perturbation. The
intuitive explanation of this conservation is that the isocurvature
mode, associated with the shift of khronon, is locally unobservable and
its interaction with $\zeta$ is suppressed by spatial
derivatives. This suggests that $\zeta$ will not be affected by the
isocurvature mode at super Hubble scales also after the end of
inflation. Indeed, conservation of $\zeta$ at super Hubble scales has
been demonstrated for rather general matter content in the low-energy
limit ($z=1$) of non-projectable HL gravity in
Refs.~\cite{APSG,Kobayashi:2009hh}. We will proceed under the
assumption that this also holds between the end of inflation and the
time when the universe enters into the isotropic scaling regime.
Below we discuss a signal of the Lifshitz scaling in the
primordial spectra.
\subsection{Consistency relation in 4D Diff invariant theories}
\label{ssec:crLI}
Before discussing the primordial spectra generated in the anisotropic
scaling regime, let us review the discussion in
theories encompassed by the Effective Field Theory (EFT) of inflation
\cite{Cheung:2007st} where the inflaton background breaks
4D Diff invariance down to
time-dependent spatial Diff. We follow Ref.~\cite{CGNV}.
Within EFT of inflation, the quadratic action for the gravitational waves is
given by
\begin{align}
S_{\gamma\gamma} = \frac{1}{8}\int{{\rm d} t\, {\rm d}^3 \bm{x} \, a^2\,
\frac{M^2_P}{c_\gamma^2} \left[(\gamma'_{ij})^2 -
c^2_\gamma (\partial_k{\gamma_{ij}})^2 \right]}\,.\label{EFTact}
\end{align}
In the presence of a time-dependent inflaton background
which breaks Lorentz invariance and time-translations, the parameters
$M_P$ and $c_\gamma$ can
deviate from their vacuum values and can vary with
time. However, one can always set these parameters to fixed values
by a redefinition of the metric.
Indeed, performing the
disformal transformation:
\begin{align}
g_{\mu\nu} \mapsto g_{\mu\nu} + (1 - c^2_\gamma(t))n_\mu n_\nu\,,
\end{align}
where $n_\mu$ is the unit vector orthogonal to the constant-inflaton slices,
and successively performing the conformal transformation to the
Einstein frame,
\begin{align}
g_{\mu\nu} \mapsto c^{-1}_\gamma(t)\frac{M_P^2(t)}{M^2_{P,0}}g_{\mu\nu}\,,
\end{align}
we can set the graviton speed $c_\gamma$ to unity and $M_P^2$ to
constant.
The equivalence between the Einstein frame and
the Jordan frame for the gravitational waves was explicitly confirmed in
Ref.~\cite{CGNV}. The price to pay is that these transformations
also alter the sector of scalar perturbations. For instance, if the
propagation
speed of the inflaton $c_s$ is 1 in the original frame, after the
above disformal transformation
which sets $c_\gamma$ to 1, the sound speed is changed into
$c_s = c^{-1}_\gamma$.
After inflation, the non-minimal coupling introduced by the inflaton
should disappear.
Therefore, it is reasonable to calculate the primordial
spectra in the Einstein frame for the gravitational waves. Then the
spectrum for the gravitational waves is given by the standard
expression
and depends only on the ratio of the inflationary Hubble scale to the Planck
mass. In addition, one obtains the well-known
consistency relation
\begin{align}
n_t = -\frac{r}{8c_s}\,,\label{n_t}
\end{align}
which relates the spectral index for the gravitational waves $n_t$ and
the tensor-to-scalar ratio $r$. (The subleading contribution to the
consistency relation in the slow-roll approximation can be found, e.g.,
in Ref.~\cite{Baumann:2014cja}.) In a Lorentz invariant theory the
velocity of any excitation cannot exceed unity, $c_s\leq 1$, which
implies a bound,
\begin{align}
\label{ntbound}
& - n_t \geq \frac{r}{8}\,.
\end{align}
This is a robust prediction of (single field) EFT of
inflation. Moreover, when $c_s$ is smaller than $1$ the equilateral
non-Gaussianity is enhanced by
$1/c_s^2$ (see, e.g., Refs.~\cite{Seery:2005wm, Baumann:2011su,
Ade:2015ava}). Thus, a deviation from equality in (\ref{ntbound})
should be accompanied by large non-Gaussianity.
\subsection{Violation of consistency relation in Ho\v rava--Lifshitz gravity}
\label{ssec:crLV}
We now discuss the primordial spectra generated in gravity with anisotropic
scaling. In this case the symmetry breaking pattern is different:
there are no 4D Diff to start with, but only the reduced
symmetry of foliation-preserving Diff, which is further broken to
time-dependent spatial Diff by the inflaton background.
The velocity of the graviton now depends on
the wavenumber $p$, so one cannot
set it to unity by the disformal transformation which
globally changes the time component of the metric.
This means that the modified dispersion relation
physically changes the spectrum of the gravitational waves.
In particular, the relation between the power spectrum ${\cal P}_\gamma$,
Eq.~(\ref{Exp:powertens}), and the inflationary Hubble rate
is no longer straightforward: it depends on the scaling exponent $z$
and other parameters of the theory. For $z=3$ the tensor power
spectrum does not depend on $H$ at all. On the other hand, a robust
prediction for $z=3$ is the vanishing of the tensor spectral index, $n_t=0$.
Using Eqs.~(\ref{Exp:powertens}) and
(\ref{Exp:powerzeta0}), at the leading order in the slow-roll
approximation, we obtain the tensor-to-scalar ratio as\footnote{
Equations (\ref{Exp:powertens}) and (\ref{Exp:powerzeta0}) directly give
\begin{align}
& r = 16 \varepsilon_1
\left( \frac{\varkappa_z}{\varkappa_{\gamma, z}} \right)^{\frac{3}{2z}}
\left( \frac{H_{p, \gamma}}{H_p} \right)^{\frac{3}{z}-1} \,. \label{Exp:r}
\end{align}
For $\varkappa_z \neq \varkappa_{\gamma, z}$, the Hubble crossing time for the
adiabatic perturbation does not necessarily coincide with that for
the gravitational waves, and the Hubble parameters at these times are
related as
\begin{align}
& \frac{H_{p, \gamma}}{H_p} \simeq \left(
\frac{\varkappa_z}{\varkappa_{\gamma,\,z}} \right)^{\frac{\varepsilon_1}{2z}}\,.
\end{align}
}
\begin{align}
& r \equiv \frac{{\cal P}_\gamma}{{\cal P}_\zeta} \simeq 16 \varepsilon_1
\left( \frac{\varkappa_z}{\varkappa_{\gamma, z}} \right)^{\frac{3}{2z}} \,. \label{Exp:rSR}
\end{align}
Exceptionally, for $\varkappa_z = \varkappa_{\gamma, z}$ the tensor-to-scalar ratio is given by the standard expression irrespective of the
value of $z$.
Using Eqs.~(\ref{Exp:nt}) and (\ref{Exp:rSR}), we obtain the modified
consistency relation for the primordial perturbations in the anisotropic scaling regime as
\begin{align}
& n_t \simeq - \frac{3- z}{z} \frac{r}{16} \left(
\frac{\varkappa_{\gamma, z}}{\varkappa_z} \right)^{\frac{3}{2z}} \,. \label{HL_nt}
\end{align}
We see that $n_t$ and $r$ are still related linearly, but the
coefficient depends on
$z$, $\varkappa_z$, and $\varkappa_{\gamma, z}$.
Clearly, this can violate the lower bound (\ref{ntbound}) on
$-n_t$ obtained in Lorentz invariant theories.
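As a quick numerical illustration of Eqs.~(\ref{Exp:rSR}) and (\ref{HL_nt}), the following sketch evaluates $r$ and $n_t$ for made-up values of $\varepsilon_1$ and of the ratio $\varkappa_z/\varkappa_{\gamma,z}$ (chosen purely for illustration, not from any fit or data), and compares $-n_t$ against the Lorentz invariant bound $r/8$:

```python
# Illustrative check of r and n_t in the anisotropic scaling regime.
# epsilon1 and kappa_ratio = varkappa_z / varkappa_{gamma,z} are made up.

def tensor_to_scalar(eps1, kappa_ratio, z):
    # r ~= 16 eps1 (kappa_z / kappa_{gamma,z})^(3/(2z))   [Eq. (Exp:rSR)]
    return 16.0 * eps1 * kappa_ratio ** (3.0 / (2.0 * z))

def tensor_tilt(r, kappa_ratio, z):
    # n_t ~= -((3-z)/z) (r/16) (kappa_{gamma,z}/kappa_z)^(3/(2z))   [Eq. (HL_nt)]
    return -((3.0 - z) / z) * (r / 16.0) * kappa_ratio ** (-3.0 / (2.0 * z))

eps1, kappa_ratio = 0.01, 2.0
for z in (1, 2, 3):
    r = tensor_to_scalar(eps1, kappa_ratio, z)
    n_t = tensor_tilt(r, kappa_ratio, z)
    print(f"z={z}: r={r:.4f}, -n_t={-n_t:.4f}, LI bound r/8={r / 8.0:.4f}")
```

For $z=3$ the tilt vanishes, while for $z<3$ with $\varkappa_z/\varkappa_{\gamma,z}>1$ one finds $-n_t < r/8$, i.e. the Lorentz invariant bound (\ref{ntbound}) is violated.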
\section{Concluding remarks}
\label{Sec:concl}
HL gravity contains an additional scalar
degree of freedom in the gravity sector, khronon, corresponding to
fluctuations of the preferred
time foliation. Therefore, a minimal model of inflation possesses two
scalar degrees of freedom: the
inflaton and khronon. These two fields are coupled gravitationally.
In the small scale limit, as usual,
the gravitational interaction is suppressed and we simply have two
decoupled Lifshitz scalar fields. Naively, one may expect that in the
large scale limit, the gravitational interaction becomes important and
these two fields start to be coupled. This is indeed the case in
the projectable version of HL gravity. The inflaton and khronon remain nearly
gapless modes that are bilinearly coupled. Then the adiabatic curvature
perturbation $\zeta$ is generically not conserved at large scales.
On the other hand, the situation is crucially different in the
non-projectable version. In the anisotropic scaling regime, khronon acquires the effective mass $m_K$, which is much
larger than the Hubble scale, well before
Hubble crossing time. It then decouples from the adiabatic
mode $\zeta$ and does not leave any impact on
the power spectrum of $\zeta$, which is conserved at super Hubble scales. The power spectrum of $\zeta$ is simply
given by that of the Lifshitz scalar with the multiplicative factor
$1/(2\varepsilon_1 M_P^2)$. The decoupled khronon rapidly oscillates,
with the amplitude of the oscillations growing
exponentially due to anti-friction. The growth persists until the mode
enters into the regime of isotropic scaling as a consequence of the
redshift of its momentum.
A more careful analysis is needed to determine whether this exponential
growth can affect observable quantities.
One remaining question is whether the decoupling between the adiabatic mode
$\zeta$ and khronon is a robust feature of
non-projectable HL gravity also beyond the restricted setup considered
in this paper.
We have focused on
the linear order in perturbations. The physical
interpretation presented in Sec.~\ref{ssec:NP} suggests that the decoupling
will also persist at non-linear orders. We
postpone an explicit analysis of this issue, as well as of primordial
non-Gaussianity, to a future work. In this
paper we assumed the minimal coupling of the inflaton to the gravity
sector. One may wonder whether a non-minimal interaction can prevent the
decoupling of khronon. Recall that khronon gets gapped due to a
peculiar structure of the
coefficient in front of the (quadratic) time
derivative term in the action. Thus, to make khronon gapless, the
non-minimal coupling should modify the
time derivative terms. The only contribution that can
change the time derivative terms under the assumption of
foliation-preserving Diff and time reversal symmetry is the term with
$K \dot{\Phi}/N$. However, this can be removed by a redefinition of
the metric $h_{ij} \to \Omega^2(\Phi) h_{ij}$ and
$N \to \Omega^3(\Phi) N$. Therefore, we expect that the decoupling
between $\zeta$ and khronon takes place generically in the
non-projectable version of HL gravity with the time reversal symmetry in
the anisotropic scaling regime. It may be interesting to study if this
decoupling takes place also in the case when the time reversal symmetry is broken,
e.g. by a term with $\sqrt{h}\, \dot{\Phi}/N$.
We also pointed out that the consistency relation between the tensor-to-scalar
ratio $r$ and the tensor spectral index $n_t$, which holds in the
general single field EFT of inflation, can be violated by the primordial
perturbations generated during the anisotropic scaling regime. If the
primordial gravitational waves are detected, the value of $r$ will give a
lower bound on $-n_t$ in Lorentz invariant theories. A violation of
this bound would indicate violation of Lorentz invariance in the early universe.
\acknowledgments
Y.~U. would like to thank Jaume Garriga for his fruitful
comments. Y.~U. would like to thank CERN for the hospitality
during the work on this project. S.~A. is supported by
Japan Society for the Promotion of Science (JSPS) under Contract
No.~17J04978 and in part by
Grant-in-Aid for Scientific Research on Innovative Areas under Contract
No.~15H05890. S.~S. is supported by the RFBR grant No.~17-02-00651.
Y.~U. is supported by JSPS Grant-in-Aid for Research Activity Start-up
under Contract No.~26887018
and Grant-in-Aid for Scientific Research on Innovative Areas under
Contract No.~16H01095. Y.~U. is also supported in part by Building of
Consortia for the Development of Human Resources in
Science and Technology, Daiko Foundation, and the National Science Foundation under Grant No. NSF
PHY11-25915.
using FluentFTP.Servers;
using FluentFTP.Client.BaseClient;
namespace FluentFTP.Client.Modules {
/// <summary>
/// All servers with server-specific handling and support are listed here.
/// It's possible to connect to other FTP servers too.
///
/// To add support for another standard FTP server:
/// 1) Add a new enum in the `FtpServer` enum
/// 2) Add a new class extending `FtpBaseServer` under the `Servers.Handlers` NS
/// 3) Create a new instance of your class in `FtpHandlerIndex.AllServers`
///
/// To support a custom FTP server you only need to extend `FtpBaseServer`
/// and set it on your client.ServerHandler before calling Connect.
/// </summary>
internal static class ServerModule {
/// <summary>
/// Detect the FTP Server based on the welcome message sent by the server after getting the 220 connection command.
/// It's the primary detection method.
/// </summary>
public static FtpServer DetectFtpServer(BaseFtpClient client, FtpReply handshakeReply) {
var serverType = client.ServerType;
if (handshakeReply.Success && (handshakeReply.Message != null || handshakeReply.InfoMessages != null)) {
var message = (handshakeReply.Message ?? "") + (handshakeReply.InfoMessages ?? "");
// try to detect any of the servers
foreach (var server in FtpHandlerIndex.AllServers) {
if (server.DetectByWelcome(message)) {
serverType = server.ToEnum();
break;
}
}
// trace it
if (serverType != FtpServer.Unknown) {
((IInternalFtpClient)client).LogLine(FtpTraceLevel.Info, "Status: Detected FTP server: " + serverType.ToString());
}
}
return serverType;
}
/// <summary>
/// Get a default FTP Server handler based on the enum value.
/// </summary>
public static FtpBaseServer GetServerHandler(FtpServer value) {
if (value != FtpServer.Unknown) {
foreach (var server in FtpHandlerIndex.AllServers) {
if (server.ToEnum() == value) {
return server;
}
}
}
return null;
}
/// <summary>
/// Detect the FTP server's operating system based on the response to the SYST connection command.
/// It's a fallback method used when the server did not send an identifying welcome message.
/// </summary>
public static FtpOperatingSystem DetectFtpOSBySyst(BaseFtpClient client) {
var serverOS = client.ServerOS;
// detect OS type
var system = client.SystemType.ToUpper();
if (system.StartsWith("WINDOWS")) {
// Windows OS
serverOS = FtpOperatingSystem.Windows;
}
else if (system.Contains("Z/OS")) {
// IBM z/OS, message can be one of the two, depending on realm (and server config)
// Syst message: "215 MVS is the operating system of this server. FTP Server is running on z/OS."
// Syst message: "215 UNIX is the operating system of this server. FTP Server is running on z/OS."
//**
//** Important: Keep this z/OS IN FRONT of the UNIX entry, both contain "UNIX".
//**
serverOS = FtpOperatingSystem.IBMzOS;
}
else if (system.Contains("UNIX") || system.Contains("AIX")) {
// Unix OS
serverOS = FtpOperatingSystem.Unix;
}
else if (system.Contains("VMS")) {
// VMS or OpenVMS
serverOS = FtpOperatingSystem.VMS;
}
else if (system.Contains("OS/400")) {
// IBM OS/400
serverOS = FtpOperatingSystem.IBMOS400;
}
else if (system.Contains("SUNOS")) {
// SUN OS
serverOS = FtpOperatingSystem.SunOS;
}
else {
// unknown OS
serverOS = FtpOperatingSystem.Unknown;
}
return serverOS;
}
/// <summary>
/// Detect the FTP Server based on the response to the SYST connection command.
/// It's a fallback method used when the server did not send an identifying welcome message.
/// </summary>
public static FtpServer DetectFtpServerBySyst(BaseFtpClient client) {
var serverType = client.ServerType;
// detect server type
if (serverType == FtpServer.Unknown) {
// try to detect any of the servers
foreach (var server in FtpHandlerIndex.AllServers) {
if (server.DetectBySyst(client.SystemType)) {
serverType = server.ToEnum();
break;
}
}
// trace it
if (serverType != FtpServer.Unknown) {
((IInternalFtpClient)client).LogStatus(FtpTraceLevel.Info, "Detected FTP server: " + serverType.ToString());
}
}
return serverType;
}
}
}
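The detection flow documented above — iterate the registry of handlers, let each probe the welcome banner, and stop at the first match — can be sketched in Python as follows. The handler classes and the banner substrings they probe for are illustrative stand-ins, not FluentFTP's actual detection rules:

```python
# Sketch of the welcome-banner detection loop, in Python. The handler
# classes and the banner substrings they probe for are illustrative
# stand-ins, not FluentFTP's actual rules.

class BaseServer:
    def detect_by_welcome(self, message: str) -> bool:
        return False

class VsftpdServer(BaseServer):
    def detect_by_welcome(self, message: str) -> bool:
        return "vsFTPd" in message

class ProFtpdServer(BaseServer):
    def detect_by_welcome(self, message: str) -> bool:
        return "ProFTPD" in message

ALL_SERVERS = [VsftpdServer(), ProFtpdServer()]  # registry, like FtpHandlerIndex.AllServers

def detect_ftp_server(welcome: str) -> str:
    # the first handler whose probe matches wins, mirroring the C# foreach/break
    for server in ALL_SERVERS:
        if server.detect_by_welcome(welcome):
            return type(server).__name__
    return "Unknown"

print(detect_ftp_server("220 (vsFTPd 3.0.3)"))   # VsftpdServer
```

Supporting a new server in this sketch means adding one class and one registry entry, which is the same extension path the C# comment describes.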
{"url":"https:\/\/www.physicsforums.com\/threads\/vector-geometry-toughie.225056\/","text":"# Vector\/Geometry toughie\n\n1. Mar 29, 2008\n\n### emma3001\n\nFind parametric equations of a line that intersects both l1 and l2 at right angles:\n\nl1= [x,y,z]=[4,8,-1] + t[2,3,-4]\nl2= x-7\/-6 = y-2\/1 = z+1\/2\n\nI found the cross product of l1 and l2 to get a normal vector perpendicular to both, which is [10, 20, 20] or n=[1,2,1]\n\nNow I am not sure how to get the parametric equation for the line because I cannot just use one of the vectors from above, like [4,8,-1] because it is not necessarily on l3.\n\nLast edited: Mar 29, 2008\n2. Mar 30, 2008\n\n### dynamicsolo\n\nHow about this? You already know now that a line L3 parallel to the vector <1,2,1> must intersect both L1 and L2. Picture your vector sticking up from, say, L1 at a point (x,y,z). For any value of the parameter t, the prospective line L3 will have the equation\n\n(x,y,z) + <1,2,1>\u00b7v = [ (4,8,-1) + <2,3,-4>\u00b7t ] + <1,2,1>\u00b7v ,\n\nwhere v is the parameter on line L3. Now, for some value of t, as we slide our candidate L3 along, it's supposed to intersect line L2. If we call its parameter s, this intersection will be given by\n\n[ (4,8,-1) + <2,3,-4>\u00b7t ] + <1,2,1>\u00b7v = (7,2,-1) + <-6,1,2>\u00b7s ,\n\nafter converting L2's symmetric equation to parametric equations. We now have a vector equation corresponding to a system of three equations in three unknowns. (Not much worse than what you have to do for the skewness\/intersection test for two lines in three-dimensional space.) Once you have\nt, s, and v , you will know the points where each pair L1, L3 and L2, L3 meet and the value of parameter v where L3 intersects L2. 
This will give a complete description of the situation, including information for writing the parametric form for L3.\n\nIt also occurred to me that if L3 meets L1 and L2 at right angles, it must meet those lines at the points on L1 and L2 that are at the minimal separation between those lines, since the segment of L3 linking them marks the perpendicular distance between L1 and L2. But the reckoning for that looks hideous...\n\nLast edited: Mar 30, 2008","date":"2017-01-18 22:54:13","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8423681259155273, \"perplexity\": 913.0062933509512}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-04\/segments\/1484560280364.67\/warc\/CC-MAIN-20170116095120-00273-ip-10-171-10-70.ec2.internal.warc.gz\"}"}
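The three-equations-in-three-unknowns system set up in the reply above can be solved exactly with rational arithmetic. Here is a small sketch (standard library only) for the specific lines L1 and L2 of this thread; the component equations follow directly from equating the two parametrizations:

```python
from fractions import Fraction as F

# L1: (4,8,-1) + t*(2,3,-4);  L2: (7,2,-1) + s*(-6,1,2)
# L3 is parallel to <1,2,1> (the cross product found above), so solve
#   (4,8,-1) + t*(2,3,-4) + v*(1,2,1) = (7,2,-1) + s*(-6,1,2)
# componentwise, a 3x3 linear system in (t, s, v):
#    2t + 6s +  v =  3
#    3t -  s + 2v = -6
#   -4t - 2s +  v =  0

def solve3(A, b):
    # Gauss-Jordan elimination with exact rational arithmetic
    M = [[F(x) for x in row] + [F(bi)] for row, bi in zip(A, b)]
    for col in range(3):
        piv = next(r for r in range(col, 3) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(3):
            if r != col and M[r][col] != 0:
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    return [M[i][3] for i in range(3)]

t, s, v = solve3([[2, 6, 1], [3, -1, 2], [-4, -2, 1]], [3, -6, 0])
foot1 = [F(p) + t * d for p, d in zip((4, 8, -1), (2, 3, -4))]   # L3 meets L1 here
foot2 = [F(p) + s * d for p, d in zip((7, 2, -1), (-6, 1, 2))]   # L3 meets L2 here
print(t, s, v)      # -57/70 69/70 -9/7
print(foot1, foot2)
```

With the two feet in hand, L3 can be written parametrically as foot1 + v·⟨1,2,1⟩, exactly as described in the reply.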
{"url":"https:\/\/homework.cpm.org\/category\/CON_FOUND\/textbook\/a2c\/chapter\/8\/lesson\/8.1.2\/problem\/8-25","text":"### Home > A2C > Chapter 8 > Lesson 8.1.2 > Problem8-25\n\n8-25.\n\nWhat is the domain of the entire graph of $h(\u03b8)=\\sin\u03b8$? Justify your reasoning.\n\nDomain: all real numbers","date":"2020-08-04 14:38:55","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 1, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5604997873306274, \"perplexity\": 3247.3165377924865}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-34\/segments\/1596439735867.94\/warc\/CC-MAIN-20200804131928-20200804161928-00548.warc.gz\"}"}
package org.springframework.cloud.stream.tuple;
import org.springframework.core.convert.converter.Converter;
/**
* @author David Turanski
*
*/
public class TupleStringMarshaller {
private final Converter<Tuple, String> tupleToStringConverter;
private final Converter<String, Tuple> stringToTupleConverter;
public TupleStringMarshaller(Converter<Tuple, String> tupleToStringConverter,
Converter<String, Tuple> stringToTupleConverter) {
this.tupleToStringConverter = tupleToStringConverter;
this.stringToTupleConverter = stringToTupleConverter;
}
public Tuple toTuple(String source) {
return stringToTupleConverter.convert(source);
}
public String fromTuple(Tuple source) {
return tupleToStringConverter.convert(source);
}
}
package model
import "github.com/af83/edwig/logger"
type ModelId string
type ModelInstance interface {
ObjectIDConsumerInterface
modelId() ModelId
}
type Model interface {
Date() Date
Lines() Lines
Situations() Situations
StopAreas() StopAreas
StopVisits() StopVisits
VehicleJourneys() VehicleJourneys
Operators() Operators
}
type MemoryModel struct {
stopAreas *MemoryStopAreas
stopVisits *MemoryStopVisits
vehicleJourneys *MemoryVehicleJourneys
lines *MemoryLines
date Date
situations *MemorySituations
operators *MemoryOperators
SMEventsChan chan StopMonitoringBroadcastEvent
GMEventsChan chan GeneralMessageBroadcastEvent
}
func NewMemoryModel() *MemoryModel {
model := &MemoryModel{}
model.date = NewDate(DefaultClock().Now())
lines := NewMemoryLines()
lines.model = model
model.lines = lines
situations := NewMemorySituations()
situations.model = model
model.situations = situations
model.situations.broadcastEvent = model.broadcastGMEvent
stopAreas := NewMemoryStopAreas()
stopAreas.model = model
model.stopAreas = stopAreas
model.stopAreas.broadcastEvent = model.broadcastSMEvent
stopVisits := NewMemoryStopVisits()
stopVisits.model = model
model.stopVisits = stopVisits
model.stopVisits.broadcastEvent = model.broadcastSMEvent
vehicleJourneys := NewMemoryVehicleJourneys()
vehicleJourneys.model = model
model.vehicleJourneys = vehicleJourneys
operators := NewMemoryOperators()
operators.model = model
model.operators = operators
return model
}
func (model *MemoryModel) SetBroadcastSMChan(broadcastSMEventChan chan StopMonitoringBroadcastEvent) {
model.SMEventsChan = broadcastSMEventChan
}
func (model *MemoryModel) SetBroadcastGMChan(broadcastGMEventChan chan GeneralMessageBroadcastEvent) {
model.GMEventsChan = broadcastGMEventChan
}
func (model *MemoryModel) broadcastSMEvent(event StopMonitoringBroadcastEvent) {
select {
case model.SMEventsChan <- event:
default:
logger.Log.Debugf("BroadcasterManager StopMonitoringBroadcastEvent queue is full")
}
}
func (model *MemoryModel) broadcastGMEvent(event GeneralMessageBroadcastEvent) {
select {
case model.GMEventsChan <- event:
default:
logger.Log.Debugf("BroadcasterManager GeneralMessageBroadcastEvent queue is full")
}
}
func (model *MemoryModel) Reload(referentialSlug string) *MemoryModel {
model = NewMemoryModel()
model.date = NewDate(DefaultClock().Now())
model.stopAreas.Load(referentialSlug)
model.lines.Load(referentialSlug)
model.operators.Load(referentialSlug)
return model
}
func (model *MemoryModel) Clone() *MemoryModel {
clone := NewMemoryModel()
clone.stopAreas = model.stopAreas.Clone(clone)
clone.lines = model.lines.Clone(clone)
clone.date = NewDate(DefaultClock().Now())
return clone
}
func (model *MemoryModel) Date() Date {
return model.date
}
func (model *MemoryModel) Situations() Situations {
return model.situations
}
func (model *MemoryModel) StopAreas() StopAreas {
return model.stopAreas
}
func (model *MemoryModel) StopVisits() StopVisits {
return model.stopVisits
}
func (model *MemoryModel) VehicleJourneys() VehicleJourneys {
return model.vehicleJourneys
}
func (model *MemoryModel) Lines() Lines {
return model.lines
}
func (model *MemoryModel) Operators() Operators {
return model.operators
}
func (model *MemoryModel) NewTransaction() *Transaction {
return NewTransaction(model)
}
// TEMP: See what to do with errors
func (model *MemoryModel) Load(referentialSlug string) error {
err := model.stopAreas.Load(referentialSlug)
if err != nil {
logger.Log.Debugf("Error while loading StopAreas: %v", err)
}
err = model.lines.Load(referentialSlug)
if err != nil {
logger.Log.Debugf("Error while loading Lines: %v", err)
}
err = model.vehicleJourneys.Load(referentialSlug)
if err != nil {
logger.Log.Debugf("Error while loading VehicleJourneys: %v", err)
}
err = model.stopVisits.Load(referentialSlug)
if err != nil {
logger.Log.Debugf("Error while loading StopVisits: %v", err)
}
err = model.operators.Load(referentialSlug)
if err != nil {
logger.Log.Debugf("Error while loading Operators: %v", err)
}
return nil
}
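The `broadcastSMEvent`/`broadcastGMEvent` helpers above use Go's `select` with a `default` clause to drop an event (and log) when the channel is full, instead of blocking the model. A rough Python analogue of that non-blocking send, for illustration only (not part of this codebase):

```python
import logging
import queue

logging.basicConfig(level=logging.DEBUG)

def broadcast(events: queue.Queue, event) -> bool:
    """Non-blocking send: drop the event (and log) when the queue is full."""
    try:
        events.put_nowait(event)
        return True
    except queue.Full:
        logging.debug("broadcast queue is full, dropping event")
        return False

q = queue.Queue(maxsize=1)
print(broadcast(q, "StopMonitoringBroadcastEvent"))   # True: queue had room
print(broadcast(q, "GeneralMessageBroadcastEvent"))   # False: full, dropped
```

As in the Go version, the producer never stalls on a slow consumer; overflow events are simply logged and discarded.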
Q: Facets of binary polytopes I have a problem that seems like it should have a slick, elegant solution but I'm having trouble finding one.
I'm working with convex polytopes with vertices that are subsets of $\{-1,1\}^n$. When considering the facet-defining hyperplanes
$$a_1 x_1 + \cdots + a_n x_n + b = 0$$
I'd like to claim that we may assume the coefficients $a_i$ lie in $\{0, \pm 1\}$.
I have a clunky induction argument in mind, but it feels like there may be a shrewd linear algebra trick for something like this.
Thanks!
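Not an answer, but one way to experiment with the claim for small $n$: brute-force the facet-defining hyperplanes of a vertex set in $\{-1,1\}^3$ by testing each plane through three vertices and clearing common factors. This is a sketch for exploration only, not a proof:

```python
# Brute-force facet enumeration for a 3D point set with integer
# coordinates (exploration sketch, standard library only).
from itertools import combinations
from math import gcd

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def facets(points):
    """Facet-defining tuples (a1, a2, a3, b) with a.x + b >= 0 on all points."""
    out = set()
    for p, q, r in combinations(points, 3):
        u = tuple(q[i] - p[i] for i in range(3))
        w = tuple(r[i] - p[i] for i in range(3))
        a = cross(u, w)
        if a == (0, 0, 0):
            continue  # collinear triple, no plane
        b = -sum(ai * pi for ai, pi in zip(a, p))
        vals = [sum(ai * xi for ai, xi in zip(a, x)) + b for x in points]
        for sign in (1, -1):  # orient so every vertex lies on the >= 0 side
            if all(sign * val >= 0 for val in vals):
                g = gcd(gcd(abs(a[0]), abs(a[1])), gcd(abs(a[2]), abs(b)))
                out.add(tuple(sign * c // g for c in (*a, b)))
    return out

cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
fs = facets(cube)
print(sorted(fs))  # for the full cube, every coefficient lands in {0, +-1}
```

For the full cube this recovers the six coordinate facets with coefficients in $\{0,\pm 1\}$; feeding in proper subsets of $\{-1,1\}^3$ (or extending to higher $n$) is where the interesting tests of the claim would live.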
The Four Prongs of Bill O'Brien's Complex Legacy
Sean Pendergast | October 15, 2020 | 4:00am
By the end, Bill O'Brien was pretty beleaguered and running out of answers.
Photo by Eric Sauseda
When we eventually have the stomach to look back on the Bill O'Brien Era here in Houston, I think it will be viewed as one of the more remarkable (not really in a good way) progressions of an individual hoarding power within an organization in NFL history. It will be a college course in the "Peter Principle," in which the theory is that people rise to the level of their incompetence.
O'Brien was a decent NFL head coach, who couldn't stand sharing power nor decision making with anybody, so eventually he was able to take advantage of a power vacuum with the Texans, in the wake of the death of Bob McNair, to gather all of the power within the football side of the organization for himself. Coaching, personnel, drafting, free agency, all of it. All for Bill.
So when we try to answer the question "What is Bill O'Brien's legacy in Houston?" to me, there are four prongs to that legacy, and it's not all good. Far from it. Yeah, there are four division titles in six seasons, but there is so much more. Let's examine, shall we?
PLAYOFF BILL
There's no denying that Bill O'Brien was pretty good, better than most head coaches actually, at MAKING the playoffs. He just wasn't very good at actually WINNING in the playoffs. Over Bill O'Brien's tenure as Texans' head coach, the only head coaches to win more division titles were Bill Belichick and Andy Reid. That's pretty good company. Problem is that both of them won Super Bowls during that time. O'Brien peaked with two divisional round appearances where he lost by 18 points to the Patriots (on their way to a Super Bowl win, in Houston, of all places) and infamously blew a 24-0 lead to the Chiefs (by HALFTIME). O'Brien's four playoff losses were by an average of nearly 21 points. His nickname will never be "Big Game Bill".
TEAPOT BILL
Bill O'Brien arrived in town in 2014 with the nickname "Teapot," given to him in New England for his tendency to blow his stack due to his legendary temper. We all saw O'Brien blow up at Tom Brady when he was the offensive coordinator for the Patriots, and we kind of thought "Yeah! that guy has BALLS! Arguing with the G.O.A.T.! YOU GO, BILL!"
Then, he was hired here, and we got press conferences like THIS one in his first year as a head coach, and we thought "OK this is kind of fun, but he may need to dial it back a little, right?"
By last season, what turned out to be O'Brien's final season, he was shouting down fans like a deranged maniac....
TRADER BILL
It was one thing to get Bill O'Brien, the head coach. O'Brien the head coach had his good points and his bad points, but he was, at the very least, hovering around being average at his job. He was a middle-of-the-pack head coach, and with a good quarterback, you can win a Super Bowl with a middle-of-the-pack head coach. The problems really began to increase exponentially when O'Brien was given the reins as general manager in the summer of 2019. The trades he made were so outlandish and crippling, and the contracts he gave out were so devastating to the salary cap, that when any big name player became available, the Texans were immediately mentioned as a possibility because it was believed there was no deal O'Brien WOULDN'T make, if asked. The damage he has done to the Texans roster has only begun to reveal itself in the team's 1-4 start. The next GM will be cleaning up O'Brien's mess for a long, long time.
WOKE BILL
This post has reflected on a lot of the negative of the Bill O'Brien experience. I'm not sure what it says about his tenure as head coach (and in the later years, GM) that his finest moment probably came in the aftermath of the murder of George Floyd, when he released this statement, where he was clearly speaking from the heart:
One thing I will say positively about O'Brien is that he always had the backs of his players in situations like this (and like 2017, when several of them were up in arms over Bob McNair's "inmates running the prison" comment). J.J. Watt, who seemed to despise O'Brien by the end of his tenure here, even said last week that O'Brien had his players' backs. So there's one in the "positive column" for O'Brien.
Listen to Sean Pendergast on SportsRadio 610 from 6 a.m. to 10 a.m. weekdays. Also, follow him on Twitter at twitter.com/SeanTPendergast and like him on Facebook at facebook.com/SeanTPendergast.
Sean Pendergast is a contributing freelance writer who covers Houston area sports daily in the News section, with periodic columns and features, as well. He also hosts afternoon drive on SportsRadio 610, as well as the post game show for the Houston Texans.
package com.loyagram.android.campaignsdk.ui;
import android.content.Context;
import android.content.res.TypedArray;
import android.graphics.Typeface;
import android.support.v7.widget.AppCompatTextView;
import android.util.AttributeSet;
import com.loyagram.android.campaignsdk.R;
/**
* Custom Spinner text for Language selection.
*/
public class SpinnerTextVIew extends AppCompatTextView {
public SpinnerTextVIew(Context context) {
super(context);
}
public SpinnerTextVIew(Context context, AttributeSet attrs) {
super(context, attrs);
setAttributeSet(context, attrs);
}
private void setAttributeSet(Context ctx, AttributeSet attrs) {
TypedArray a = ctx.obtainStyledAttributes(attrs, R.styleable.TextViewSpinner);
String customFont = a.getString(R.styleable.TextViewSpinner_customFont);
setCustomFont(ctx, customFont);
a.recycle();
}
public boolean setCustomFont(Context ctx, String asset) {
Typeface tf;
try {
tf = Typeface.createFromAsset(ctx.getAssets(), asset);
} catch (Exception e) {
return false;
}
setTypeface(tf);
return true;
}
}
package com.google.code.yanf4j.test.unittest.utils;
import java.util.NoSuchElementException;
import java.util.Queue;
import com.google.code.yanf4j.util.SimpleQueue;
import junit.framework.TestCase;
public class QueueTest extends TestCase {
private Queue<String> queue;
@Override
protected void setUp() throws Exception {
queue = new SimpleQueue<String>();
}
public void testADD() {
assertEquals(0, queue.size());
assertTrue(queue.isEmpty());
queue.add("a");
assertEquals(1, queue.size());
assertFalse(queue.isEmpty());
queue.add("a");
assertEquals(2, queue.size());
assertFalse(queue.isEmpty());
queue.add("b");
assertEquals(3, queue.size());
assertFalse(queue.isEmpty());
}
public void testOffer() {
assertEquals(0, queue.size());
assertTrue(queue.isEmpty());
queue.offer("a");
assertEquals(1, queue.size());
assertFalse(queue.isEmpty());
queue.offer("a");
assertEquals(2, queue.size());
assertFalse(queue.isEmpty());
queue.offer("b");
assertEquals(3, queue.size());
assertFalse(queue.isEmpty());
}
public void testPoll() {
assertNull(queue.poll());
queue.add("a");
assertEquals("a", queue.poll());
assertNull(queue.poll());
queue.add("a");
queue.add("b");
assertEquals("a", queue.poll());
assertEquals("b", queue.poll());
assertNull(queue.poll());
}
public void testPeek() {
assertNull(queue.peek());
queue.add("a");
assertEquals("a", queue.peek());
queue.add("b");
assertEquals("a", queue.peek());
queue.add("c");
assertEquals("a", queue.peek());
queue.poll();
assertEquals("b", queue.peek());
queue.poll();
assertEquals("c", queue.peek());
queue.poll();
assertNull(queue.peek());
}
public void testRemove() {
try {
this.queue.remove();
fail();
} catch (NoSuchElementException e) {
}
queue.add("a");
assertEquals("a", queue.remove());
try {
this.queue.remove();
fail();
} catch (NoSuchElementException e) {
}
queue.add("b");
queue.add("c");
assertEquals("b", queue.remove());
assertEquals("c", queue.remove());
try {
this.queue.remove();
fail();
} catch (NoSuchElementException e) {
}
}
@Override
protected void tearDown() throws Exception {
queue.clear();
assertEquals(0, queue.size());
assertTrue(queue.isEmpty());
}
}
5535 Mills Civic Parkway, West Des Moines, IA 50266
wdm.tonicbars.com
Bar, Lounge, Sports Bar, Cigar Friendly
by R Brock 4 years ago
The best in all of the world. Been in 23 countries none can compare to the friendship, the service and the overall relationship that a lounge/bar has to offer. Try it for a week you will understand what I'm saying.
by cambohooker 11 years ago
This is a cool place that gets really packed on the weekends. This is more of an upscale bar. They have a good selection of martinis if you're into that. They used to be known for their cigar selection but with the state smoking ban, i'm not sure what's going to happen with that. The drinks can be pricey.
by nevesis 11 years ago
Cool place -- great atmosphere and a classy, upscale crowd. The only caveat is that many of the patrons are part of the over 35 crowd.
by straight money 11 years ago
I love this bar. I've gone there every Wednesday since I moved back to Des Moines. I live on the other side of town and don't mind driving all the way out there to enjoy a fun environment where I don't have to worry about being surrounded by people who don't know how to just sit down and enjoy the beautiful surroundings. I can go here with my friends and really have a good time because you are surrounded by classy people.
by BENCHWARMER 12 years ago
By far the best bar in the Des Moines area. I didn't even know it existed until the City Views ad came out a week or so ago on the best bars in Des Moines. The bartenders and cocktail waitresses were more than helpful. It was a Wednesday night and the place was packed. I just can't get over how nice of a place it was; the decor is out of this world, like something you would find in Chicago or New York. They definitely went a step or two above. I know this is where I will be hanging out from now on. They even offered to call a cab or give me a ride home! Thank you, Tonic staff.
by dsmjay 12 years ago
Talk about missed opportunities! This should have been a cool place to go, but isn't. Who runs this place, Denny Arthur? I've seen a younger crowd at the local nursing home. Maybe it's a cigar manufacturer; the little guy behind the bar wouldn't stop trying to sell us his overpriced cigars. As for the music, it's the worst selection in DSM. One night they played one reggae CD all night, nothing else. Thank god Cabaret opened next door, so normal people in the neighborhood have a place to drink.
by T 13 years ago
I thought the atmosphere was decent, the beer prices were fair, and the service was great. We sat upstairs (not outside). I thought the top floor atmosphere was a little bland/boring, otherwise it is a place I will go again.
by Roberta 13 years ago
Great location. We sat on the patio and ordered drinks during a weekday Happy Hour. Margaritas were terrible, rather bitter in flavor. Beer was OK, but the waitress didn't have change and forgot to bring it back to one patron twice. He had to point out to her that she failed to return his change!! We did get takeaway food from Wok Fuzion and ate on the patio; we appreciated that courtesy. I doubt we will return, but it was mediocre at worst!
by Tom Brazelton 13 years ago
My two star rating is generous only because I think this is an interesting location and I want it to do better.
We went maybe a week after they first opened and things did not go smoothly. Drinks took forever to get to us, and when we called for the check, it took even longer. Then the girl couldn't find our receipt to sign off on the credit card. She "thought" we were charged, though...
We were so frustrated we just left.
I've been kind of itching to go back since West Glen started filling in a little more. I think if the owners shelved the sports bar idea and just said, "Hey, it's a classy joint," there would probably be less confusion.
by Josh 13 years ago
Just because a bar has TVs doesn't make it a sports bar. Just another bar with big flat screens thinking they're a sports bar. Prices are too high for a sports bar, and what kind of sports bar doesn't serve pitchers? The only reason I'd go back is if I were on that side of town and wanted to watch a two-minute drill. Other than that, it's a lounge at best. Looks like the owners tried to duplicate a Ducktail Lounge for the Jordan Creek area. Overall, prices are too high: $10 for 3 beers and $27 for four shots, ouch!
by Tom 13 years ago
Great bar with a great atmosphere and good happy hour prices. Plus the best looking women I have seen at a bar in Des Moines. I just guess some of us think it is okay to show up and watch a game and leave the jersey at home.
by nathan 13 years ago
So I read the review and this is NOT a sports bar. It is a yuppie crowd and that's what you will find there. Of all the TV's in the place 3 had sports going on and the others had FOX regular programming. The bartenders don't really have their feet set quite yet. And apparently you just walk in because the door guy only checks ID's if you walk up to him. I would call this place a mix of lounge and I don't know what else. It really has no identity as a bar. The only reason it got this good of a rating is because everyone there was good looking and it was clean. I'd like to check it out on a Sunday afternoon and see who shows up in jerseys.
Tips for Tonic from foursquare users
Collin C. 8 years ago
cold coors light and happy peeps.
Chris M. 8 years ago
McCauley uses his interest free Von Maur charge card to buy the store out of Robert Graham shirts.
Johnny Krohn wears six inch platform shoes to be as tall as his average customer. If you're lucky, he may sport his tongue ring that he wore while going to school at Dowling Catholic High School.
Tuesdays are real deece. The Tiff and Holly show can not be beat...
Joe F. 8 years ago
Help MSRP Matt pay his dry cleaning bill by leaving a good tip.....kitten!
Zachary B. 8 years ago
This is a nice chill place. We need to have more like it in Des Moines.
Sean G. 8 years ago
Toy soldiers on the speaker makes me dream of Brett Fine in a speedo smokin a cigar while krohn does shots with 3 inch
brandon n. 8 years ago
Ya Bud let's have some !
Bailey B. 9 years ago
Matt basart is the best bartender. Go upstairs on the weekends. You will not be disappointed.
Matt mcCauley gives handjobs to frequent customers
Tonic is a West Glen bar near Target and Jordan Creek Town Center off of Mills Civic Parkway.
The owners themselves had trouble describing what the bar would be like; they said there is nothing like it in Des Moines and claimed it would be classy, fun, and relaxing. They also hesitantly described it as an upscale sports bar, which is probably due to the 10 huge TVs.
However there isn't a lot of sports paraphernalia and typical sports bar stuff; instead you'll find leather furniture, dark wood, 2 huge bars (1 on each level), 2 patios, and a mezzanine that overlooks the ground floor. It really is a classy place.
Due to its location and style, Tonic is definitely a yuppie bar with a yuppie crowd, drinks aren't cheap, and some of the bartenders don't exactly know what they're doing yet, but all-in-all, it's a nice place.
We've only been on a Friday night. I'd like to check it out during a football game, or sometime during the week, to see how it transforms.
So you're interested in a Cancer relationship? Try to observe the moon regularly; it's a good way to track the ever-changing moods and emotions of this zodiac sign.
These moods are rarely hidden, despite the hard shell covering their emotional selves; they appear on their ever-shifting faces.
It's important to remember that the Moon isn't really changing; its apparent changes have to do with its relationship with the Sun and the Earth. The surrounding atmosphere, the conditions in the environment and the emotions with which you approach Cancer all affect them and their responses to you, but they remain pretty much the same person throughout all of this.
The good news is that they'll always be the person that you like, especially if you give them the opportunity to see to your needs.
One of the most amazing traits of a Cancer relationship is how much fun they can be. When they're not full of laughs themselves, they are laughing with you at whatever comedic situation the two of you find yourselves in.
Cancers are rarely spoil-sports or party poopers; when they are in a good mood, they can really light up the atmosphere with antics and jokes that may be quite surprising because of their normally quiet and peaceful moods.
The quality of reflection borne by their planet, the Moon, has an ample metaphor in their ability to shine when appealing to the deep emotions and ideals in others; they are incredibly sensitive to others, and often use this sensitivity to create an intimate atmosphere that usually appeals to their friends and mates like no other.
Even in old age, a Cancer person seems to know exactly what's needed to lift your mood or satisfy your palate.
The hardest part of having a Cancer friend is that when they are in a depressive mood or in bad health, their feelings are extremely contagious. They can throw a wet blanket on your warm disposition or rain on your parade so quickly that it's easy to think they are doing it on purpose.
It's also true that they are easily hurt by others; there's nothing sadder than watching a Cancer trying to put on a happy face when they find themselves in the presence of someone who has hurt them.
Cancer may be the most difficult sign to break off a relationship with and expect to remain friends; they have a really hard time letting go of both old joys and old wounds.
In ancient times, the Water Signs were all said to be fertile, and Cancer's ability to nurture and care for others and their feelings is legendary. They can quickly size up your mood and come up with the solution almost instantaneously.
Some folks in a Cancer relationship may even feel a bit smothered by the kind of nurturing attention lavished on them, but it's hard to find fault with someone who has such a pleasant way of making you feel at home. Never worry about security when close to a Cancer; they are very good at making sure their loved ones are well taken care of.
\section{Introduction}
In the classical infinitesimal model, a quantitative trait is
expressed as the sum of a genetic and a non-genetic (environmental)
component and the genetic component of offspring traits within a family
follows a normal distribution around the average of the parents' trait
values, and has a variance that is independent of the trait values of
the parents. With inbreeding, the variance decreases in proportion to
relatedness.
When trait values are determined by the sum of a large number of Mendelian
factors, each of small effect, as we show
in Barton et al.~(2017),
\nocite{barton/etheridge/veber:2017}
one can justify the infinitesimal
model as a limit of Mendelian inheritance.
Crucially, the results of Barton et al.~(2017)
\nocite{barton/etheridge/veber:2017}
show that the evolutionary forces such as random drift and
population structure are captured by the pedigree; conditioning
on that pedigree, and trait values in the population in
all generations before the present,
the within family distributions in the present generation
will be determined by a
multivariate normal, with variance determined by that in
the ancestral population and identities that can be deduced from the pedigree.
If some traits in the pedigree are
unknown, then averaging with respect to the ancestral distribution,
the multivariate normality is preserved.
It was also shown that under some forms of epistasis,
trait values within a family
are still normally distributed, although the mean will no longer be a simple
function of the traits in the parents (as there are epistatic components which
cannot be observed directly).
We emphasize that as a result of selection, population structure, and so on,
the trait distribution {\em across} the population can be far
from normal; the infinitesimal model as we define it only asserts that the
trait distributions {\em within families} are normally distributed, with a
variance-covariance matrix
that is determined entirely by that in an ancestral population and
the probabilities of identity determined by the pedigree.
Moreover, as a result of the multivariate normality, conditioning on some
of the trait values within that pedigree has predictable effects on the
mean and variance within and between families.
In this paper, we show that this extraordinary robustness of the infinitesimal
model extends to include dominance.
The trait distribution will once again be a multivariate normal distribution
whose mean and variance is
expressed in terms of the variance components
in an ancestral population and identities
determined by the pedigree, but now, with just first order dominance
effects, the identities required
will be up to fourth order.
As with the case of epistasis, the mean is not a
simple function of the trait values in the parents, and there is nontrivial
covariance between families.
One can think of the trait values within a family as
consisting of two parts. Both are normally distributed.
In the additive case, the first reduces to the mean of the trait values of
the parents;
with dominance it will be random (even if we condition on knowing the
parental traits), but the same for all
individuals in the family.
What is at first sight surprising is that even if we condition on
knowing the trait values of the parents, this shared quantity is normally
distributed.
We
show how to calculate its mean and variance
from knowledge of variance components in the ancestral
population and the pedigree, both with and without knowledge of the
trait values of the parents. Knowing the trait values of the parents
shifts the mean in a predictable way; the variance is independent of
the parental trait values.
The second part of the trait value, which is independent for each offspring in the family,
is independent of the first; it encodes the randomness of Mendelian
inheritance. It
is a draw from a normally distributed random variable with mean
zero and variance again determined by the pedigree and variance components in
the ancestral population. It is not affected by conditioning on parental
trait values. This segregation of the trait into a
shared part and a part that is independent for each member of a family, is
not the classical subdivision into additive and dominance components, but
it arises naturally
both in the formulation of the infinitesimal model and in its
derivation as a limit
of Mendelian inheritance for a large number of loci each of small effect.
Our work can be seen as an extension of that of Abney et al.~(2000), who
\nocite{abney/mcpeek/ober:2000}
establish sufficient conditions for a Central Limit Theorem to be
applied to the vector of trait values in the presence of dominance and
inbreeding. Here we establish the magnitude of the
error in that normal approximation; we
verify that in conditioning on the trait values of the parents of an
individual we are not (unless those traits are very extreme or
the pedigree is very inbred) leaving
the domain where the normal approximation is valid; and we
write down the effect of
knowing those parental trait values on the distribution of the
individual's own trait.
A careful statement of our results can be found in Theorems~\ref{convergence of residuals}
and~\ref{shared parent theorem}.
We begin by defining the identity coefficients that we need to be able to
calculate from the pedigree, and the variance and covariance components that
we need to know in the ancestral population, to describe the model precisely.
Having written down the model in terms of quantities that are familiar
from classical quantitative genetics, we explore its accuracy numerically.
Finally, we derive this extension of the infinitesimal model as a limit of a
model of Mendelian inheritance
on the pedigree. The calculations are somewhat involved, and almost all
will be relegated to the appendices. We must modify the
strategy of Barton et al.~(2017), which, although valid for the part of the
trait value which is independent for each individual within the family,
does not suffice for proving normality of the part of the trait value
that is shared by all individuals within a family.
\nocite{barton/etheridge/veber:2017}
To prove that this is normally distributed requires a new approach, based
on an extension of Stein's method of exchangeable pairs.
To keep the expressions in our calculations manageable,
we satisfy ourselves with presenting
the details only in the case in which we condition on knowing the trait
values of the parents of an individual, in contrast to the additive
case of Barton et al.~(2017), in which we conditioned on knowing all the
trait values in the pedigree up to the parental generation.
Our
approach could readily be extended to conditioning on knowledge of
more trait values, which amounts to conditioning a multivariate normal
on some of its marginals. In
Appendix~\ref{appendix: accumulation of information} we present the new ideas
that are required
to control the way in which errors in
the infinitesimal approximation accumulate from knowledge of
trait values of more distant relatives in the presence of dominance.
Just as in the additive case, the key will
be to show that because many different combinations of allelic states
are consistent with the same trait value,
knowledge of the pedigree, and the trait
values of the parents of an individual
in that pedigree, actually gives very little
information about the allelic state at a particular locus in that
individual, or about correlations between two specific loci.
An important consequence of this is that, in practice,
it is going to be hard to
observe
signals of polygenic adaptation, because even a large shift in a trait
caused by strong selection doesn't yield a prediction about alleles
at a particular locus.
\section{Identity coefficients}
In the case of an additive trait, the infinitesimal model can be
expressed in terms of the variance in the ancestral population and
two-way identity coefficients, that is, the probability that an allele
in individual $i$ in generation $t$ is identical by descent with
the corresponding allele in individual $j$.
In order to incorporate dominance, we require two, three and four-way
identities.
\subsubsection*{Recursions for pairwise identity by descent}
Two way identities are readily expressed as solutions to a recurrence.
We define $F_{ij}$ to be the probability of identity between two genes, one
from individual $i$ and one from individual $j$. When $i=j$, $F_{ii}$ is
defined to be the probability of identity by descent of two distinct genes in
the diploid individual.
The recursion for $F$ can be written in terms of a {\em pedigree matrix},
$P_{i,k}(t)$, which gives the probability that a
gene in $i$ in generation $t$
came from parent $k$ in generation $(t-1)$;
each row has two non-zero entries each with
value 1/2, unless the individual is produced
by selfing, in which case there is a single entry with value 1. In contrast to
Barton et al.~(2017),
\nocite{barton/etheridge/veber:2017} where we focused on haploids, here we necessarily
have to deal with diploids.
For diploids,
the recursion for $F$ is
\begin{equation}
\label{diploid recursion for identity}
F_{ij}(t)=\sum _{k,l}P_{i,k}(t)P_{j,l}(t)F_{kl}^*(t-1),
\end{equation}
where
\begin{equation*}
F_{kl}^*=F_{kl} \quad\mbox{if }k\neq l,
\qquad F_{kk}^*=\frac{1}{2}\left(1+F_{kk}\right).
\end{equation*}
The quantity $F_{kl}^*$ is the probability of identity of two genes
drawn {\em independently} from $k$, $l$;
if $k=l$, then the probability of drawing the same gene twice is one half.
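As an illustration, recursion~(\ref{diploid recursion for identity}) can be iterated numerically. The sketch below is not from the paper and all names are illustrative; it takes the parents of each offspring explicitly rather than via the pedigree matrix, which makes it easy to treat the within-individual identity correctly: the two genes in an individual descend one from each parental slot, so the diagonal entry is $F^*$ evaluated at the parent pair rather than a sum over parental sources.

```python
import numpy as np

def step_identity(parents, F_prev):
    """One generation of the diploid identity recursion.

    parents[i] = (a, b): indices of the two parents of offspring i
                 (a == b under selfing).
    F_prev[k, l]: identity probability between parents k and l; the
                  diagonal holds the identity of the two distinct genes
                  within an individual.
    """
    # F*_kl: identity of two genes drawn *independently* from k and l;
    # on the diagonal the same gene is drawn twice with probability 1/2.
    F_star = F_prev.copy()
    np.fill_diagonal(F_star, 0.5 * (1.0 + np.diag(F_prev)))
    n = len(parents)
    F = np.zeros((n, n))
    for i, (a, b) in enumerate(parents):
        for j, (c, d) in enumerate(parents):
            if i == j:
                # the two genes in i come one from each parental slot
                F[i, i] = F_star[a, b]
            else:
                # a random gene from i descends from a or b, w.p. 1/2 each
                F[i, j] = 0.25 * (F_star[a, c] + F_star[a, d]
                                  + F_star[b, c] + F_star[b, d])
    return F
```

For two full sibs of unrelated, non-inbred founders this returns the familiar $F_{ij}=1/4$ with $F_{ii}=0$, while a selfed offspring of a non-inbred founder has $F_{ii}=1/2$.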
\subsubsection*{Higher order identities}
In order to state our result, we shall need third and fourth order identities.
We use $F_{122}$ for the probability that the two alleles in individual
$2$ are identical by descent {\em and} they are identical by descent with
an allele chosen at random from individual $1$.
We write $F_{1122}$ for the probability that all four alleles across
individuals $1$ and $2$ are identical by descent; this corresponds
to the quantity $\delta$ in Walsh \& Lynch~(2018),
\nocite{walsh/lynch:2018}
Chapter 11.
We need an expression for the probability that each gene in
individual $1$ is identical by descent with a {\em different} gene
in individual $2$ and all four are not identical. We shall denote
this by $\widetilde{F}_{1212}$.
This is denoted by
$(\Delta -\delta)$ in Walsh \& Lynch~(2018). \nocite{walsh/lynch:2018}
Finally we need the probability that the two alleles in individual~$1$
are identical, as are the two alleles in individual $2$, but the four
alleles are not {\em all} identical, which we shall denote
by $\widetilde{F}_{1122}$.
We illustrate the three and four way identities in
Figure~\ref{picture for identities}. During the course of our
mathematical derivations,
it will be convenient to express all two, three, and four way identities
in terms of the nine possible four way identities \nocite{walsh/lynch:2018}
(Walsh \& Lynch, 2018;
Figure 11.5). This is illustrated in
Figure~\ref{picture for all identities}.
\begin{figure}
\centerline{\includegraphics[width=4in]{identities3.pdf}}
\caption{Three and four way identities. Lines indicate identity of alleles.
See the main text for further explanation. }
\label{picture for identities}
\end{figure}
\subsubsection*{Calculating identity coefficients}
Several papers have developed algorithms for calculating identity
coefficients, given a pedigree (Karigl, 1981; Abney, 2009;
Garcia-Cortes, 2015; Kirkpatrick et al., 2019).
\nocite{karigl:1981, abney:2009, garcia-cortes:2015, kirkpatrick/ge/wang:2019}
These assume a single genetic locus, and primarily consider the nine
condensed identity coefficients of Figure~\ref{picture for all identities}
that describe the relationship
between two diploid individuals.
This body of work has developed algorithms
that can efficiently calculate identity coefficients involving
two individuals, across large pedigrees.
Karigl (1982) \nocite{karigl:1982}
considers (but does not implement) calculation of identities
amongst more than two individuals.
Here, we define and implement a (fairly) simple algorithm that deals
with multiple sets of genes across multiple individuals.
This is unlikely
to be as efficient as existing algorithms for identities amongst one
set of genes across two individuals; it is limited by the need to calculate
and store identities amongst very many sets of ancestral genes,
corresponding to the very many routes by which genes may descend through
the pedigree.
First we establish our notation.
The two genes in each individual each receive a separate label. Thus a gene in
individual $i$ will have label
\(\mathbf{i}=\{i,1\}\) or \(\mathbf{i}=\{i,2\}\).
We define
\(F\left[\left\{S_1,S_2,\ldots , S_n\right\}\right]\) to be the probability
that the sets of genes $S_1, S_2,\ldots ,S_n$ are identical by descent,
tracing back to $n$ distinct founders in the ancestral population.
For example,
$F[\{\{\mathbf{i}\},\{\mathbf{j},\mathbf{k}\},\{\mathbf{l},\mathbf{m}\}\}]$
is the probability that these 3 sets of genes each trace back to
3 distinct founders. Necessarily,
$F[\{\{\mathbf{i}\}\}]=1$ (a single gene traces back to a unique founder),
and the probability of identity of $\mathbf{i}$ and $\mathbf{j}$ satisfies
$F[\{\{\mathbf{i},\mathbf{j}\}\}]=
1-F[\{\{\mathbf{i}\},\{\mathbf{j}\}\}]$.
Identities in generation $t$ are denoted $F_t$.
Given the pedigree, the identities are defined recursively;
$F_t$ is a linear combination of identities $F_{t-1}$ in the previous
generation.
Here we simply outline the algorithm.
A detailed explanation in terms of the \textit{Mathematica} code is in the
supplementary material.
In generation $t=0$ all individuals are assumed unrelated and so $F_0[S]$ is
set to be $1$ if $S$ is a collection of sets each containing just a single
distinct gene,
$S=\{\{\mathbf{i}_1\},\{\mathbf{i}_2\},\ldots
, \{\mathbf{i}_n\}\}$ with $\mathbf{i}_a\neq \mathbf{i}_b$ for $a\neq b$.
Otherwise it is set to zero.
The algorithm proceeds in two steps, first identifying the possible
parents from which each gene is descended and then the possible genes
within that parent. In this way, a list of all possible scenarios is
generated, with
each scenario having equal probability. A slight twist here is that
if a set contains a single gene in a given individual, that gene traces
back to one or other parent of the individual, with equal probability;
two genes in the same individual must trace back to the two parents,
although those may be the same individual if there is selfing.
This list contains many permutations that are equivalent, differing
only by order; these are tallied to reduce the number of configurations that
need to be stored, resulting in a weighted list.
This gives a recursion back to the founder generation.
The number of generations
and size of pedigree is limited by the amount of memory needed to store
the intermediate lists.
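For small pedigrees, the exact algorithm can be sanity-checked by Monte Carlo `gene dropping': label the founder genes distinctly, let genes descend through the pedigree by Mendelian sampling, and estimate each identity coefficient as the fraction of replicates in which the relevant genes carry equal labels. The sketch below is a standard alternative, not the supplementary \textit{Mathematica} code, and its names are illustrative.

```python
import random

def gene_drop(n_founders, parents, n_reps, rng):
    """Simulate gene descent through a pedigree.

    parents: list of (a, b) tuples, indices into the growing list of
    individuals (founders occupy indices 0..n_founders-1, offspring are
    appended in order). Returns, per replicate, the pair of founder-gene
    labels carried by every individual.
    """
    out = []
    for _ in range(n_reps):
        genomes = [(2 * f, 2 * f + 1) for f in range(n_founders)]
        for (a, b) in parents:
            g1 = genomes[a][rng.randrange(2)]  # random gene from parent a
            g2 = genomes[b][rng.randrange(2)]  # random gene from parent b
            genomes.append((g1, g2))
        out.append(genomes)
    return out

def estimate_F(reps, i, j):
    """Pairwise identity F_ij, averaged over the four gene pairs."""
    tot = sum((g[i][0] == g[j][0]) + (g[i][0] == g[j][1])
              + (g[i][1] == g[j][0]) + (g[i][1] == g[j][1])
              for g in reps)
    return tot / (4 * len(reps))

def estimate_F1122(reps, i, j):
    """Probability that all four alleles in individuals i, j are IBD."""
    hits = sum(1 for g in reps
               if g[i][0] == g[i][1] == g[j][0] == g[j][1])
    return hits / len(reps)
```

For full sibs of two unrelated founders, the estimate of $F_{ij}$ converges to $1/4$, while $F_{1122}$ is identically zero (the sibs' two genes always trace to different founders).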
\section{The infinitesimal model with dominance}
\label{setting out the model}
The population is diploid and trait values are
determined by the allelic states at
$M$ unlinked loci.
We assume that in generation zero,
the individuals that found the pedigree are unrelated and
sampled from an ancestral population in which all loci are
in linkage equilibrium.
In order to define the various quantities
that enter into our model, we introduce notation to
express the trait as a sum of effects over loci. However, we
emphasize that once these
components, all of which are familiar from classical quantitative genetics,
have been calculated for the ancestral population,
the model can be defined without reference to the effects of individual loci.
To adhere to the notation
of Barton et al.~(2017), \nocite{barton/etheridge/veber:2017}
we use $\chi_l^1$, $\chi_l^2$ for the allelic
states at locus $l$.
We write $\bar{z}_0$ for the mean trait value in the ancestral population and
express the trait value of an individual as $\bar{z}_0$ plus a sum of allelic effects.
The influence of each locus will scale as $1/\sqrt{M}$,
where $M$ is the total number of loci (assumed large).
We write $\eta_l(\chi_l)$ to denote the (order one)
scaled additive effect of the
allele $\chi_l$ and $\phi_l(\chi_l^1,\chi_l^2)$ for the scaled dominance
component (assumed to be symmetric).
We shall assume that both $\eta_l$ and $\phi_l$ are uniformly
bounded. We also suppose that dominance effects are sufficiently
`balanced' that inbreeding
depression is finite. More precisely, let $\widehat{\chi}_l$ denote an
allele sampled at random from the distribution of
alleles at locus $l$ in the ancestral
population, then $\iota$ defined by
$$\iota=\frac{1}{\sqrt{M}}\sum_{l=1}^M\mathbb E[\phi_l(\widehat{\chi}_l,\widehat{\chi}_l)]$$
is bounded (as a function of $M$). This condition is crucial to our result.
For simplicity, we don't consider higher order dominance components (that is $D\times D$ -- or more complex -- components) here.
For an individual in the ancestral population, its allelic states at locus
$l$, which we denote by
$\widehat{\chi}^1_l, \widehat{\chi}^2_l$, are independent draws from
a distribution $\widehat{\nu}_l$ on possible allelic states that we
assume is known.
It is convenient to normalise so that $\mathbb E[\eta_l(\widehat{\chi}_l)]=0$,
$\mathbb E[\phi_l(\widehat{\chi}_l^1,\widehat{\chi}_l^2)]=0$,
and for any value $x'$ of the allelic state at locus $l$, the
conditional expectations
$\mathbb E[\phi_l(\widehat{\chi}_l,x')]=0
=\mathbb E[\phi_l(x',\widehat{\chi}_l)]$.
We explain in Section~\ref{mendelian inheritance} why these
assumptions do not result in a
loss of generality.
The trait value takes the form
\begin{equation}
\label{expression for genetic component of trait}
Z=\bar{z}_0+\frac{1}{\sqrt{M}} \sum_{l=1}^M\left(\eta_l(\chi_l^1)
+\eta_l(\chi_l^2)
+\phi_l(\chi_l^1,\chi_l^2)\right).
\end{equation}
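As a concrete reading of this formula, the sketch below (illustrative names, not from the paper) evaluates $Z$ for a single genotype given tabulated per-locus effects $\eta_l$ and $\phi_l$:

```python
import numpy as np

def trait_value(z0, geno, eta, phi):
    """Trait value Z = z0 + (1/sqrt(M)) * sum_l (eta + eta + phi).

    geno[l] = (a1, a2): indices of the two allelic states at locus l.
    eta[l]:  array of scaled additive effects at locus l.
    phi[l]:  symmetric matrix of scaled dominance effects at locus l.
    """
    M = len(geno)
    total = 0.0
    for l, (a1, a2) in enumerate(geno):
        total += eta[l][a1] + eta[l][a2] + phi[l][a1, a2]
    return z0 + total / np.sqrt(M)
```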
Let us write $i[1]$ and $i[2]$ for the parents of the individual labelled $i$. As advertised in the introduction, an offspring's trait value has two components. The first is shared by all its siblings, and is a random quantity characteristic of the family. The second component is unique to the individual and independent of the first. In our proofs, we shall
investigate these two parts separately.
We shall use the notation $Z^i=({\cal A}^i+{\cal D}^i)+(R^i+S^i)$,
where the shared part has been further subdivided into
the contribution ${\cal A}^i$ from the additive
component, and the contribution ${\cal D}^i$ from the dominance component.
The
residuals $R^i$ and $S^i$ are determined by Mendelian inheritance and
correspond to the contributions from
the additive and dominance components respectively.
Explicit expressions for these quantities are in
equations~(\ref{remainder R1})-(\ref{defn of D}) below.
In this notation, the additive part of the trait value is
${\cal A}^i+R^i$ and the dominance deviation is ${\cal D}^i+S^i$.
\subsection*{Trait values for a given pedigree}
We now define the infinitesimal model in
terms of classical quantities of quantitative genetics that can
be expressed in terms of expectations in the ancestral
population and identities determined by the pedigree. We use the
notation of Walsh \& Lynch~(2018),
\nocite{walsh/lynch:2018}
which we recall in
Table~\ref{QG coefficients}.
\begin{table}
\caption{Coefficients of classical quantitative genetics}
~
We use $\widehat{\chi}_l$ to denote a sample
from the distribution $\widehat{\nu}_l$ of possible allelic states at
locus $l$ in the ancestral population; $\widehat{\chi}_l^1$,
$\widehat{\chi}_l^2$ are independent draws from the same distribution.
\centering
\begin{tabular}{ll}\\
\hline\hline\\
Additive variance & $\sigma_A^2=\frac{2}{M}\sum_{l=1}^M
\mathbb E[\eta_l(\widehat{\chi}_l)^2]$\\
\\
Dominance variance & $\sigma_D^2=\frac{1}{M}\sum_{l=1}^M
\mathbb E[\phi_l(\widehat{\chi}_l^1,\widehat{\chi}_l^2)^2]$\\
\\
Inbreeding depression & $\iota =\frac{1}{\sqrt{M}}\sum_{l=1}^M
\mathbb E[\phi_l(\widehat{\chi}_l,\widehat{\chi}_l)]$\\
\\
Sum of squared locus-specific & $\iota^*=\frac{1}{M}\sum_{l=1}^M
\mathbb E[\phi_l(\widehat{\chi}_l,\widehat{\chi}_l)]^2$\\
inbreeding depressions & \\
\\
Variance of dominance effects & $\sigma_{DI}^2=\frac{1}{M}\sum_{l=1}^M
\left(\mathbb E[\phi_l(\widehat{\chi}_l,\widehat{\chi}_l)^2]-
\mathbb E[\phi_l(\widehat{\chi}_l,\widehat{\chi}_l)]^2\right)$\\
in inbred individuals & \\
\\
Covariance of additive and & $\sigma_{ADI}=\frac{2}{M}\sum_{l=1}^M
\mathbb E\left[\eta_l(\widehat{\chi}_l)
\phi_l(\widehat{\chi}_l,\widehat{\chi}_l)\right]$
\\
dominance effects in inbred &
\\
individuals &
\end{tabular}
\label{QG coefficients}
\end{table}
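Given the per-locus effects and the ancestral allele distributions, the coefficients in the table can be evaluated exactly. The sketch below (illustrative names) does this for effects that are already centred as described above:

```python
import numpy as np

def variance_components(eta, phi, freqs):
    """Evaluate the ancestral variance components of the table exactly.

    eta[l]:   array of additive effects per allelic state at locus l.
    phi[l]:   symmetric matrix of dominance effects at locus l.
    freqs[l]: ancestral allele frequencies at locus l.
    Effects are assumed centred (E[eta] = 0, E[phi(., x')] = 0).
    """
    M = len(eta)
    out = dict(sigma_A2=0.0, sigma_D2=0.0, iota=0.0,
               iota_star=0.0, sigma_DI2=0.0, sigma_ADI=0.0)
    for p, e, d in zip(freqs, eta, phi):
        Ee2 = np.sum(p * e ** 2)               # E[eta(x)^2]
        Ed2 = np.sum(np.outer(p, p) * d ** 2)  # E[phi(x1,x2)^2]
        Edxx = np.sum(p * np.diag(d))          # E[phi(x,x)]
        Edxx2 = np.sum(p * np.diag(d) ** 2)    # E[phi(x,x)^2]
        Eed = np.sum(p * e * np.diag(d))       # E[eta(x) phi(x,x)]
        out['sigma_A2'] += 2 * Ee2 / M
        out['sigma_D2'] += Ed2 / M
        out['iota'] += Edxx / np.sqrt(M)
        out['iota_star'] += Edxx ** 2 / M
        out['sigma_DI2'] += (Edxx2 - Edxx ** 2) / M
        out['sigma_ADI'] += 2 * Eed / M
    return out
```

For a single biallelic locus with frequencies $(1/2,1/2)$, effects $\eta=(1,-1)$ and dominance matrix $\phi=\begin{pmatrix}2&-2\\-2&2\end{pmatrix}$ (which satisfies the centring conditions), this gives $\sigma_A^2=2$, $\sigma_D^2=4$, $\iota=2$, $\iota^*=4$, and $\sigma_{DI}^2=\sigma_{ADI}=0$.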
Under the infinitesimal model, conditional on the pedigree, the components
$({\cal A}^i+{\cal D}^i)$ and $(R^i+S^i)$ of
the trait values of individuals in a family
follow independent multivariate normal distributions.
In Appendix~\ref{QG derivations}
the expressions presented in this section will be justified by taking
the
trait values
determined by~(\ref{expression for genetic component of trait})
under a model of Mendelian inheritance. In writing down the
infinitesimal model, we shall assume
that as the number of loci tends to infinity, the quantities defined in
Table~\ref{QG coefficients}
converge to well defined limits.
To simplify notation, we shall use $1$ and $2$ in place of $i[1]$ and
$i[2]$ in our expressions for identity; thus, for example,
$F_{12}\equiv F_{i[1],i[2]}$, and $F_{11}$ will be the probability of
identity of the two copies of an allele in parent $i[1]$.
The mean and variance of $({\cal A}^i+{\cal D}^i)$ are then
\begin{equation}
\label{mean A+D}
\mathbb E[{\cal A}^i+{\cal D}^i]=\iota F_{12},
\end{equation}
and
\begin{align}
\label{variance A+D unconditioned}
\mathtt{Var}({\cal A}^i+{\cal D}^i)= &\ \frac{\sigma_A^2}{2}
\left(1+\frac{F_{11}+F_{22}}{2}+2F_{12}\right)
+\sigma_{ADI}\left(F_{12}+\frac{F_{112}+F_{122}}{2}\right) \nonumber
\\ & +\frac{(\sigma_{DI}^2+\iota^*)}{4}\left(F_{12}+F_{112}+F_{122}+F_{1122}\right) +\frac{\iota^*}{4}\widetilde{F}_{1212}-\iota^* F_{12}^2 \nonumber
\\
& +\frac{\sigma_D^2}{4}\left(1-F_{12}+F_{22}-F_{122}+
F_{11}-F_{112}+\widetilde{F}_{1122}+\frac{1}{2}\widetilde{F}_{1212}\right).
\end{align}
In this expression, the term proportional to $\sigma_A^2$ is the variance of ${\cal A}^i$, the term proportional to $\sigma_{ADI}$ is twice the covariance of ${\cal A}^i$ and ${\cal D}^i$ and the remaining sum gives the variance of ${\cal D}^i$. Recall that we are assuming here that the ancestral population is
in linkage equilibrium. With linkage there is an additional term, cf.~the remark below equation~(\ref{variance Z}). The components $({\cal A}+{\cal D})$ are also correlated across families. For individuals
labelled $i$ and $j$ respectively,
\begin{align}
\label{covariance A plus D}
\mathtt{Cov}(({\cal A}^i+{\cal D}^i),({\cal A}^j+{\cal D}^j))=&\ 2F_{ij}\sigma_A^2
+(F_{ijj}+F_{iij})\sigma_{ADI}\nonumber\\
& +\widetilde{F}_{ijij}\sigma_D^2
+F_{iijj}(\sigma_{DI}^2+\iota^*)-\iota^2F_{ii}F_{jj}
+\iota^*\widetilde{F}_{iijj}.
\end{align}
Note that, in contrast to our expression for the variance of $Z^i$, in
this expression, the subscripts $i$ and $j$ in the identities
refer to the individuals themselves, not their parents; for example
the expression $F_{ij}$ is the probability of
identity of two alleles, one sampled at random from individual $i$ and one
sampled at random from individual $j$.
We reserve letters for individuals in the current generation, and
numbers for their parents.
Combining the components that segregate within families, the $N^i=R^i+S^i$ are independent of each other (due to the independence of the variables encoding Mendelian inheritance), mean-zero, normally distributed random variables with variance
\begin{align}
\mathtt{Var}(N^i) = &
\left(1-\frac{F_{11}+F_{22}}{2}\right)\frac{\sigma_A^2}{2}
+
\frac{1}{4}\left(3F_{12}-F_{1122}-F_{112}-F_{122}\right)
\left(\sigma_{DI}^2+\iota^*\right) \nonumber\\
& +\frac{1}{4}\left(3(1-F_{12})-(F_{11}-F_{112})-(F_{22}-F_{122})
-\widetilde{F}_{1122}-\frac{1}{2}\widetilde{F}_{1212}\right)\sigma_D^2
\nonumber\\
&+\left(F_{12}-\frac{F_{112}+F_{122}}{2}\right)\sigma_{ADI}-\frac{\iota^*}{4}\widetilde{F}_{1212}.
\end{align}
Here again, the term proportional to $\sigma_A^2$ is the variance of $R^i$, the term proportional to $\sigma_{ADI}$ is twice the covariance of $R^i$ and $S^i$, and the remaining sum equals the variance of $S^i$.
We calculate the mean, variance and covariance of these different components
in Appendix~\ref{QG derivations}. In order to recover the mean and variance of the trait values, we add
the contributions of $({\cal A}^i+{\cal D}^i)$ and $N^i$ and observe that the identity
$F_{12}$ in our expressions for the variances of these quantities (which
we recall was
the probability of identity of one gene sampled at random from each of
the parents $i[1]$, $i[2]$ of our individual)
corresponds to $F_{ii}$. This yields that,
conditional on the pedigree,
\begin{equation}
\label{mean Z}
\mathbb E[Z^i]=\bar{z}_0+\iota F_{ii},
\end{equation}
\begin{equation}
\label{covariance Z}
\mathtt{Cov}(Z^i,Z^j)=2F_{ij}\sigma_A^2
+(F_{ijj}+F_{iij})\sigma_{ADI}+\widetilde{F}_{ijij}\sigma_D^2
+F_{iijj}(\sigma_{DI}^2+\iota^*)-\iota^2F_{ii}F_{jj} +\iota^*\widetilde{F}_{iijj},
\end{equation}
and
\begin{equation}
\label{variance Z}
\mathtt{Var}(Z^i)=\sigma_A^2(1+F_{ii})+\sigma_D^2(1-F_{ii}) +(\sigma_{DI}^2+\iota^*)F_{ii} +2\sigma_{ADI}F_{ii}-\iota^*F_{ii}^2.
\end{equation}
For a single individual, its trait value can only depend on the two alleles
that it carries at each locus, so it is no surprise that this expression
depends only on pairwise identities between those two alleles.
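As a sanity check on these expressions, the variances of the shared and residual components must sum to~(\ref{variance Z}) once $F_{12}$ is identified with $F_{ii}$. The following sketch verifies this numerically; the values assigned to the variance components and identity coefficients are arbitrary placeholders, not quantities from our simulations.

```python
# Arbitrary test values (placeholders, not estimates from the paper)
sigA2, sigD2, sigDI2, sigADI, iost = 0.27, 0.07, 0.11, 0.03, 0.05
F11, F22, F12 = 0.21, 0.17, 0.13
F112, F122, F1122 = 0.09, 0.08, 0.05
Ft1122, Ft1212 = 0.06, 0.04          # the \tilde{F} identities

# Variance of the shared component A^i + D^i
var_AD = (sigA2 / 2 * (1 + (F11 + F22) / 2 + 2 * F12)
          + sigADI * (F12 + (F112 + F122) / 2)
          + (sigDI2 + iost) / 4 * (F12 + F112 + F122 + F1122)
          + iost / 4 * Ft1212 - iost * F12 ** 2
          + sigD2 / 4 * (1 - F12 + F22 - F122 + F11 - F112
                         + Ft1122 + 0.5 * Ft1212))

# Variance of the residual N^i = R^i + S^i
var_N = ((1 - (F11 + F22) / 2) * sigA2 / 2
         + 0.25 * (3 * F12 - F1122 - F112 - F122) * (sigDI2 + iost)
         + 0.25 * (3 * (1 - F12) - (F11 - F112) - (F22 - F122)
                   - Ft1122 - 0.5 * Ft1212) * sigD2
         + (F12 - (F112 + F122) / 2) * sigADI
         - iost / 4 * Ft1212)

# Variance of Z^i, with F_ii identified with F_12
F_ii = F12
var_Z = (sigA2 * (1 + F_ii) + sigD2 * (1 - F_ii)
         + (sigDI2 + iost) * F_ii + 2 * sigADI * F_ii - iost * F_ii ** 2)

assert abs((var_AD + var_N) - var_Z) < 1e-12
```

Note that the $\iota^*\widetilde{F}_{1212}/4$ terms cancel between the two components, as do all identities other than $F_{12}$.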
We remark that~(\ref{variance Z}) differs from the corresponding expression
(Equation 11.6c) in Walsh \& Lynch~(2018).
\nocite{walsh/lynch:2018}
To recover exactly their expression, one must
add $(\tilde{f}-F_{ii}^2)(\iota^2-\iota^*)$ to the right hand side, where
$\tilde{f}$ is the probability of identity at two distinct loci
in individual $i$.
We see how to recover this term in
Remark~\ref{source of discrepancy from WL},
but because we have assumed
linkage equilibrium in our base population, for the
period over which the infinitesimal model remains a good approximation,
under our assumptions we have
$\tilde{f}\approx F_{ii}^2$. This is not to say that there is not a
significant contribution to the trait value from linkage disequilibrium;
it is just that for any specific pair of loci it is negligible. We shall
see a toy example that reinforces this point at the
beginning of Section~\ref{mendelian inheritance}.
We emphasize again that our partition of the trait values into a
contribution that is shared by all individuals in a family and residuals
differs from the conventional split into an additive part and a
dominance deviation. The additive part of the trait is
$A^i={\cal A}^i+R^i$ and the dominance component is $D^i={\cal D}^i+S^i$.
From our calculations in Appendix~\ref{QG derivations}, we can read off
$$\mathbb E[A^i]=0,\qquad \mathbb E[D^i]=\iota F_{ii},$$
$$\mathtt{Var}(A^i)=\sigma_A^2\big(1+F_{ii}\big),
\qquad\mathtt{Cov}(A^i,D^i)=\sigma_{ADI}F_{ii},$$
and
\begin{equation}
\label{variance dominance deviation}
\mathtt{Var}(D^i)=\sigma_D^2\big(1-F_{ii}\big)+\sigma_{DI}^2F_{ii}+
\iota^*\big(F_{ii}-F_{ii}^2\big).
\end{equation}
\subsection*{Conditioning on trait values of parents}
Under the infinitesimal model,
the trait values of individuals across the pedigree are given
by a multivariate normal. Therefore standard results on conditioning multivariate
normal random vectors on their marginal values, which for ease of reference
we record in Appendix~\ref{conditioning normals}, allow us to
read off the effect on the distribution of $Z^i$ of conditioning on
$Z^{i[1]}$ and $Z^{i[2]}$.
However, a little care is needed; we shall be justifying the normal
distribution within families
as an approximation as the number of loci tends to infinity,
and we must be sure that asymptotic normality is
preserved under this conditioning.
We shall see that if, for example, parental trait values are
too extreme, then the conditioning pushes us to a part of the
probability space where the normal approximation breaks down. This is
particularly evident in the toy example that we present in
Section~\ref{mendelian inheritance}.
A justification for asymptotic normality even after conditioning is
outlined in Section~\ref{mendelian inheritance}, and details are
presented in
the appendices.
Just as in the classical infinitesimal model, the mean
and variance of the residuals $N^i=R^i+S^i$ are unchanged by conditioning on the trait
values of the parents (recall that these residuals encode the stochasticity due to Mendelian inheritance at each locus; expressions for $R^i$ and $S^i$ are given in equations~(\ref{remainder R1})-(\ref{remainder S2})). For the shared components, the mean and
variance will be distorted by quantities determined by the covariances
between $({\cal A}^i+{\cal D}^i)$ and $Z^{i[1]}$, $Z^{i[2]}$.
Let us write
\begin{equation}
\label{covariance A+D and parental trait}
C(i, i[1]):=\mathtt{Cov}(({\cal A}^i+{\cal D}^i), Z^{i[1]}),
\end{equation}
with a corresponding definition for $C(i, i[2])$.
Then, once again using $1$ and $2$ in place of $i[1]$ and $i[2]$ in our
expressions for identities,
\begin{align}
\label{expression for C([1])}
C(i, i[1])=&\ \frac{\sigma_A^2}{2}\left(1+F_{11}+2F_{12}\right)
+\frac{\sigma_{ADI}}{2}\left(F_{11}+F_{12}+2F_{112}\right)\nonumber \\
& +\sigma_D^2(F_{12}-F_{112})
+(\sigma_{DI}^2+\iota^*)F_{112}-\iota^2F_{11}F_{12},
\end{align}
with $C(i,i[2])$ given by the corresponding expression with the r\^oles
of the subscripts $1$ and $2$ interchanged.
(A derivation of this expression is provided in Appendix~\ref{QG derivations}.)
With this notation,
\begin{multline}
\label{mean A+D given parental traits}
\mathbb E[({\cal A}^i+{\cal D}^i)|Z^{i[1]},Z^{i[2]}]=\mathbb E[({\cal A}^i+{\cal D}^i)]
+\frac{1}
{\mathtt{Var}(Z^{i[1]}) \mathtt{Var}(Z^{i[2]})
-\mathtt{Cov}(Z^{i[1]},Z^{i[2]})^2}\\
\times\Bigg\{
\left(C(i, i[1])\mathtt{Var}(Z^{i[2]})
-C(i, i[2])\mathtt{Cov}(Z^{i[1]},Z^{i[2]})\right)(Z^{i[1]}-\mathbb E[Z^{i[1]}])
\\
+\left(C(i, i[2])\mathtt{Var}(Z^{i[1]})
-C(i, i[1])\mathtt{Cov}(Z^{i[1]},Z^{i[2]})\right)(Z^{i[2]}-\mathbb E[Z^{i[2]}])
\Bigg\},
\end{multline}
and
\begin{multline}
\label{variance A+D given parental traits}
\mathtt{Var}(({\cal A}^i+{\cal D}^i)|Z^{i[1]}, Z^{i[2]})
=\mathtt{Var}({\cal A}^i+{\cal D}^i)
\\-
\frac{\mathtt{Var}(Z^{i[1]})
C(i, i[2])^2+\mathtt{Var}(Z^{i[2]})
C(i,i[1])^2-2\mathtt{Cov}(Z^{i[1]},Z^{i[2]})C(i,i[1])C(i,i[2])}
{\mathtt{Var}(Z^{i[1]})\mathtt{Var}(Z^{i[2]})-
\mathtt{Cov}(Z^{i[1]},Z^{i[2]})^2}.
\end{multline}
(We have implicitly assumed that $i[1]\neq i[2]$; in the case
$i[1]=i[2]$ the expression is simpler as we are then conditioning a bivariate
normal on one of its marginals.)
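Expressions~(\ref{mean A+D given parental traits}) and~(\ref{variance A+D given parental traits}) are the standard bivariate conditioning formulas written out explicitly. A minimal sketch, using arbitrary test values, checks them against solving the normal equations $\Sigma b = c$ directly by elimination:

```python
# Arbitrary test values: parental trait variances/covariance, and the
# covariances C1 = C(i,i[1]), C2 = C(i,i[2]) with the shared component.
V1, V2, cov = 1.3, 1.1, 0.4          # must satisfy V1*V2 > cov**2
C1, C2 = 0.6, 0.5
varAD = 0.9
z1, z2, mu1, mu2, muAD = 1.7, 0.2, 0.5, 0.4, 0.1

det = V1 * V2 - cov ** 2

# Closed-form conditional mean and variance, as in the text
cond_mean = muAD + ((C1 * V2 - C2 * cov) * (z1 - mu1)
                    + (C2 * V1 - C1 * cov) * (z2 - mu2)) / det
cond_var = varAD - (V1 * C2 ** 2 + V2 * C1 ** 2 - 2 * cov * C1 * C2) / det

# Direct route: solve Sigma b = c by elimination, then
# E[X|Z] = mu_X + b^T (z - mu) and Var(X|Z) = Var(X) - b^T c.
b2 = (C2 - cov * C1 / V1) / (V2 - cov ** 2 / V1)
b1 = (C1 - cov * b2) / V1

assert abs(cond_mean - (muAD + b1 * (z1 - mu1) + b2 * (z2 - mu2))) < 1e-12
assert abs(cond_var - (varAD - (b1 * C1 + b2 * C2))) < 1e-12
```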
\begin{remark}
In the purely
additive case, things simplify greatly. From the expressions above,
before conditioning, the mean of
${\cal A}^i+{\cal D}^i$ is zero (since $\iota=0$), and the variance is
$$\frac{\sigma_A^2}{2}\left(1+\frac{(F_{11}+F_{22})}{2}+2F_{12}\right).$$
Moreover,
$$\mathtt{Var}(Z^{i[1]})=\sigma_A^2(1+F_{11}),\quad
\mathtt{Var}(Z^{i[2]})=\sigma_A^2(1+F_{22}),\quad
\mathtt{Cov}(Z^{i[1]}, Z^{i[2]})=2\sigma_A^2 F_{12},$$
and
$$C(i, i[1])=\frac{1}{2}\sigma_A^2\left(1+F_{11}+2F_{12}\right),
\quad
C(i, i[2])=\frac{1}{2}\sigma_A^2\left(1+F_{22}+2F_{12}\right).$$
Substituting
into~(\ref{mean A+D given parental traits})
and~(\ref{variance A+D given parental traits}),
and observing that
\begin{align*}
& (1+F_{11})(1+F_{22}+2F_{12})^2+(1+F_{22})(1+F_{11}+2F_{12})^2
-4F_{12}(1+F_{11}+2F_{12})(1+F_{22}+2F_{12})\\
& =2\left((1+F_{11})(1+F_{22})-4F_{12}^2\right)\left(1+\frac{F_{11}+F_{22}}{2}
+2F_{12}\right),
\end{align*}
we find that
conditional on the trait values of the parents, the mean
and variance of ${\cal A}^i+{\cal D}^i$
reduce to $(Z^{i[1]}+Z^{i[2]})/2$ and zero, respectively, and
we recover the classical infinitesimal model.
\end{remark}
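The reduction claimed in the remark is easy to confirm numerically. The sketch below (an illustration rather than a proof, with $\sigma_A^2=1$ and randomly chosen identities) checks that in the additive case the regression coefficients on the two parental deviations are both $1/2$ and the conditional variance vanishes.

```python
import random

random.seed(2)

sigA2 = 1.0
for _ in range(100):
    # Identities in [0, 0.3] keep (1+F11)(1+F22) - 4 F12^2 > 0
    F11, F22, F12 = (random.uniform(0.0, 0.3) for _ in range(3))

    V1 = sigA2 * (1 + F11)
    V2 = sigA2 * (1 + F22)
    cov = 2 * sigA2 * F12
    C1 = 0.5 * sigA2 * (1 + F11 + 2 * F12)
    C2 = 0.5 * sigA2 * (1 + F22 + 2 * F12)
    varAD = sigA2 / 2 * (1 + (F11 + F22) / 2 + 2 * F12)

    det = V1 * V2 - cov ** 2
    coef1 = (C1 * V2 - C2 * cov) / det   # weight on Z^{i[1]} deviation
    coef2 = (C2 * V1 - C1 * cov) / det   # weight on Z^{i[2]} deviation
    cond_var = varAD - (V1 * C2 ** 2 + V2 * C1 ** 2
                        - 2 * cov * C1 * C2) / det

    assert abs(coef1 - 0.5) < 1e-10 and abs(coef2 - 0.5) < 1e-10
    assert abs(cond_var) < 1e-10
```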
Although in the presence of dominance
the expressions~(\ref{mean A+D given parental traits})
and~(\ref{variance A+D given parental traits})
are rather complicated, we emphasize that they
are derived from knowledge of just the ancestral population and the pedigree,
and are expressed in terms of familiar quantities from classical
quantitative genetics.
\section{Numerical examples}
In this section, we present numerical examples to illustrate the
accuracy of the predictions of the infinitesimal model.
We first generated a pedigree for a population of constant size of $N=30$
diploid individuals over $50$ discrete generations. Mating is
random, but with no selfing.
In order to facilitate comparison of different scenarios, the same
pedigree was used for all subsequent simulations. In this way, the
identity coefficients are held constant. As expected, the mean probability
of identity between pairs of genes sampled from different individuals
in generation $t$ is close to $1-(1-1/(2N))^t$.
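The quoted approximation can be illustrated with a toy simulation. The sketch below uses the simpler gene-copy Wright--Fisher model (each of the $2N$ gene copies picks its parent copy uniformly at random, ignoring the diploid no-selfing structure of our actual pedigree), for which the probability of identity after $t$ generations is exactly $1-(1-1/(2N))^t$.

```python
import random

random.seed(3)

N, t, reps = 30, 10, 400             # population size, generations, replicates
expected = 1 - (1 - 1 / (2 * N)) ** t

total = 0.0
for _ in range(reps):
    # Label the 2N gene copies distinctly; each generation every copy
    # picks its parent copy uniformly at random.
    labels = list(range(2 * N))
    for _ in range(t):
        labels = [random.choice(labels) for _ in range(2 * N)]
    # Fraction of identical-by-descent pairs among all distinct pairs
    same = sum(labels[i] == labels[j]
               for i in range(2 * N) for j in range(i + 1, 2 * N))
    total += same / (2 * N * (2 * N - 1) / 2)

estimate = total / reps
assert abs(estimate - expected) < 0.03
```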
We define a trait, $Z$, which depends on $M=1000$ biallelic loci.
There is no epistasis, so that the trait value is a
sum across loci. In the examples here, we assume
complete dominance, so that the effects of the three genotypes at each
locus are either $-\alpha:-\alpha:+\alpha$ or $-\alpha:+\alpha:+\alpha$.
In order to ensure that the inbreeding depression $\iota$ is bounded, we
need to have some `balance' and so choose the effects at each locus
according to an independent Bernoulli random variable with parameter $H$; that
is, the probability that the effects across the three genotypes at locus $l$ are
$-\alpha:-\alpha:+\alpha$ is $1-H$, independently for each locus.
The effect size $\alpha$ is taken to be $1/\sqrt{M}$
for all loci and $H=\frac{1}{2}+\frac{2}{\sqrt{M}}$. With these choices
the additive and dominance variances will be $\mathcal{O}(1)$.
In the ancestral population, the allele frequencies were
generated to mimic neutral allele frequencies with very low mutation rates, but
conditioned to segregate at each locus. Thus, allele frequencies at every locus were sampled independently according to a distribution with density proportional to $(p(1-p))^{\epsilon-1}$,
with $\epsilon=0.001$, but with those in $[0, 1/60]$ and $[1-1/60, 1]$
discarded (and the distribution renormalised). Then for each population replicate, these frequencies were used to endow each individual in the base population with an allelic type at every locus.
Variance components are defined with respect to
this reference set of allele frequencies. For the population generated for the
examples presented here, these values were $\sigma_A^2 =0.269$,
$\sigma_D^2 =0.073$, and the inbreeding depression
$\iota =-0.531$. The additive and dominance components are
uncorrelated in the base population
($\mathtt{Cov}(A, D)=0$).
In the numerical experiments that follow, each replicate population is
started at time zero from a different collection of genotypes, sampled
from this base distribution.
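The construction of the base population and of an individual's trait value can be sketched as follows. This is an illustration only: it assumes the base-population density is of the neutral Beta-type form $(p(1-p))^{\epsilon-1}$, truncated and renormalised as described, and it generates a single individual rather than a pedigree.

```python
import math
import random

random.seed(4)

M = 1000                       # number of biallelic loci
eps = 0.001
alpha = 1 / math.sqrt(M)       # effect size per locus
H = 0.5 + 2 / math.sqrt(M)     # probability the coding is -a:+a:+a

lo, hi = 1 / 60, 1 - 1 / 60

def sample_freq():
    """Rejection-sample p with density prop. to (p(1-p))^(eps-1) on [lo, hi]."""
    fmax = (lo * (1 - lo)) ** (eps - 1)   # density is largest at the ends
    while True:
        p = random.uniform(lo, hi)
        if random.random() < (p * (1 - p)) ** (eps - 1) / fmax:
            return p

freqs = [sample_freq() for _ in range(M)]
# Effects of genotypes (aa, aA, AA): complete dominance either way
codings = [(-alpha, alpha, alpha) if random.random() < H
           else (-alpha, -alpha, alpha) for _ in range(M)]

def trait(freqs, codings):
    """Trait of one individual: two alleles per locus, effects summed."""
    z = 0.0
    for p, effects in zip(freqs, codings):
        genotype = (random.random() < p) + (random.random() < p)  # 0, 1 or 2
        z += effects[genotype]
    return z

z = trait(freqs, codings)
assert all(lo <= p <= hi for p in freqs)
assert abs(z) <= M * alpha     # trait bounded by the sum of |effects|
```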
We first simulated a neutral model.
Figure~\ref{change in mean and variance} illustrates how the different
components of the trait values change over fifty generations of neutral
evolution. Recall that we always use the same realisation of the pedigree.
For each replicate, we take an independent sample of allelic types
at time zero. For each individual in the pedigree we evaluate the additive
and dominance components $A$ and $D$ and then in each generation we
calculate the mean and
variance of these quantities across the $30$ individuals in the population.
This is only intended to give some feeling for
the ways in which the components fluctuate through time.
Of course the infinitesimal model is only providing a prediction for the \emph{distribution} of
trait values within families; a single realisation will see substantial contributions to trait
values from linkage disequilibrium (cf.\ the toy example in Section~\ref{mendelian inheritance}
and Theorem~\ref{shared parent theorem}).
In the following
figures we compare these quantities to the detailed predictions of the
infinitesimal model.
The top row in Figure~\ref{change in mean and variance}
is a single replicate, while the bottom is the
average over three hundred replicates. On the left we have the mean of the
additive and dominance components and their sum; on
the right we have plotted the variance components.
For a single replicate,
there is indeed a substantial contribution from linkage disequilibrium.
When we
plot just the genic components (that is, the sum of the variances at each locus,
ignoring the contribution from linkage disequilibrium),
as expected, the picture is much smoother
and we see that the predictions of the infinitesimal model are a good fit
to the values obtained by averaging over $300$ replicates. Since
linkage disequilibrium will dissipate rapidly, halving in each generation, it is the genic
component that determines the long term evolution.
All components are measured relative to the base population.
In practice, in natural populations, one does not have access to
the ancestral population and so one measures
components relative to the current population.
This amounts to a change of reference (Hill et al.~2006).
\nocite{hill/barton/turelli:2006}
We do not do this in our setting as it would result in different
variance components for every replicate.
\begin{figure}
\centerline{\includegraphics[width=5in]{Fig1.pdf}}
\caption{
Changes of the mean and variance over $50$ generations of
neutral evolution. The top row
shows a single replicate, whilst the bottom row shows the average over
$300$ replicates. The left
column shows the means ($G = A + D$, $A$, $D$; black, blue, red),
whilst the right column shows the
variance components ($V_G$, $V_A$, $V_D$, $V_{A,D}$; black, blue, red, purple).
On the right, solid lines show the
total variances and covariance, whilst the dashed lines show the genic
component. These differ
through the contribution of linkage disequilibrium, which generates
substantial variation. The
genic component changes smoothly, as expected with a large number
($M = 1000$) of loci. Simulations
are made on a single pedigree with $30$ individuals; variance components
are measured relative
to the base population.
}
\label{change in mean and variance}
\end{figure}
In Figure~\ref{D vs F}
we explore the relationship between the dominance deviation and inbreeding.
Since we use the same pedigree for all our experiments, each individual
is characterised by a single $F_{ii}$ (the probability of identity of
the two alleles at a given locus). For each of 1000 replicates (that is,
independent samples of allelic types for the
individuals in generation zero), we calculated the dominance deviation for
each individual in the pedigree.
The plot in Figure~\ref{D vs F}
shows the dominance deviation
averaged over those $1000$ replicates for each individual in
the pedigree. Thus there are $30$ points in each generation, one for
each individual in the population.
As expected, the mean of the
dominance component decreases in proportion to $F_{ii}$,
$\overline{D}=-0.53 F_{ii}$ (recall that $\iota=-0.53$ for our base
population).
\begin{figure}
\centerline{
\includegraphics[width=4in]{Fig2.pdf}}
\caption{The relation between the dominance deviation and
the probability of identity of the two alleles within an individual.
There is one point for the average over $1000$ replicates for
each of the thirty individuals in generations
$5$, $10$, $20$, $40$ (black, blue, purple, red). (Recall that the
pedigree is fixed, so identities are the same for each replicate.)
The mean of $D$ decreases as $\iota F_{ii} = -0.53 F_{ii}$
(black line).
}
\label{D vs F}
\end{figure}
Figure~\ref{covariance A D} shows how the (co)variance of $A$ and
$D$ depends on identity $F_{ii}$ for pairs of individuals in
the pedigree. As in Figure~\ref{D vs F},
for each individual in the pedigree, $A$ and $D$ are calculated for
each of the 1000 replicates; Figure~\ref{covariance A D} shows the
variances and covariances of the resulting values for each of the
thirty individuals in generations $5$, $10$, $20$ and $40$ and these
are compared to the theoretical predictions.
Note that since in the biallelic case $\sigma_D^2=\iota^*$, the
expression~(\ref{variance dominance deviation})
for the variance of the dominance component reduces to
$$\sigma_D^2\big(1-F_{ii}^2\big)+\sigma_{DI}^2F_{ii}.$$
\begin{figure}
\centerline{\includegraphics[width=4in]{Fig3.pdf}}
\caption{The variance and covariance of $A$ and $D$ versus identity
$F_{ii}$ for
individuals in the pedigree. As in Figure~\ref{D vs F},
there are $30$ points in each generation,
one corresponding to each of the thirty individuals in the population.
Generations $5$, $10$, $20$, $40$ (black, blue, purple, red).
}
\label{covariance A D}
\end{figure}
Next we consider the variances of the residuals $R$ and $S$ within families.
One hundred pairs of parents were chosen at random from the population, and
from each, 1000 offspring were generated. This was repeated for ten replicates
made with the same pedigree and the same set of parents; within family
variances were then averaged over replicates.
In Figure~\ref{R and S}, in each plot there are 100 points, one for each
pair of parents.
The two lines correspond to least square regression (blue) and
theoretical predictions (red) which can be read off from
equations~(\ref{star one})-(\ref{star three}).
In particular, we use the notation
$V_R=\sigma_A^2(1+F_{W})/2$, where
$F_W=(F_{i[1]i[1]}+F_{i[2]i[2]})/2$ (the within-individual identity
averaged over parents $1$ and $2$);
$$V_S=\frac{\sigma_{DI}^2}{4}\big(3F_{12}-F_{1122}-F_{112}-F_{122}\big)+
\frac{\sigma_D^2}{4}\bigg(3-F_{11}-F_{22}-F_{1122}-\tilde{F}_{1122}
-\frac{3}{2}\tilde{F}_{1212}\bigg);$$
and
$$V_{R,S}=\frac{\sigma_{ADI}}{2}\bigg(F_{12}-\frac{(F_{112}+F_{122})}{2}\bigg).$$
We use the notation
$$F_{(3)}=
\frac{(F_{112}+F_{122})}{2}.$$
\begin{figure}
\centerline{\includegraphics[width=6in]{Fig4.pdf}}
\caption{The variance and covariance within families between the
residual additive and dominance deviations $R$ and $S$.
One hundred pairs of parents were chosen at random from the ancestral population
and from each, one
thousand offspring were generated. The within family variances obtained
in this way were averaged over ten replicates (with the same pedigree
and parents). Each of the 100 points in each plot corresponds to one
pair of parents. The five outliers are families produced by selfing.
The blue lines show a least-squares regression; the red lines are the
theoretical predictions (see the main text). The two lines exactly coincide in
the plot on the right.
}
\label{R and S}
\end{figure}
The full force of our theoretical results is that even if we condition on
the trait values of parents, the within family distribution of their
offspring will consist of two normally distributed components and, in
particular, the variance components will be independent of the trait
values of the parents.
We test this by imposing strong truncation selection on the population.
We retain the same pedigree relatedness, but working down the pedigree,
each individual's genotype is determined by generating two possible
offspring from its parents and retaining the one with the larger trait
value.
In Figure~\ref{selected vs neutral} we compare the results with simulations
of the neutral population. Dashed lines are for the neutral simulations, solid
ones for the simulation with selection.
For the population under selection, we see an
immediate drop in the total genetic variation, caused by the strong
selection; there is significant negative linkage disequilibrium between individual
loci, as predicted by Bulmer~(1971).
\nocite{bulmer:1971}
The blue curves show the additive component. We see that about
one third of the variance is dominance variance.
The bottom row shows that the genic components are hardly affected by selection,
as predicted by the infinitesimal model. With or without selection, the variance
components change as a result of inbreeding.
\begin{figure}
\centerline{
\includegraphics[width=4in]{Fig5.pdf}}
\caption{Comparison between a neutral population and one subject to
truncation selection. Top row: change in means relative to the
initial value ($\overline{G}=\overline{A+D}$, $\overline{A}$,$\overline{D}$;
black, blue, red); middle: solid lines are
variances, including linkage disequilibria
($\mathtt{Var}(A+D)$,
$\mathtt{Var}(A)$,
$\mathtt{Var}(D)$,
$\mathtt{Cov}(A,D)$; black, blue, red, purple). Dashed lines are
for the neutral simulations, whilst solid lines show results for truncation selection.
The bottom row shows the changes in the genic variances over time against the
predictions of the infinitesimal model.
The values are averages over 300 replicates for the neutral case, 1000 for the selected case, made with the same pedigree. There are $M=1000$ loci. Selection is made within families; for each
offspring, two individuals are generated from the corresponding parents,
and the one with the larger trait value
retained.
}
\label{selected vs neutral}
\end{figure}
Finally, Figure~\ref{rate of convergence figure}
compares the variance components at 50 generations for neutral simulations
with those with truncation selection
as the number of loci increases from $M=100$ to
$M=10^4$. Replicate simulations were generated as
in Figure~\ref{selected vs neutral}. Under the infinitesimal model,
these components should take the same values with and without selection.
This is reflected in the simulations, with the covariance between the
additive and dominance effects being the slowest to settle down to
the infinitesimal limit.
\begin{figure}
\centerline{
\includegraphics[width=4.5in]{Fig6.pdf}}
\caption{Convergence of the variance components at 50 generations,
as the number of loci increases from
$M=100$ to $M=10^4$. Simulations with 50\% truncation selection are compared
with neutral simulations (solid, dashed lines).
The replicate simulations were generated as
in Figure~\ref{selected vs neutral} (see main text).
}
\label{rate of convergence figure}
\end{figure}
\section{The infinitesimal model with dominance as a limit of Mendelian
inheritance}
\label{mendelian inheritance}
In this section we turn to the justification of our model as a limit of
a model of Mendelian inheritance as the number $M$ of loci tends to infinity.
Our work is an extension of that of Abney et al.~(2000),
\nocite{abney/mcpeek/ober:2000}
which in turn builds on Lange~(1978).
\nocite{lange:1978}
The distinctions here are that we explicitly
model the component of the trait value that is shared by all
individuals in a family
separately from the part that segregates within that family; we identify
the effect on each of these components of conditioning on knowing the trait
values of the parents of the family; and we estimate the error that we are
making in taking the normal approximation, thus providing information on when
the infinitesimal approximation breaks down.
The fact that the genetic component of trait values within families is
normally distributed is a consequence of the Central Limit Theorem.
That this remains valid even when we condition on the trait values of
the parents stems from the fact that knowing the trait value of an
individual actually provides very little information about the
allelic state at any particular locus. This in turn is because, typically,
there
are a large number of different genotypes that are consistent with a
given phenotype.
In Barton et al.~(2017), this was illustrated
through a simple example which can be found
on p.402 of Fisher~(1918), which concerned an additive trait in
a haploid population.
\nocite{fisher:1918}
\nocite{barton/etheridge/veber:2017}
Here we adapt that example to the model for which we performed our
numerical experiments.
Suppose then that we have $M$ biallelic loci. We denote the alleles at
locus $l$ by $a_l$ and $A_l$. The contributions to the trait of the
three genotypes $a_la_l$, $a_lA_l$ and $A_lA_l$ are
$-\alpha$, $-\alpha$, $\alpha$ respectively with probability
$\frac{1}{2}-\frac{2}{\sqrt{M}}$ and they are
$-\alpha$, $\alpha$, $\alpha$ with probability
$\frac{1}{2}+\frac{2}{\sqrt{M}}$.
The effect size $\alpha=1/\sqrt{M}$.
For simplicity, in contrast to our numerical experiments, we suppose that
the probabilities of genotypes $a_la_l$, $a_lA_l$, $A_lA_l$ are
$1/4$, $1/2$, $1/4$ respectively.
Now suppose that
we observe the trait value to be $k/\sqrt{M}$.
What is the conditional probability that the genotype
at locus $l$, which we denote by $\chi^1_l\chi^2_l$, is $A_lA_l$?
For definiteness, we take $M$ and $k$ both to
be even and $l=1$.
First consider the probability that the contribution to the trait
value from locus $1$ is $+1/\sqrt{M}$. Let us write $p_{+}$ for the
(unconditional) probability that the contribution from locus $1$
is $1/\sqrt{M}$, that is
$$p_{+}=\frac{1}{4}+\frac{1}{2}\bigg(\frac{1}{2}+\frac{2}{\sqrt{M}}\bigg)
=\frac{1}{2}\bigg(1+\frac{1}{\sqrt{M}}\bigg),$$
and $p_{-}=1-p_{+}$.
Let us write $\Psi_l/\sqrt{M}$ for the contribution to the trait
from locus $l$. We have
\begin{eqnarray*}
\frac{\mathbb P\left[\left. \sum_{l=1}^M\Psi_l=k\right| \Psi_1=1\right]}{\mathbb P\left[\sum_{l=1}^M\Psi_l=k\right]}
&=& \frac{\mathbb P\left[\sum_{l=2}^M\Psi_l=k-1\right]}{\mathbb P\left[\sum_{l=1}^M\Psi_l=k\right]}
\\
&=&\frac{p_{+}^{(M+k-2)/2}p_{-}^{(M-k)/2}}{p_+^{(M+k)/2}p_{-}^{(M-k)/2}}\,
\frac{\binom{M-1}{(M+k-2)/2}}{\binom{M}{(M+k)/2}}
\\&=& \left(1+\frac{k}{M}\right)\frac{1}{2p_+}
\\&=& \left(1+\frac{k}{M}\right)\frac{1}{(1+1/\sqrt{M})}.
\end{eqnarray*}
An application of Bayes' rule then gives
\begin{eqnarray*}
\mathbb P\left[\chi^1_1=A_1, \chi^2_{1}=A_1\left|
\sum_{l=1}^M\frac{\Psi_l}{\sqrt{M}}=\frac{k}{\sqrt{M}}\right.\right]
&=&\frac{ \mathbb P\left[\left. \sum_{l=1}^M\Psi_l=k\right| \Psi_1=1\right]
}{\mathbb P\left[\sum_{l=1}^M\Psi_l=k\right]}\mathbb P\left[\chi^1_1=A_1, \chi^2_1=A_1\right]
\\
&=&
\left(1+\frac{k}{M}\right)\frac{1}{(1+1/\sqrt{M})}
\mathbb P\left[\chi^1_1=A_1, \chi^2_1=A_1\right].
\end{eqnarray*}
Similarly,
\begin{equation*}
\mathbb P\left[\chi^1_1=a_1, \chi^2_{1}=a_1\left|
\sum_{l=1}^M\frac{\Psi_l}{\sqrt{M}}=\frac{k}{\sqrt{M}}\right.\right]
=
\left(1-\frac{k}{M}\right)\frac{1}{(1-1/\sqrt{M})}
\mathbb P\left[\chi^1_1=a_1, \chi^2_1=a_1\right],
\end{equation*}
and
\begin{multline*}
\mathbb P\left[\chi^1_1=a_1, \chi^2_{1}=A_1\left|
\sum_{l=1}^M\frac{\Psi_l}{\sqrt{M}}=\frac{k}{\sqrt{M}}\right.\right]
\\=
\left\{\left(1+\frac{k}{M}\right)\frac{(1/2+2/\sqrt{M})}{(1+1/\sqrt{M})}
+\left(1-\frac{k}{M}\right)\frac{(1/2-2/\sqrt{M})}{(1-1/\sqrt{M})}\right\}
\mathbb P\left[\chi^1_1=a_1, \chi^2_1=A_1\right].
\end{multline*}
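The chain of equalities above is exact for finite $M$, not merely asymptotic. A quick numerical check, for one arbitrary choice of even $M$ and $k$:

```python
from math import comb, isclose, sqrt

M, k = 100, 8                        # M and k even, as in the text
p_plus = 0.5 * (1 + 1 / sqrt(M))

def prob_sum(m, s, p):
    """P[sum of m iid +/-1 steps equals s], with P(step = +1) = p."""
    if (m + s) % 2:
        return 0.0
    up = (m + s) // 2
    return comb(m, up) * p ** up * (1 - p) ** (m - up)

# P[sum over loci 2..M equals k-1] / P[sum over all M loci equals k]
ratio = prob_sum(M - 1, k - 1, p_plus) / prob_sum(M, k, p_plus)
assert isclose(ratio, (1 + k / M) / (2 * p_plus), rel_tol=1e-12)
```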
In view of the Central Limit Theorem,
we would expect a `typical' value of $k$ to be on the
order of $\sqrt{M}$; conditioning has only perturbed the
probability that $\Psi_1=1$ by a factor $k/M+\mathcal{O}(1/\sqrt{M})$,
which we expect
to be of order $1/\sqrt{M}$.
In the purely additive case, which corresponds to taking
$p_+=p_{-}=1/2$, at the extremes of what is possible ($k=\pm M$),
we recover complete
information about the values of $\chi^1_1$, $\chi^2_1$; however,
with dominance that is no longer true.
Notice that
for the difference between the trait value of an individual
and the mean over the population to be order one requires order
$\sqrt{M}$ of the loci to be `non-random', but observing the trait does
not tell us which of the possible $M$ loci these are.
Similarly, performing the entirely analogous calculation for pairs of loci,
and observing that
$$\frac{\binom{M-2}{(M+k-4)/2}}{\binom{M}{(M+k)/2}}
=\frac{1}{4}\left(1+\frac{k}{M}\right)\left(1+\frac{k-1}{M-1}\right),$$
we deduce that,
\begin{align}
& \mathbb P\left[\chi^1_1=A_1, \chi^2_1=A_1; \chi^1_2=A_2, \chi^2_2=A_2\bigg|
\sum_{l=1}^M\frac{\Psi_l}{\sqrt{M}}=\frac{k}{\sqrt{M}}\right] \nonumber\\
& =\left(1+\frac{k}{M}\right)\left(1+\frac{k-1}{M-1}\right)
\frac{1}{(1+1/\sqrt{M})^2}
\mathbb P[\chi^1_1=A_1, \chi^2_1=A_1; \chi^1_2=A_2, \chi^2_2=A_2] \nonumber
\\
& = \mathbb P\left[\chi^1_1=A_1, \chi^2_1=A_1\bigg|
\sum_{l=1}^M\frac{\Psi_l}{\sqrt{M}}=\frac{k}{\sqrt{M}}\right]
\times \mathbb P\left[\chi^1_2=A_2, \chi^2_2=A_2\bigg|
\sum_{l=1}^M\frac{\Psi_l}{\sqrt{M}}=\frac{k}{\sqrt{M}}\right] \nonumber
\\ & \qquad+
\mathbb P[\chi^1_1=A_1, \chi^2_1=A_1; \chi^1_2=A_2, \chi^2_2=A_2]
\left(1+\frac{k}{M}\right)
\frac{1}{(1+1/\sqrt{M})^2}
\left(\frac{k-1}{M-1}-\frac{k}{M}\right).
\label{toy model calculation}
\end{align}
For a `typical' trait value the last term
in~(\ref{toy model calculation})
is order $1/M$.
When we sum over loci, this is enough to give a nontrivial contribution to the trait
value coming from the linkage disequilibrium. However,
although observing the trait of a typical
individual tells us
something about linkage disequilibria, it does not tell us enough to identify
which of the order $M^2$ pairs of loci are in linkage disequilibrium.
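The binomial identity quoted above can be verified directly; the following sketch checks it over a range of even values of $M$ and $k$:

```python
from math import comb, isclose

for M in (10, 100, 1000):
    for k in range(-M + 4, M - 2, 2):    # keep both binomials well defined
        lhs = comb(M - 2, (M + k - 4) // 2) / comb(M, (M + k) // 2)
        rhs = 0.25 * (1 + k / M) * (1 + (k - 1) / (M - 1))
        assert isclose(lhs, rhs, rel_tol=1e-12)
```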
Essentially the same argument will apply to the much more
general models that we develop below.
In particular, for the infinitesimal model to be a
good approximation, the observed parental trait
values must not contain too much information about the allelic
effect at any given locus, which requires that
the parental traits must not be too extreme (corresponding to $k$ in
our toy model being ${\mathcal O}(\sqrt{M})$).
In the additive case, it was enough to control the additional
information that we gained about any particular locus from knowledge
of the trait value in the parents. This is
because, in that case, the variance of the shared contribution
within a family
is zero and independent Mendelian inheritance at each locus ensures that
linkage disequilibria don't distort the variance of the
residual component that
segregates within families. With dominance, we must estimate the
(non-trivial) variance
of the shared component, and for this
we shall see that we need to control the build up of linkage disequilibrium
between pairs of loci.
It will turn out that since all pairs of loci are in linkage equilibrium
in the ancestral population, any {\em given} pair of loci will be approximately
in linkage equilibrium for the order $\sqrt{M}$ generations for which the
infinitesimal approximation is valid.
This does not mean that the linkage disequilibria don't affect the
trait values, but because of the very many different
combinations of alleles in an individual that are consistent with a given
trait, observing the trait tells us very little about the allelic
state at a particular locus. The allele at that locus can only ever
contribute $\mathcal{O}(1/\sqrt{M})$ to the overall trait value.
As the population evolves, and we are able to observe more and more
traits on the pedigree, we gain more and more information about the
allele that an individual carries at a particular locus.
In Barton et al.~(2017),
\nocite{barton/etheridge/veber:2017}
we considered an additive
trait in a population of haploid individuals. In that setting we showed
that for a given individual, one does not gain any more information
about the state at a given locus from looking at the trait values on
the whole of the rest of the pedigree than one does from observing just
the parents of that individual. In our model for diploid individuals
with dominance, this is no longer
the case; observing the trait values of any relatives, no matter how distant,
provides some additional information about the allelic state at a locus.
However, the amount of information gleaned about the allelic state
of an individual from observing new individuals in the pedigree
will decrease in proportion to
the probability of identity, and so for distant relatives in the pedigree is very small;
provided our pedigree is not too inbred,
and trait values are not too extreme, we
can still expect the infinitesimal model to be a good approximation
for order $\sqrt{M}$ generations.
\subsection*{Environmental noise}
Our derivations will depend on two approaches to proving asymptotic
normality. The first, which we apply to the portion $R^i+S^i$ of the
trait values, uses a generalised Central Limit Theorem (which allows
for the summands to have different distributions),
which provides control over the rate of convergence
as $M\rightarrow\infty$.
(It is this control that tells us for how many generations
we can expect the infinitesimal model to be valid.)
However, the Central Limit Theorem guarantees only the rate of
convergence of the cumulative distribution function of the normalised
sum of effects at different loci. Our proofs require convergence of
the corresponding probability density function, which may
not even exist.
To get around this, we
can follow the approach of Barton et al.~(2017)
\nocite{barton/etheridge/veber:2017}
and
make the (realistic) assumption that rather than observing the genetic
component of a trait directly, the observed trait has an environmental
component with a smooth density. This results in the trait distribution having a
smooth density which is enough to guarantee the faster rate of
convergence. In addition to the benefit in terms of regularity of the trait distribution, an environmental noise with a smooth distribution also reinforces the property that observing the trait value gives us very little information on the allelic state at a given locus: a continuum of combinations of genetic and environmental components may have led to the observed trait, in which each given locus contributes an infinitesimal amount. (To ensure sufficient regularity of the trait density, we could also make the assumption that the distribution of allelic effects at a given locus has a smooth
probability density function.)
The second approach, which we use to prove asymptotic normality of the
shared component, is an extension of Stein's method of exchangeable pairs. Once
again in the presence of environmental noise (to ensure that the trait distribution
has a smooth density)
we recover convergence with an error of order
$1/\sqrt{M}$.
If the environmental component is taken to be normally
distributed, then exactly as in Barton et al.~(2017),
\nocite{barton/etheridge/veber:2017}
we can adapt our application of
Theorem~\ref{conditioning multivariate normals}
to write down the conditional distribution of the genetic
components given {\em observed} traits; i.e.~traits distorted by a small
environmental noise, c.f.~Remark~\ref{remark on breeder's equation}.
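The computation alluded to here rests on the standard formula for conditioning a multivariate normal on a subset of its coordinates: if $(W,Z)$ is jointly Gaussian, then $W\mid Z=z$ is Gaussian with mean $\mu_W+\Sigma_{WZ}\Sigma_{ZZ}^{-1}(z-\mu_Z)$ and variance $\Sigma_{WW}-\Sigma_{WZ}\Sigma_{ZZ}^{-1}\Sigma_{ZW}$. A minimal numerical sketch (the covariance matrix and observed values below are illustrative, not derived from the model):

```python
import numpy as np

# Joint covariance of (W, Z1, Z2); illustrative values only.
Sigma = np.array([[2.0, 0.8, 0.6],
                  [0.8, 3.0, 0.5],
                  [0.6, 0.5, 2.5]])
mu = np.array([0.0, 1.0, 1.0])   # (mu_W, mu_Z1, mu_Z2)

# Partition: W is coordinate 0, Z = (Z1, Z2) are coordinates 1 and 2.
S_wz = Sigma[0, 1:]              # Cov(W, Z)
S_zz = Sigma[1:, 1:]             # Var(Z)
z_obs = np.array([2.0, 0.5])     # observed (noisy) parental trait values

cond_mean = mu[0] + S_wz @ np.linalg.solve(S_zz, z_obs - mu[1:])
cond_var = Sigma[0, 0] - S_wz @ np.linalg.solve(S_zz, S_wz)
```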
\subsection*{Assumptions and notation}
Recall that we assume that in generation zero,
the individuals that found the pedigree are unrelated and
sampled from an ancestral population in which all loci are assumed to be
in linkage equilibrium. The allelic states at locus $l$ on the
two chromosomes drawn from the ancestral population will be denoted
$\widehat{\chi}^1_l, \widehat{\chi}^2_l$. They are independent draws from
a distribution on possible allelic states that we denote by
$\widehat{\nu}_l(dx)$.
Without loss of generality,
by replacing $\phi_l(\widehat{\chi}_l^1, \widehat{\chi}_l^2)$ by
$$
\phi_l(\widehat{\chi}_l^1, \widehat{\chi}_l^2) -
\mathbb E[\phi_l(\widehat{\chi}_l^1, \widehat{\chi}_l^2)|\widehat{\chi}_l^1]
-\mathbb E[\phi_l(\widehat{\chi}_l^1, \widehat{\chi}_l^2)|\widehat{\chi}_l^2]
+\mathbb E[\phi_l(\widehat{\chi}_l^1, \widehat{\chi}_l^2)],$$
and observing that the second and third terms on the right hand side are
functions of $\widehat{\chi}_l^1$ and $\widehat{\chi}_l^2$ respectively,
which we may therefore
subsume into $\eta_l(\widehat{\chi}_l)$, we may assume that for any
value $x'$ of the allelic state at locus $l$, the
conditional expectation
\begin{equation}
\label{vanishing conditional distribution phi}
\mathbb E[\phi_l(\widehat{\chi}_l,x')]=\int\phi_l(x,x')\widehat{\nu}_l(dx)=0
=\mathbb E[\phi_l(x',\widehat{\chi}_l)].
\end{equation}
As a consequence, partitioning over the possible values of
$\widehat{\chi}_l^2$,
we have that the cross variation term
\begin{equation}
\label{vanishing cross variation}
\mathbb E[\eta_l(\widehat{\chi}_l^1)\phi_l(\widehat{\chi}_l^1, \widehat{\chi}_l^2)]
=\int \mathbb E[\eta_l(x')\phi_l(x',\widehat{\chi}_l^2)]\widehat{\nu}_l(dx')
=\int \eta_l(x')\mathbb E[\phi_l(x',\widehat{\chi}_l^2)]\widehat{\nu}_l(dx')
=0.
\end{equation}
With this modification of $\phi_l(x,x')$,
\begin{equation}
\label{mean phi zero}
\mathbb E[\phi_l(\widehat{\chi}_l^1,\widehat{\chi}_l^2)]=0.
\end{equation}
Moreover, still without loss of generality, by absorbing the mean into
$\bar{z}_0$, we may assume that
\begin{equation}
\label{mean eta zero}
\mathbb E[\eta_l(\widehat{\chi}_l)]=\int\eta_l(x)\widehat{\nu}_l(dx)=0.
\end{equation}
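The effect of this centring can be verified numerically. The sketch below (the two-allele distribution and the uncentred $\phi$ are illustrative) applies the replacement displayed above and checks that the conditional expectations in~(\ref{vanishing conditional distribution phi}), the mean in~(\ref{mean phi zero}), and the cross term in~(\ref{vanishing cross variation}) all vanish:

```python
import itertools

# Two allelic states with illustrative frequencies nu.
states = [0, 1]
nu = {0: 0.3, 1: 0.7}

def phi_raw(x, y):          # illustrative uncentred dominance effect
    return 1.0 if x == y == 1 else 0.2 * x - 0.1 * y

def E(f):                   # expectation over a single allele drawn from nu
    return sum(f(x) * nu[x] for x in states)

def E2(f):                  # expectation over an independent pair (chi1, chi2)
    return sum(f(x, y) * nu[x] * nu[y]
               for x, y in itertools.product(states, states))

mean = E2(phi_raw)
cond1 = {x: E(lambda y, x=x: phi_raw(x, y)) for x in states}  # E[phi | chi1=x]
cond2 = {y: E(lambda x, y=y: phi_raw(x, y)) for y in states}  # E[phi | chi2=y]

def phi(x, y):              # centred version, as in the replacement above
    return phi_raw(x, y) - cond1[x] - cond2[y] + mean

# Both conditional expectations of the centred phi vanish ...
for x in states:
    assert abs(E(lambda y, x=x: phi(x, y))) < 1e-12
    assert abs(E(lambda y, x=x: phi(y, x))) < 1e-12
# ... hence so do the mean and the cross term with any centred eta.
assert abs(E2(phi)) < 1e-12
eta = lambda x: x - E(lambda z: z)
assert abs(E2(lambda x, y: eta(x) * phi(x, y))) < 1e-12
```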
In this notation, the genetic component of the trait of an individual in
the ancestral population is
$$Z=\bar{z}_0+\frac{1}{\sqrt{M}} \sum_{l=1}^M
\left(\eta_l(\widehat{\chi}_l^1)+
\eta_l(\widehat{\chi}_l^2) +\phi_l(\widehat{\chi}_l^1,
\widehat{\chi}_l^2)\right),$$
and evidently $\mathbb E[Z]=\bar{z}_0$.
We assume that the scaled allelic effects $\eta_l$, $\phi_l$ are bounded;
$|\eta_l|$, $|\phi_l|\leq B$, for all $l$.
We also assume that all the quantities in Table~\ref{QG coefficients}
exist in the limit as $M\to\infty$.
\subsection*{Inheritance}
We now need some notation for Mendelian inheritance.
Recall that $i[1]$ and $i[2]$ are the labels of the parents of individual
$i$ in our pedigree.
Even though we do not distinguish between males and females,
it is convenient to think of
the chromosomes in individual $i$ as being labelled $1$ and $2$,
according to whether they are inherited from $i[1]$ or $i[2]$.
Again following the conventions of Barton et al.~(2017),
\nocite{barton/etheridge/veber:2017}
extended to account for the fact that we are now considering
diploid individuals,
we use independent Bernoulli$(1/2)$ random
variables, $X_l^i$, $Y_l^i$ to determine the inheritance at locus $l$
in individual $i$ in the chromosomes $1$ and $2$ respectively. Thus,
$X_l^i=1$ if the type at locus $l$ on the first chromosome
in individual $i$ is inherited from chromosome $1$ in $i[1]$, and
$Y_l^i=1$ if the type at locus $l$ on the second chromosome
in individual $i$ is inherited from chromosome $1$ in $i[2]$.
In this notation, the trait in individual $i$ in generation $t$
is given by
\begin{eqnarray}
\nonumber
Z^i&=&\bar{z}_0+ {\cal A}^i+{\cal D}^i
\\
\label{remainder R1}
&&+\frac{1}{\sqrt{M}}\sum_{l=1}^M\Bigg\{
\bigg(X_l^i-\frac{1}{2}\bigg)\eta_l(\chi_l^{i[1],1})
+\bigg(\frac{1}{2}-X_l^i\bigg)\eta_l(\chi_l^{i[1],2})
\\
\label{remainder R2}
&& \qquad \qquad+\bigg(Y_l^i-\frac{1}{2}\bigg)\eta_l(\chi_l^{i[2],1})
+\bigg(\frac{1}{2}-Y_l^i\bigg)\eta_l(\chi_l^{i[2],2})\Bigg\}
\\
\label{remainder S1}
&&+\frac{1}{\sqrt{M}}\sum_{l=1}^M\Bigg\{
\bigg(X_l^iY_l^i-\frac{1}{4}\bigg)\phi_l(\chi_l^{i[1],1}, \chi_l^{i[2],1})
+\bigg(X_l^i(1-Y_l^i)-\frac{1}{4}\bigg)\phi_l(\chi_l^{i[1],1}, \chi_l^{i[2],2})
\\
\label{remainder S2}
&&+\bigg((1-X_l^i)Y_l^i-\frac{1}{4}\bigg)\phi_l(\chi_l^{i[1],2}, \chi_l^{i[2],1})
+\bigg((1-X_l^i)(1-Y_l^i)-\frac{1}{4}\bigg)\phi_l(\chi_l^{i[1],2}, \chi_l^{i[2],2})
\Bigg\},
\end{eqnarray}
where
\begin{equation}
\label{defn of A}
{\cal A}^i=\frac{1}{2\sqrt{M}}\sum_{l=1}^M
\left(\eta_l(\chi_l^{i[1],1})+\eta_l(\chi_l^{i[1],2})+
\eta_l(\chi_l^{i[2],1})+\eta_l(\chi_l^{i[2],2})\right)
\end{equation}
and
\begin{equation}
\label{defn of D}
{\cal D}^i=\frac{1}{4\sqrt{M}}\sum_{l=1}^M\left\{
\phi_l(\chi_l^{i[1],1}, \chi_l^{i[2],1})
+\phi_l(\chi_l^{i[1],1}, \chi_l^{i[2],2})
+\phi_l(\chi_l^{i[1],2}, \chi_l^{i[2],1})
+\phi_l(\chi_l^{i[1],2}, \chi_l^{i[2],2})
\right\}.
\end{equation}
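The decomposition above is an exact algebraic identity: for every realisation of the parental alleles and of the inheritance variables $X_l^i,Y_l^i$, the sum $\bar z_0+{\cal A}^i+{\cal D}^i+R^i+S^i$ coincides with the trait value computed directly from the inherited alleles. A minimal sketch checking this (the allelic states, the linear and product forms of $\eta_l,\phi_l$, and the inheritance pattern are all randomly generated for the check, and are illustrative):

```python
import math
import random

random.seed(0)
M, z0 = 200, 1.5

# Illustrative per-locus effects: eta_l(x) = a_l x, phi_l(x, y) = b_l x y.
a = [random.gauss(0, 1) for _ in range(M)]
b = [random.gauss(0, 1) for _ in range(M)]
eta = lambda l, x: a[l] * x
phi = lambda l, x, y: b[l] * x * y

# Parental alleles: p1[l] = (chi_l^{i[1],1}, chi_l^{i[1],2}), likewise p2.
p1 = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(M)]
p2 = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(M)]
# X[l] = 1 means the child's first chromosome copies chromosome 1 of i[1].
X = [random.randint(0, 1) for _ in range(M)]
Y = [random.randint(0, 1) for _ in range(M)]
s = 1 / math.sqrt(M)

# Direct evaluation of the offspring trait from the inherited alleles.
c1 = [p1[l][0] if X[l] else p1[l][1] for l in range(M)]
c2 = [p2[l][0] if Y[l] else p2[l][1] for l in range(M)]
Z_direct = z0 + s * sum(eta(l, c1[l]) + eta(l, c2[l]) + phi(l, c1[l], c2[l])
                        for l in range(M))

# The decomposition z0 + A + D + R + S from the text.
A = (s / 2) * sum(eta(l, p1[l][0]) + eta(l, p1[l][1])
                  + eta(l, p2[l][0]) + eta(l, p2[l][1]) for l in range(M))
D = (s / 4) * sum(phi(l, u, v) for l in range(M) for u in p1[l] for v in p2[l])
R = s * sum((X[l] - 0.5) * (eta(l, p1[l][0]) - eta(l, p1[l][1]))
            + (Y[l] - 0.5) * (eta(l, p2[l][0]) - eta(l, p2[l][1]))
            for l in range(M))
S = s * sum((X[l] * Y[l] - 0.25) * phi(l, p1[l][0], p2[l][0])
            + (X[l] * (1 - Y[l]) - 0.25) * phi(l, p1[l][0], p2[l][1])
            + ((1 - X[l]) * Y[l] - 0.25) * phi(l, p1[l][1], p2[l][0])
            + ((1 - X[l]) * (1 - Y[l]) - 0.25) * phi(l, p1[l][1], p2[l][1])
            for l in range(M))

assert abs(Z_direct - (z0 + A + D + R + S)) < 1e-9
```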
The terms ${\cal A}^i$ and ${\cal D}^i$
are shared by all descendants of the
parents $i[1]$ and $i[2]$.
Our first task will be to calculate
the mean and variance of their sum, conditional on the pedigree ${\cal P}(t)$.
The sums~(\ref{remainder R1})+(\ref{remainder R2})
and~(\ref{remainder S1})+(\ref{remainder S2})
comprise what we previously called $R^i$ and $S^i$ respectively;
each has mean zero. They capture the randomness of Mendelian inheritance.
They are uncorrelated with ${\cal A}^i+{\cal D}^i$.
Our second, simpler, task will be to calculate the variances and covariance
of $R^i$ and $S^i$
in terms of the ancestral population and
identities generated by the pedigree.
These calculations allow us to identify the mean and variance of
the parts ${\cal A}^i+{\cal D}^i$ and
$R^i+S^i$ in terms of the classical quantities of quantitative genetics
in Table~\ref{QG coefficients}.
Since we are assuming unlinked loci,
the asymptotic normality of these quantities when we condition on the
pedigree, but not on the trait values within that pedigree, is an
elementary application of Theorem~\ref{rinott clt}, a generalised
Central Limit Theorem which allows for non-identically distributed summands.
In Barton et al.~(2017),
\nocite{barton/etheridge/veber:2017}
we showed that in the purely additive case, the vector
$(R^i)_{i=1}^{N_t}$ which determines the joint distribution of
the trait values within families in generation $t$ (recalling that
in the additive case $S^i=0$),
is asymptotically a multivariate normal, even when we condition not just
on the pedigree relatedness of the individuals in generation $t$, but
also on knowing the observed
trait values of all individuals in the pedigree up to
generation $t-1$, which we denote by $\widetilde{Z}(t-1)$.
Our main result extends this to include
dominance, at least under the assumption that the ancestral population was
in linkage equilibrium.
With dominance, the expression for the mean and
variance-covariance matrix of the multivariate normal
$Z^1,\ldots ,Z^{N_t}$ conditioned on the pedigree up to generation $t$
and some
collection of the observed trait values of individuals in that pedigree
up to generation $t-1$ is a sum of the quantities of
classical quantitative genetics in Table~\ref{QG coefficients}, weighted by
four-way identities and deviations of trait values from the mean.
In principle, they can be read off from
Theorem~\ref{conditioning multivariate normals}.
We will focus on proving that
conditional on knowing just
the trait values of the parents of individual $i$
and the pedigree, the components $({\cal A}^i+{\cal D}^i)$ and $(R^i+S^i)$
are both
asymptotically normal, but we explain why our proof allows us to extend to the
case in which we also know trait values of other individuals.
The important (and surprising) point is that, given the pedigree relationships
between the parents and classical coefficients of quantitative genetics
for a base population (assumed to be in linkage equilibrium), knowing the
traits of the parents distorts the distribution of their offspring in an
entirely predictable way.
In particular, this is what we mean when we say
that the infinitesimal model continues to hold even with
dominance.
The extra challenge compared to the additive case concerns the shared
part ${\cal A}^i+{\cal D}^i$: for $R^i+S^i$, Mendelian inheritance ensures
independence of the summands corresponding to different loci even after
conditioning on trait values, whereas the terms in ${\cal A}^i+{\cal D}^i$ become
(weakly) dependent under that conditioning, and proving a Central Limit
Theorem becomes more involved.
\subsection*{Main results}
We assume throughout our calculations that the trait values that we
observe, and therefore on which we condition, are the sum of a genetic
component and an independent environmental component; that is,
the observed trait value is
$$\widetilde{Z}^j:=Z^j+E^j,$$
where, for convenience, the $\{E^j\}$ are independent $N(0,\sigma_E^2)$-valued
random variables.
We suppose that the
environmental noise is shared by individuals in a family (so we can think
of it as part of the component ${\cal A}^i+{\cal D}^i$ of the trait
value, whose distribution therefore also has a smooth density).
We write $N_t$ for the number of individuals in the population in generation
$t$, $\big(Z_t^1,\ldots , Z_t^{N_t}\big)$ for the corresponding vector
of trait values,
and ${\cal P}(t)$ for the pedigree up to and including generation~$t$.
A simple application of the Central Limit Theorem gives that
$$\left.\Big(Z_t^1,\ldots ,Z_t^{N_t}\Big)\right| {\cal P}(t)$$
is asymptotically distributed as a multivariate normal random variable
as $M\to\infty$. More precisely, let $(\beta_1,\beta_2,\ldots, \beta_{N_t})
\in \mathbb R^{N_t}$, and write
$Z_\beta=\sum_{i=1}^{N_t}\beta_i Z_t^i$, then
using Theorem~\ref{rinott clt},
$$\left|\mathbb P\left[\frac{Z_{\beta}-\mathbb E[Z_{\beta}]}{\sqrt{\mathtt{Var}(Z_\beta)}}
\leq z\right]-{\cal N}(z)\right|
\leq\frac{C}{\sqrt{M}\sqrt{\mathtt{Var}(Z_\beta)}}
\left(1+\frac{\widetilde{C}}{\mathtt{Var}(Z_\beta)}\right),$$
for suitable constants $C,\,\widetilde{C}$ (which can be made explicit),
where ${\cal N}(z)$ is the cumulative distribution function for
a standard normal random variable. The
mean and variance of $Z_\beta$ can be read off from
equations~(\ref{mean Z}), (\ref{covariance Z}), and~(\ref{variance Z}).
Our main results concern the components of the trait values of
offspring when
we condition on the (observed) trait values of their parents.
The following result follows in essentially the same way as the
additive case of Barton et al.~(2017).
\nocite{barton/etheridge/veber:2017}
\begin{thm}
\label{convergence of residuals}
The conditioned residuals
$(R^i+S^i)|{\cal P}(t), \widetilde{Z}^{i[1]}, \widetilde{Z}^{i[2]}$
are asymptotically normally distributed, with an error of
order $1/\sqrt{M}$. More precisely, for all $z\in\mathbb R$,
\begin{align}
& \left|\mathbb P\left[\left.\frac{R^i+S^i}{\sqrt{\mathtt{Var}(R^i+S^i)}}
\leq z\right| {\cal P}(t), \widetilde{Z}^{i[1]},
\widetilde{Z}^{i[2]}\right]-{\cal N}(z)\right|
\nonumber\\ & \leq
\frac{C'}{\sqrt{M \mathtt{Var}(R^i+S^i)}}\left(1+\frac{\widetilde{C'}}{\mathtt{Var}(R^i+S^i)}\right)
\nonumber\\
& \quad \times \left(1+
C\big(\mathtt{Var}(\widetilde{Z}^{i[1]}),\mathtt{Var}(\widetilde{Z}^{i[2]}),
|\widetilde{Z}^{i[1]}-\mathbb E[\widetilde{Z}^{i[1]}|{\cal P}(t-1)]|,
|\widetilde{Z}^{i[2]}-\mathbb E[\widetilde{Z}^{i[2]}|{\cal P}(t-1)]|\big)\right),
\end{align}
where
\begin{align}
C&\big(\mathtt{Var}(\widetilde{Z}^{i[1]}),\mathtt{Var}(\widetilde{Z}^{i[2]}),
|\widetilde{Z}^{i[1]}-\mathbb E[\widetilde{Z}^{i[1]}|{\cal P}(t-1)]|,
|\widetilde{Z}^{i[2]}-\mathbb E[\widetilde{Z}^{i[2]}|{\cal P}(t-1)]|\big) \nonumber
\\
& =
C''\frac{|\widetilde{Z}^{i[1]}-\mathbb E[\widetilde{Z}^{i[1]}|{\cal P}(t-1)]|}
{\sqrt{\mathtt{Var}(\widetilde{Z}^{i[1]})}}+
C''\frac{|\widetilde{Z}^{i[2]}-\mathbb E[\widetilde{Z}^{i[2]}|{\cal P}(t-1)]|}
{\sqrt{\mathtt{Var}(\widetilde{Z}^{i[2]})}}
\nonumber \\
& \quad +
C'''\frac{1}{\sqrt{\mathtt{Var}(\widetilde{Z}^{i[1]})}
\, p\big(\mathtt{Var}(\widetilde{Z}^{i[1]}), |Z^{i[1]}-\mathbb E[Z^{i[1]}|{\cal P}(t-1)]|\big)}
\left(1+\frac{1}{\mathtt{Var}(\widetilde{Z}^{i[1]})}\right)
\nonumber\\ & \quad +
C'''\frac{1}{\sqrt{\mathtt{Var}(\widetilde{Z}^{i[2]})}
\,p\big(\mathtt{Var}(\widetilde{Z}^{i[2]}), |Z^{i[2]}-\mathbb E[Z^{i[2]}|{\cal P}(t-1)]|\big)}
\left(1+\frac{1}{\mathtt{Var}(\widetilde{Z}^{i[2]})}\right),
\end{align}
and we have used $p(\sigma^2,x)$ to denote the density at $x$ of a mean
zero normal random variable with variance $\sigma^2$.
The constants $C'$, $\widetilde{C'}$, $C''$, $C'''$
depend only on the bound $B$ on the scaled allelic
effects. The variances in the expressions
above are all calculated conditional on ${\cal P}(t-1)$, but not on
observed parental trait values.
\end{thm}
Put simply, the normal approximation is good to an error of order
$1/\sqrt{M}$;
the constant in the error term will be large, meaning that the approximation
will be poor, if the within-family variance somewhere in
the pedigree is small or if the observed trait values are very different
from their expected values.
Just as in the additive case, we could prove an entirely analogous
result when we condition on any number of observed
trait values in the pedigree, except that with dominance this is
at the expense of picking up an extra
term in the error for each observed trait value on which we condition.
The justification required for this is provided by
Appendix~\ref{appendix: accumulation of information}.
What is at first sight more surprising is that the shared
component of the
trait value within a family is also asymptotically normally distributed,
even when we condition on observed parental trait values.
Our proof uses the assumption that the environmental noise is shared by
individuals within a family;
in this way we can guarantee that the shared component of the observed
trait value also has a smooth density.
We are only going to prove the result for the shared component of
a family in generation one that was produced by selfing.
\begin{thm}
\label{shared parent theorem}
Let $W={\cal A}+{\cal D}+E$ denote the shared component of the trait
value in a family
in generation one. Let $h$ be an
absolutely continuous function with $\|h'\|<\infty$, then
\begin{equation}
\label{result with h}
\left|\mathbb E[h(W)|i[1]=i[2], \widetilde{Z}^{i[1]}]-{\cal N}_{\mu_W,\sigma_W^2}(h)\right|
\leq \frac{C\|h'\|}{\sqrt{M}},
\end{equation}
where $\mu_W$
is given
by~(\ref{mean gen one identical parents}),
and $\sigma_W^2$ is the sum of the variance of the environmental noise and
the expression
in~(\ref{cond variance gen one same parent}).
\end{thm}
\begin{remark}
\begin{enumerate}
\item
Although we only prove that ${\cal A}^i+{\cal D}^i+E^i$ is asymptotically
normal in this special case of an individual in generation one that is
produced by selfing, the same arguments will apply in general. However, the
expressions involved become extremely cumbersome. By considering selfing, we
capture all the complications that arise in later generations (when distinct
parents may nonetheless be related).
\item
We don't record the exact bound on the constant $C$. It takes the same form
as the error function $C$ in Theorem~\ref{convergence of residuals},
except that the constants $C'$, $\widetilde{C'}$, $C''$, $C'''$ depend on the inbreeding
depression $\iota$, as well as the bound $B$ on the scaled allelic effects.
In particular, just as there, the asymptotic normality will break down
if the trait value of the parent is too extreme, or if the variance of the
trait values among offspring is too small.
\item
Since we are assuming that the environmental noise has a smooth density,
convergence in
the sense of~(\ref{result with h})
is sufficient to deduce
that the
cumulative distribution function of ${\cal A}^i+{\cal D}^i+E^i$ converges.
\end{enumerate}
\end{remark}
\subsection*{Strategy of the derivation}
Our first task will be to show that conditional on the pedigree,
the distribution of the trait values
in generation $t$ is approximately multivariate normal (with an
appropriate error bound).
Since Mendelian inheritance ensures that (before we condition on
knowing any of the previous trait values in the pedigree) the allelic
states at different loci are independent, this is a straightforward
application of a generalised Central Limit Theorem (generalised
because the summands are not required to all have the same distribution).
Just as in Barton et al.~(2017), \nocite{barton/etheridge/veber:2017}
we can keep track of the error that we are making
in assuming a normal approximation at each generation.
In this way we see that, under our assumptions, the
infinitesimal model can be expected to be a good approximation
for order $\sqrt{M}$ generations.
The same Central Limit Theorem guarantees that the joint
distribution of $(Z^{i[1]}, Z^{i[2]}, {\cal A}^i+{\cal D}^i)$ is asymptotically
normally distributed
as the number of loci tends to infinity.
This certainly suggests that the conditional distribution of ${\cal A}^i+{\cal D}^i$ given
$Z^{i[1]}$, $Z^{i[2]}$ should be (approximately) normal with
mean and variance predicted by standard results on conditioning
a multivariate normal distribution on some of its marginals
(which we recall in Theorem~\ref{conditioning multivariate normals}).
However, this is not immediate. It is possible that the conditioning forces
the distribution on to the part of our probability space where the normal
approximation breaks down.
To verify that the conditional distribution is asymptotically normal,
we shall show that observing the
trait value of
an individual provides very little information
about their allelic state at any particular locus, or
any particular pair of loci,
and consequently conditioning on parental trait values provides very
little information about allelic states in their offspring.
This is (essentially)
achieved through an application of Bayes' rule, although some care is needed
to control the cumulative error across loci.
We use this to calculate the first and second moments of
${\cal A}^i+{\cal D}^i$ conditional on $\widetilde{Z}^{i[1]}$,
$\widetilde{Z}^{i[2]}$. The fact that they agree
with the predictions of Theorem~\ref{conditioning multivariate normals}
depends crucially on the assumption that dominance is `balanced', in the
sense that the inbreeding depression $\iota$ is well-defined. This
quantity enters not just in the expression for the expected
trait value of inbred individuals, but also in our error bounds,
c.f.~Remark~\ref{where we use inbreeding}.
Of course checking that the first two moments of the
conditional distribution of ${\cal A}^i+{\cal D}^i$ are (approximately)
consistent with
asymptotic normality is not enough to prove
that the conditioned random variable is indeed (approximately) normal.
Moreover, we cannot apply our generalised Central Limit Theorem
to this term.
Instead we use a generalisation of
Stein's method of `exchangeable pairs' (outlined in
Appendix~\ref{generalised CLTS}), which relies on our ability
to control the (weak) dependence between the contributions to
${\cal A}^i+{\cal D}^i$
from different loci that is induced by the conditioning.
We present the details in the case of identical parents (which is
the case in which normality is most surprising) in
Appendix~\ref{convergence to normal}.
We only present our results in the case in which we condition on the
parental traits of a single individual in generation $t$.
Just as in the additive case, this
can be extended to conditioning on any combination of traits in the pedigree
up to generation $t-1$, but the expressions involved become
unpleasantly complex. Instead of writing them out, we content ourselves
with explaining the only step that requires a new argument.
We must show that
knowing the traits of all individuals up to generation $t-1$
does not provide enough information
about the allelic states at any particular locus in an individual
in generation $t$ to destroy the
asymptotic normality of its trait value. This is
justified
in Appendix~\ref{appendix: accumulation of information} using the
fact that, because of Mendelian inheritance,
the amount of information gleaned about an allele carried
by individual $i$ from looking at the trait
value of one of its relatives is proportional to the
probability of identity with that individual as dictated by the
pedigree.
\subsection*{Asymptotic normality conditional on the pedigree}
We first illustrate the application of the generalised
Central Limit Theorem by
showing that in the ancestral population, the distribution
of $(Z_0^1,\ldots ,Z_0^{N_0})$ is multivariate normal with mean
vector $(\bar{z}_0,\ldots ,\bar{z}_0)$ and variance-covariance matrix
$(\sigma_A^2+\sigma_D^2)\,\mathrm{Id}$,
where $\mathrm{Id}$ is the
identity matrix and $\sigma_A^2$ and $\sigma_D^2$ were defined in
Table~\ref{QG coefficients}.
To prove this, it is enough to show that for any choice of
$\beta=(\beta_1,\ldots,\beta_{N_0})\in\mathbb R^{N_0}$,
\begin{equation*}
\sum_{j=1}^{N_0}\beta_jZ^j\rightarrow Z_{\beta},
\end{equation*}
where $Z_{\beta}$ is normally distributed with mean
$\bar{z}_0\sum_{j=1}^{N_0}\beta_j$ and
variance $(\sigma_A^2+\sigma_D^2)\sum_{j=1}^{N_0}\beta_j^2$.
We apply Theorem~\ref{rinott clt}, due to Rinott~(1994), which
provides control of the rate of convergence as $M\rightarrow\infty$.
\nocite{rinott:1994}
It is convenient to write
$\|\beta\|_1=\sum_{j=1}^{N_0}|\beta_j|$ and $ \|\beta\|_2^2=\sum_{j=1}^{N_0}\beta_j^2$.
Let us write
$$\Psi_l= \left(\eta_l(\widehat{\chi}_l^1)+
\eta_l(\widehat{\chi}_l^2) +\phi_l(\widehat{\chi}_l^1,
\widehat{\chi}_l^2)\right),$$
and we abuse notation by writing $\Psi_l^j$ for this quantity in the
$j$th individual in generation zero.
Set
$E_l=\sum_{j=1}^{N_0}\beta_j\Psi_l^j$. Recalling our assumption that all $\eta_l$ and $\phi_l$ are bounded by some constant $B$, so that the sum of the scaled effects at each locus is bounded
by $3B$, we have that $|E_l|$ is bounded
by $3B\|\beta\|_1$ for all $l$.
Moreover, since the individuals that found the pedigree are assumed to be
unrelated and sampled from an ancestral population in which all loci
are in linkage equilibrium, using~(\ref{mean phi zero})
and~(\ref{mean eta zero}), we find that
$$\mathbb E\Bigg[\sum_{l=1}^ME_l\Bigg]=0,\qquad \mathtt{Var}\Bigg(\sum_{l=1}^ME_l\Bigg)
=M\|\beta\|_2^2\left(\sigma_A^2+\sigma_D^2\right).$$
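This variance computation can be checked exactly for a discrete allelic distribution: with $\eta_l$ and $\phi_l$ centred, independence of $\widehat\chi_l^1,\widehat\chi_l^2$ and the vanishing cross terms give $\mathtt{Var}(\Psi_l)=2\mathbb E[\eta_l^2]+\mathbb E[\phi_l^2]$ per locus. In the sketch below (a three-allele model with illustrative effects) we identify the per-locus additive and dominance variances with $2\mathbb E[\eta_l^2]$ and $\mathbb E[\phi_l^2]$; this identification is an assumption of the sketch, not a quotation of Table~\ref{QG coefficients}:

```python
import itertools

states = [0, 1, 2]                      # illustrative three-allele model
nu = {0: 0.5, 1: 0.3, 2: 0.2}

def E(f):
    return sum(f(x) * nu[x] for x in states)

def E2(f):
    return sum(f(x, y) * nu[x] * nu[y]
               for x, y in itertools.product(states, states))

# Centred effects, built as in the `Assumptions and notation' subsection.
eta_raw = {0: 0.0, 1: 1.0, 2: 1.7}
m_eta = E(lambda x: eta_raw[x])
eta = lambda x: eta_raw[x] - m_eta

phi_raw = lambda x, y: 0.5 * (x == y) + 0.1 * x * y   # symmetric, uncentred
c1 = {x: E(lambda y, x=x: phi_raw(x, y)) for x in states}
mean = E2(phi_raw)
phi = lambda x, y: phi_raw(x, y) - c1[x] - c1[y] + mean

# Psi = eta(chi1) + eta(chi2) + phi(chi1, chi2) for one founder, one locus.
var_psi = E2(lambda x, y: (eta(x) + eta(y) + phi(x, y)) ** 2)
sigma_A2 = 2 * E(lambda x: eta(x) ** 2)     # assumed additive variance
sigma_D2 = E2(lambda x, y: phi(x, y) ** 2)  # assumed dominance variance

assert abs(var_psi - (sigma_A2 + sigma_D2)) < 1e-12
```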
Theorem~\ref{rinott clt} then yields
\begin{multline*}
\left|\mathbb P\left[
\frac{\sum_{i=1}^{N_0}\beta_i(Z^i-\bar{z}_0)}{
\|\beta\|_2
\sqrt{\sigma_A^2+\sigma_D^2}}
\leq z\right]
-{\cal N}(z)\right|\leq
\frac{1}{\sqrt{M}\|\beta\|_2\sqrt{\sigma_A^2+\sigma_D^2}}
\Bigg\{\sqrt{\frac{1}{2\pi}}3B\|\beta\|_1
\\
+\frac{16}{\|\beta\|_2\sqrt{\sigma_A^2+\sigma_D^2}}(3B)^2\|\beta\|_1^2
+10\left(\frac{1}{\|\beta\|_2^2(\sigma_A^2+\sigma_D^2)}\right)(3B\|\beta\|_1)^3\Bigg\}.
\end{multline*}
Here ${\cal N}$ is the cumulative distribution function of a standard normal
random variable.
The right hand side can be bounded above by
\begin{equation}
\label{rinott clt bound generation zero}
\frac{C(\|\beta\|_1)}{\|\beta\|_2
\sqrt{M}\sqrt{\sigma_A^2+\sigma_D^2}}
\left(1+\frac{1}{\|\beta\|_2^2(\sigma_A^2+\sigma_D^2)}\right),
\end{equation}
for a suitable constant $C$.
In particular, taking $\beta_k=0$ for $k\neq j$ and $\beta_j=1$, we read
off that the
rate of convergence to the normal distribution of $Z_0^j$ as the number
of loci tends to
infinity is order $1/\sqrt{M}$. Note that the normal approximation is
poor if the variance $\sigma_A^2+\sigma_D^2$ is small.
Exactly the same argument shows that the
distribution of the trait values
$(Z^1,\ldots ,Z^{N_t})$ of the individuals in generation $t$ converges
to that of a multivariate normal, with
mean vector $(\bar{z}_0+\iota F_{11},\ldots ,\bar{z}_0+\iota F_{N_tN_t})$ and
variance-covariance matrix
determined by equations~(\ref{covariance Z})
and~(\ref{variance Z}).
Our proof of asymptotic normality of ${\cal A}^i+{\cal D}^i$ conditional
on the observed trait
values of parents will exploit that the joint distribution of
$({\cal A}^i+{\cal D}^i, Z^{i[1]}, Z^{i[2]})$ is asymptotically normal, also with an
error of order $1/\sqrt{M}$. This time we show that
$\beta_1Z^{i[1]}+\beta_2 Z^{i[2]}+\beta_3({\cal A}^i+{\cal D}^i)$ is asymptotically
normal for every choice of the vector $(\beta_1,\beta_2,\beta_3)\in\mathbb R^3$.
We apply Theorem~\ref{rinott clt}
with
$$\widetilde{E}_l=\beta_1\Psi_l(i[1])
+\beta_2\Psi_l(i[2])+\beta_3 \Phi_l^i$$
where
$$\Psi_l(i[1])=\eta_l(\chi_l^{i[1],1})+\eta_l(\chi_l^{i[1],2})
+\phi_l(\chi_l^{i[1],1},\chi_l^{i[1],2}),$$
with a symmetric expression for $\Psi_l(i[2])$, and
\begin{multline*}
\Phi^i_l=\frac{1}{2}\left(\eta_l(\chi_l^{i[1],1})+
\eta_l(\chi_l^{i[1],2})+\eta_l(\chi_l^{i[2],1})
+\eta_l(\chi_l^{i[2],2})\right)
\\
+\frac{1}{4}\left(\phi_l(\chi_l^{i[1],1},\chi_l^{i[2],1})+
\phi_l(\chi_l^{i[1],1},\chi_l^{i[2],2})+
\phi_l(\chi_l^{i[1],2},\chi_l^{i[2],1})+
\phi_l(\chi_l^{i[1],2},\chi_l^{i[2],2})\right).
\end{multline*}
Theorem~\ref{rinott clt} then shows that the difference between the
cumulative distribution function of $\beta_1Z^{i[1]}+\beta_2 Z^{i[2]}
+\beta_3\big({\cal A}^i+{\cal D}^i\big)$ and that of a normal
random variable with the corresponding mean and variance can be
bounded by~(\ref{rinott clt bound generation zero}) with
$\|\beta\|_2^2\big(\sigma_A^2+\sigma_D^2\big)$ replaced by
$\mathtt{Var}\Big(\beta_1Z^{i[1]}+\beta_2Z^{i[2]}+\beta_3\big({\cal A}^i
+{\cal D}^i\big)\Big)$, which
can be deduced from the expressions for the variance and
covariance of $\Psi_l^{i[1]}$, $\Psi_l^{i[2]}$ and $\Phi_l^i$
that are calculated in Appendix~\ref{QG derivations} and
recorded in~(\ref{covariance Z}), (\ref{variance Z}),
and~(\ref{expression for C([1])}).
\subsubsection*{Conditioning on trait values of the parents}
We suppose that for each $i$, we know the parents of the individual $i$ and
their trait values $Z^{i[1]}$ and $Z^{i[2]}$.
We shall treat the shared components $({\cal A}^i+{\cal D}^i)$ and the residuals
$(R^i+S^i)$ separately. Both will converge to multivariate normal
distributions which are independent of one another.
Mendelian inheritance ensures that the contributions to $R^i+S^i$ from
different loci are independent and so normality becomes an easy
consequence of Theorem~\ref{rinott clt}
once we have shown that the information gleaned from knowing the
trait values only perturbs the distribution by order
$1/\sqrt{M}$. This is checked in~(\ref{diagonal terms behave well})
and the proof then
closely resembles the proof in the additive setting of
Barton et al.~(2017) and so we omit the details.
\nocite{barton/etheridge/veber:2017}
The proof that $({\cal A}^i+{\cal D}^i)$ is normal is more involved as once we condition
on the trait values in the parents,
the contributions
$\Phi_l^i$ for $l=1,\ldots , M$ will all be (weakly) correlated.
Our approach uses an extension of
Stein's method of exchangeable pairs which we recall
in Appendix~\ref{generalised CLTS} and
apply to our setting in Appendix~\ref{convergence to normal}.
This calculation is more delicate, but the key is that our
conditioning induces very weak dependence between loci. The
deviation from normality is controlled by
$$\frac{1}{\mathbb P\Big[\widetilde{Z}^{i[1]}=z_1, \widetilde{Z}^{i[2]}=z_2,
{\cal A}^i+{\cal D}^i+E^i=w\Big]}\frac{\partial}{\partial z_1}
\mathbb P\Big[\widetilde{Z}^{i[1]}=z_1, \widetilde{Z}^{i[2]}=z_2,
{\cal A}^i+{\cal D}^i+E^i=w\Big],$$
and the corresponding quantity for the partial derivative with respect
to $z_2$ (both to be interpreted as ratios of densities)
evaluated at $\widetilde{Z}^{i[1]}$, $\widetilde{Z}^{i[2]}$ respectively.
(We recall that $\widetilde{Z}$ denotes observed trait value.)
The normal
approximation will break down if the trait values are too extreme
or if the pedigree is too inbred.
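For a one-dimensional Gaussian density the corresponding quantity is the score $p'(z)/p(z)=-(z-\mu)/\sigma^2$, which grows linearly in the deviation of $z$ from its mean; this is the sense in which extreme observed trait values degrade the approximation. A minimal sketch (illustrative parameters) checking the closed form against a finite difference:

```python
import math

def normal_pdf(z, mu, var):
    """Density of a normal random variable with mean mu and variance var."""
    return math.exp(-(z - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

mu, var = 1.0, 2.0
for z in (-3.0, 0.5, 4.0):
    h = 1e-6
    # Central finite difference for (d/dz p) / p.
    score_fd = (normal_pdf(z + h, mu, var) - normal_pdf(z - h, mu, var)) / (
        2 * h * normal_pdf(z, mu, var))
    score_exact = -(z - mu) / var   # grows linearly as z becomes extreme
    assert abs(score_fd - score_exact) < 1e-6
```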
\section{Discussion}
\label{discussion}
The essence of the infinitesimal model is that in the absence of selection,
the distribution of a polygenic trait across a pedigree is multivariate
normal. Necessarily, if some individuals are selected (that is, if we
condition on their trait values), there can be an arbitrary distortion away
from Gaussian across the population. However, conditional on parental values and on the pedigree,
offspring still follow a Gaussian distribution. Therefore, the classic
theory for neutral evolution of quantitative traits can be used to predict
evolution, even under selection.
In Section~\ref{setting out the model}
we showed that this infinitesimal
limit holds with dominance, at least over timescales of the order of the
square root of the number of loci.
Our work provides some mathematical justification for the ubiquity of the
Gaussian, and for the empirical success of quantitative genetics, a success which is
remarkable given the complex interactions that underlie most traits.
The limit is not universal: a non-linear transformation of a Gaussian trait
leads to a non-Gaussian distribution, and failure of the infinitesimal model.
This is because epistatic and dominance interactions then have a systematic
direction, which violates the terms of the Central Limit Theorem.
(Recall that in our toy example in Section~\ref{mendelian inheritance},
we needed a `balance' in the dominance component, which we see
reflected in our main results in the requirement that $\iota$ be
well-defined.)
Nevertheless, if the population is restricted to a range that is narrow
relative to the extremes that are genetically possible, then the
infinitesimal model may be accurate, even if the genotype-phenotype map is
not linear. This links to another way to understand our results: if very
many genotypes can generate the same phenotype, then knowing the trait
value gives us negligible information about individual allele frequencies.
To put this another way, the infinitesimal limit implies that selection on
individual alleles is weak relative to random drift ($N_e s \sim 1$), so
that neutral evolution at the genetic level is barely perturbed by selection
on the trait (Robertson, 1960).
\nocite{robertson:1960}
If traits truly evolve in this infinitesimal regime, then it will be
impossible to find any genomic trace of their response to selection. This
extreme view is contradicted by finding an excess of `signatures' of
selection in candidate genes, though it might nevertheless be that these
signals are generated by alleles with modest $N_es$, such that the
infinitesimal model remains accurate for the trait. Indeed,
Boyle et al.~(2017)
\nocite{boyle/li/pritchard:2017}
argue that the very large numbers of SNPs that are typically implicated in
GWAS for complex traits implies an `omnigenic' view, in which trait variance
is largely due to genes with no obvious functional relation to the trait.
Frequencies of non-synonymous and synonymous mutations suggest that selection
on deleterious alleles is typically much stronger than drift
($N_e s\gg 1$; Charlesworth, 2015).
\nocite{charlesworth:2015}
However, it might still be that selection on the focal trait is comparable
with drift, even if the total selection on alleles is much stronger.
Whether the infinitesimal model accurately describes trait evolution under
such a pleiotropic model is an interesting open question.
In principle, we can simulate the infinitesimal model exactly, by generating
offspring from the appropriate Gaussian distributions. For the additive case,
this is straightforward, since we only need follow the breeding value of each
individual, and the matrix of relationships amongst individuals
(e.g.~Barton \& Etheridge 2011, 2018).
\nocite{barton/etheridge:2011, barton/etheridge:2018}
However, to simulate the infinitesimal model with dominance, we need to track
four-way identities, which is only feasible for small populations
($<30$, say).
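In the additive case, the required recursion is simple enough to sketch in a few lines. The Python fragment below is our own illustration (the function name and interface are hypothetical, not the code of the papers cited above): each offspring is sampled from a Gaussian centred on the midparent breeding value, with segregation variance $(v_0/2)\big(1-(F_p+F_q)/2\big)$, and Wright's numerator relationship matrix is updated by the standard pedigree recursion.

```python
import random

def next_generation(z, A, parents, v0, rng):
    """One generation of the additive infinitesimal model (an illustrative
    sketch, not the authors' code).  z: breeding values; A: Wright's
    numerator relationship matrix for the current generation;
    parents[i] = (p, q) are the parents of offspring i.  Offspring i is
    Gaussian around the midparent value, with segregation variance
    (v0/2)*(1 - (F_p + F_q)/2), where F_x = A[x][x] - 1 is the
    inbreeding coefficient of parent x."""
    n = len(parents)
    z_new = []
    for p, q in parents:
        seg = 0.5 * v0 * (1.0 - 0.5 * ((A[p][p] - 1.0) + (A[q][q] - 1.0)))
        z_new.append(0.5 * (z[p] + z[q]) + rng.gauss(0.0, max(seg, 0.0) ** 0.5))
    # standard recursion for the relationship matrix of the offspring
    A_new = [[0.0] * n for _ in range(n)]
    for i, (p, q) in enumerate(parents):
        A_new[i][i] = 1.0 + 0.5 * A[p][q]
        for j, (r, s) in enumerate(parents):
            if i != j:
                A_new[i][j] = 0.25 * (A[p][r] + A[p][s] + A[q][r] + A[q][s])
    return z_new, A_new

# two full sibs of unrelated, non-inbred parents: relationship 1/2,
# segregation variance v0/2
z, A = next_generation([1.0, 3.0], [[1.0, 0.0], [0.0, 1.0]],
                       [(0, 1), (0, 1)], 1.0, random.Random(1))
```

Tracking the four-way identities needed for dominance would require an analogous recursion over the nine states $\Delta_1,\ldots ,\Delta_9$ for every pair of individuals, which is what limits the feasible population size.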
We have not set out the extension of the infinitesimal model to structured
populations in detail. In principle, this just requires that we track the
identities within and between the various classes of individual. One
motivation for the present theoretical work was to extend our infinitesimal
model of `evolutionary rescue' (Barton \& Etheridge, 2018)
\nocite{barton/etheridge:2018}
to include inbreeding depression and partial selfing. This should be
feasible, provided that we do not need to track identities between specific
individuals, but instead, group individuals according to the time since
their most recent outcrossed ancestor, an approach applied successfully by
Sachdeva (2019).
\nocite{sachdeva:2019}
\nocite{lande/porcher:2015}
Already, Lande and Porcher~(2015) applied the infinitesimal model to a
deterministic model of partial selfing, whilst Roze~(2016)
\nocite{roze:2016}
analysed an explicit multi-locus model of partial selfing, allowing for
dominance and drift, assuming that all loci are equivalent, and that linkage
disequilibria are weak.
One of the most obviously unreasonable assumptions of the classical
infinitesimal model, and of the extension described here, is that there are
infinitely many unlinked loci. Santiago~(1998)
\nocite{santiago:1998}
showed how loose linkage could be approximated by averaging over pairwise
linkage disequilibria. In the additive case, the infinitesimal model can
be defined precisely for a linear genome, by assuming that very many alleles
are spread uniformly over the genome (Sachdeva \& Barton,~2018).
\nocite{sachdeva/barton:2018}
The same approach should extend to include dominance, though analysis, or
even exact simulation, would be challenging.
The main value of the infinitesimal model may be to show that trait evolution
depends on only a few macroscopic parameters; even if we still make explicit
multi-locus simulations, this focusses attention on those key parameters,
and gives confidence in the generality of our results. Quantitative genetics
has developed quite separately from population genetics. Although the
theoretical synthesis half a century ago (e.g.~Robertson, 1960; Bulmer,~1971;
\nocite{robertson:1960, bulmer:1971, lande:1975}
Lande,~1975) stimulated much subsequent work (empirical as well as
theoretical), the failure to find a practicable approximation for the
evolution of the genetic variance (e.g.~Turelli \& Barton,~1994)
\nocite{turelli/barton:1994}
was an obstacle to further progress. The infinitesimal model provides a
justification for neglecting the intractable effects of selection on the
variance components, and treating them as evolving solely due to drift and
migration. This approach may be helpful for understanding evolution in
the short and even medium term.
\begin{appendix}
\section{Conditioning on the pedigree}
\label{QG derivations}
In this section, we illustrate how to recover the expressions for the mean and
variance of the two parts $({\cal A}^i+{\cal D}^i)$ and $(R^i+S^i)$ of the trait
of individual $i$ from identity coefficients
of its parents $i[1]$ and
$i[2]$ and the classical coefficients of Table~\ref{QG coefficients}.
Covariances between families are calculated in the same way.
We also calculate the covariance between $({\cal A}^i+{\cal D}^i)$ and $Z^{i[1]}$
and $Z^{i[2]}$ (given the pedigree) which will be important for
establishing the effect of conditioning on the trait values of the
parents.
Although these expressions are
well known, it seems to be hard to find an explicit
derivation such as that presented here.
Note that at this stage we are only conditioning on the pedigree, not
on the observed trait values, and the results in this section do not require us
to assume the presence of an environmental noise term.
\subsection*{Notation}
Throughout this section, all quantities are calculated conditional on
the pedigree; we suppress this conditioning in our notation.
\subsection*{Mean and variance of ${\cal A}^i+{\cal D}^i$}
The contribution to the trait $Z^{i}$ from the $l$th locus
is determined by the four alleles $\chi_l^{i[1],1}$,
$\chi_l^{i[1],2}$, $\chi_l^{i[2],1}$ and $\chi_l^{i[2],2}$ and
the independent Bernoulli random variables $X_l^i$ and $Y_l^i$.
The mean and variance of $({\cal A}^i+{\cal D}^i)$ and $(R^i+S^i)$ will
depend on which combinations of these alleles are identical.
First we introduce some notation for the nine possible identity
classes. In Figure~\ref{picture for all identities}, the two copies
of each gene in each individual are represented by two (horizontally
adjacent) dots. Lines between dots represent identity by descent.
It is convenient to think of the genes within an individual
as being ordered.
\begin{figure}
\centerline{\includegraphics[width=4in]{allidentities.pdf}}
\caption{All possible four way identities. The dots represent the
four genes across the two parents (each parent corresponding to
a row) and lines indicate identity (c.f.~Abney et al., 2000).
}
\label{picture for all identities}
\end{figure}
\nocite{abney/mcpeek/ober:2000}
Let us define
\begin{multline}
\label{defn of Phil}
\Phi(l)=\frac{1}{2}\left(\eta_l(\chi_l^{i[1],1})+
\eta_l(\chi_l^{i[1],2})+\eta_l(\chi_l^{i[2],1})
+\eta_l(\chi_l^{i[2],2})\right)
\\
+\frac{1}{4}\left(\phi_l(\chi_l^{i[1],1},\chi_l^{i[2],1})+
\phi_l(\chi_l^{i[1],1},\chi_l^{i[2],2})+
\phi_l(\chi_l^{i[1],2},\chi_l^{i[2],1})+
\phi_l(\chi_l^{i[1],2},\chi_l^{i[2],2})\right),
\end{multline}
\begin{equation}
\Psi_l(i[1])=\eta_l(\chi_l^{i[1],1})+
\eta_l(\chi_l^{i[1],2})+
\phi_l(\chi_l^{i[1],1},\chi_l^{i[1],2}),
\end{equation}
and
\begin{equation}
\Psi_l(i[2])=\eta_l(\chi_l^{i[2],1})+
\eta_l(\chi_l^{i[2],2})+
\phi_l(\chi_l^{i[2],1},\chi_l^{i[2],2}).
\end{equation}
For each of the nine possible identity classes between $i[1]$ and
$i[2]$, we calculate two quantities from which the mean and
variance of $({\cal A}^i+{\cal D}^i)$ will readily follow.
$$
\begin{array}{ccc}
\mbox{identity state}&\mathbb E\big[\frac{1}{M}\sum_{l=1}^M\Phi(l)^2|\Delta_{\cdot}\big]&
\mathbb E\big[\frac{1}{\sqrt{M}}\sum_{l=1}^M\Phi(l)|\Delta_{\cdot}\big]
\\
\\
\hline\hline
\\
\Delta_1&2\sigma_A^2+2\sigma_{ADI}+\sigma_{DI}^2+\iota^* & \iota
\\
\\
\Delta_2 & \sigma_A^2+\sigma_D^2 & 0
\\
\\
\Delta_3 & \frac{5}{4}\sigma_A^2+\frac{3}{4}\sigma_{ADI}+
\frac{\sigma_{DI}^2+\iota^*}{4}+\frac{\sigma_D^2}{4} &
\frac{\iota}{2}
\\
\\
\Delta_4 & \frac{3}{4}\sigma_A^2+\frac{1}{2}\sigma_D^2 &
0
\\
\\
\Delta_5 & \frac{5}{4}\sigma_A^2+\frac{3}{4}\sigma_{ADI}+
\frac{\sigma_{DI}^2+\iota^*}{4}+\frac{\sigma_D^2}{4} & \frac{\iota}{2}
\\
\\
\Delta_6 & \frac{3}{4}\sigma_A^2+\frac{1}{2}\sigma_D^2 & 0\\
\\
\Delta_7 & \sigma_A^2+\frac{\sigma_{DI}^2+\iota^*}{8}+
\frac{\sigma_{ADI}}{2}+\frac{\sigma_D^2}{4} & \frac{\iota}{2} \\
\\
\Delta_8 & \frac{3}{4}\sigma_A^2+\frac{\sigma_{ADI}}{4}+
\frac{\sigma_{DI}^2+\iota^*}{16}+\frac{3}{16}\sigma_D^2 &
\frac{1}{4}\iota\\
\\
\Delta_9 & \frac{\sigma_A^2}{2}+\frac{\sigma_D^2}{4} & 0
\end{array}
$$
To see where these expressions come from, consider for example
identity state $\Delta_3$, with, say,
$\chi_l^1:=\chi_l^{i[1],1}= \chi_l^{i[1],2}= \chi_l^{i[2],1}\neq
\chi_l^{i[2],2}=:\chi_l^2$, where `$=$' here means identical
by descent. Then,
using~(\ref{vanishing cross variation})--(\ref{mean eta zero}),
\begin{eqnarray*}
\mathbb E\left[\left.\frac{1}{M}\sum_{l=1}^M\Phi(l)^2\right|\Delta_3\right]&=&
\frac{1}{M}\sum_{l=1}^M\mathbb E\left[\left(\frac{3\eta_l(\chi_l^1)+\eta_l(\chi_l^2)}{2}+
\frac{2\phi_l(\chi_l^1, \chi_l^1)+2\phi_l(\chi_l^1,\chi_l^2)}{4}\right)^2\right]
\\
&=&
\frac{5}{4}\sigma_A^2+\frac{3}{4}\sigma_{ADI}+\frac{1}{4}(\sigma_{DI}^2+\iota^*)
+\frac{1}{4}\sigma_D^2.
\end{eqnarray*}
The following quantities can be calculated in the same way. They
are important for calculating the
covariance between the trait values of parent and offspring
(in particular the covariance between $({\cal A}^i+{\cal D}^i)$ and $Z^{i[1]}$
and $Z^{i[2]}$) which
will dictate the change in distribution of the trait values within
families arising from conditioning on knowing the traits of the
parents. We record them here for later reference.
$$
\begin{array}{ccc}
\mbox{identity state}&
\mathbb E\big[\frac{1}{M}\sum_{l=1}^M\Phi(l)\Psi_l(i[1])|\Delta_{\cdot}\big]&
\mathbb E\big[\frac{1}{M}\sum_{l=1}^M\Phi(l)\Psi_l(i[2])|\Delta_{\cdot}\big]\\
\\
\hline\hline\\
\Delta_1&
2\sigma_A^2+2\sigma_{ADI}+\sigma_{DI}^2+\iota^*&
2\sigma_A^2+2\sigma_{ADI}+\sigma_{DI}^2+\iota^*\\
\\
\Delta_2 &
\sigma_A^2+\frac{\sigma_{ADI}}{2}
&
\sigma_A^2+\frac{\sigma_{ADI}}{2}\\
\\
\Delta_3 &
\frac{3}{2}\sigma_A^2+\frac{\sigma_{DI}^2+\iota^*}{2}+
\frac{5}{4}\sigma_{ADI} &
\sigma_A^2+\frac{\sigma_{ADI}}{4}+\frac{\sigma_D^2}{2}
\\
\\
\Delta_4 & \sigma_A^2+\frac{1}{2}\sigma_{ADI} & \frac{\sigma_A^2}{2}\\
\\
\Delta_5 & \sigma_A^2+\frac{\sigma_{ADI}}{4}+\frac{\sigma_D^2}{2} &
\frac{3}{2}\sigma_A^2+\frac{\sigma_{DI}^2+\iota^*}{2}+
\frac{5}{4}\sigma_{ADI}\\
\\
\Delta_6 & \frac{\sigma_A^2}{2} & \sigma_A^2 +\frac{1}{2}\sigma_{ADI}\\
\\
\Delta_7 & \sigma_A^2+\frac{\sigma_{ADI}}{4}+\frac{\sigma_D^2}{2}
& \sigma_A^2+\frac{\sigma_{ADI}}{4}+\frac{\sigma_D^2}{2}\\
\\
\Delta_8 &
\frac{3}{4}\sigma_A^2+\frac{1}{8}\sigma_{ADI}+\frac{1}{4}\sigma_D^2&
\frac{3}{4}\sigma_A^2+\frac{1}{8}\sigma_{ADI}+\frac{1}{4}\sigma_D^2\\
\\
\Delta_9 & \frac{1}{2}\sigma_A^2&
\frac{1}{2}\sigma_A^2
\end{array}
$$
We can express two and three way identities between the parents in
terms of the four way identities $\Delta_1,\ldots ,\Delta_9$.
Recall
that we write, for example, $F_{11}$ for the probability of identity of
the two alleles in $i[1]$ and $F_{12}$ for the probability of
identity of two alleles, one selected at random from $i[1]$ and
one from $i[2]$. In terms of the nine identity states we have
\begin{eqnarray*}
F_{11}&=& \mathbb P[\Delta_1]+\mathbb P[\Delta_2]+\mathbb P[\Delta_3]+\mathbb P[\Delta_4]\\
F_{22}&=&\mathbb P[\Delta_1]+\mathbb P[\Delta_2]+\mathbb P[\Delta_5]+\mathbb P[\Delta_6]\\
F_{12}&=&\mathbb P[\Delta_1]+\frac{1}{2}\left(\mathbb P[\Delta_3]+\mathbb P[\Delta_5]
+\mathbb P[\Delta_7]\right)+
\frac{1}{4}\mathbb P[\Delta_8]\\
F_{112}&=&\mathbb P[\Delta_1]+\frac{1}{2}\mathbb P[\Delta_3]\\
F_{122}&=&\mathbb P[\Delta_1]+\frac{1}{2}\mathbb P[\Delta_5]\\
F_{1122}&=&\mathbb P[\Delta_1]\\
\widetilde{F}_{1122}&=&\mathbb P[\Delta_2]\\
\widetilde{F}_{1212}&=&\mathbb P[\Delta_7].
\end{eqnarray*}
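These relations are linear in the state probabilities, so they are conveniently encoded as a small function for checking special cases. The Python sketch below is our own (the function name is hypothetical); exact rationals avoid any rounding.

```python
from fractions import Fraction

def identity_coefficients(P):
    """Map a distribution P over the nine four-way identity states
    (P[0] = P[Delta_1], ..., P[8] = P[Delta_9]) to the pairwise and
    higher identity coefficients listed in the text (a sketch; the
    function name is ours)."""
    F11 = P[0] + P[1] + P[2] + P[3]
    F22 = P[0] + P[1] + P[4] + P[5]
    F12 = P[0] + Fraction(1, 2) * (P[2] + P[4] + P[6]) + Fraction(1, 4) * P[7]
    F112 = P[0] + Fraction(1, 2) * P[2]
    F122 = P[0] + Fraction(1, 2) * P[4]
    F1122 = P[0]
    Ft1122 = P[1]   # \widetilde F_{1122}
    Ft1212 = P[6]   # \widetilde F_{1212}
    return dict(F11=F11, F22=F22, F12=F12, F112=F112, F122=F122,
                F1122=F1122, Ft1122=Ft1122, Ft1212=Ft1212)

# all mass on Delta_1 (everything identical): all plain coefficients are 1
one = [Fraction(0)] * 9
one[0] = Fraction(1)
assert identity_coefficients(one)["F12"] == 1
```

With all mass on $\Delta_9$ (no identities) every coefficient vanishes, as it should.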
Combining the above, we find
\begin{align*}
\mathbb E\left[\frac{1}{M}\sum_{l=1}^M \Phi(l)\Psi_l(i[1])\right]
= &\ \frac{\sigma_A^2}{2}\left(1+F_{11}+2F_{12}\right)
+\frac{\sigma_{ADI}}{2}\left(F_{11}+F_{12}+2F_{122}\right) \nonumber\\
&+\sigma_D^2\left(F_{12}-F_{112}\right)+
\left(\sigma_{DI}^2+\iota^*\right)F_{112},
\end{align*}
with a symmetric expression for
$\frac{1}{M}\sum_{l=1}^M\mathbb E\left[\Phi(l)\Psi_l(i[2])\right]$.
Similarly,
$$\mathbb E[({\cal A}^i+{\cal D}^i)]=\frac{1}{\sqrt{M}}\sum_{l=1}^M\mathbb E[\Phi(l)]=\iota F_{12}$$
and
\begin{align*}
\frac{1}{M}\sum_{l=1}^M\mathbb E[\Phi(l)^2]= &\
\frac{\sigma_A^2}{2}\left(1+\frac{F_{11}+F_{22}}{2}+2F_{12}\right)
+\sigma_{ADI}\left(F_{12}+\frac{F_{112}+F_{122}}{2}\right)\\
& +\frac{\sigma_{DI}^2+\iota^*}{4}\left(F_{12}+F_{112}+F_{122}+F_{1122}\right)\\
& +\frac{\sigma_D^2}{4}\left(1-F_{12}+F_{11}-F_{112}+
F_{22}-F_{122}+\widetilde{F}_{1122}+\frac{1}{2}\widetilde{F}_{1212}\right)
+\frac{1}{4}\iota^*\widetilde{F}_{1212},
\end{align*}
from which, since for $l\neq m$ we are assuming
$\mathbb E[\Phi(l)\Phi(m)]=\mathbb E[\Phi(l)]\mathbb E[\Phi(m)]$,
\begin{equation}
\nonumber
\frac{1}{M}\mathbb E\Bigg[\mathop{\sum_{l=1}^M\sum_{m=1}^M}_{l\neq m}
\Phi(l)\Phi(m)\Bigg]=(\iota F_{12})^2-\iota^*F_{12}^2,
\end{equation}
and the expression~(\ref{variance A+D unconditioned})
for the variance of $({\cal A}^i+{\cal D}^i)$ follows.
\begin{remark}
\label{source of discrepancy from WL}
Walsh \& Lynch~(2018) give an expression for the variance when there is
linkage disequilibrium. In their notation,
$\tilde{f}$ is the probability of identity at two distinct loci.
Then for $l\neq m$,
$$\mathbb E[\Phi(l)\Phi(m)]=\tilde{f}\,\mathbb E[\phi_l(\widehat{\chi}_l,\widehat{\chi}_l)]\,\mathbb E[\phi_m(\widehat{\chi}_m,\widehat{\chi}_m)],$$
so that our expression for
\begin{equation}
\nonumber
\frac{1}{M}\mathbb E\Bigg[\mathop{\sum_{l=1}^M\sum_{m=1}^M}_{l\neq m}
\Phi(l)\Phi(m)\Bigg]
\end{equation}
will be multiplied by
$(\tilde{f}/F_{12}^2)$, resulting (when we subtract
$\mathbb E[{\cal A}^i+{\cal D}^i]^2$)
in an overall expression
of $(\tilde{f}-F_{12}^2)\iota^2-\tilde{f}\iota^*$
in place of $-\iota^*F_{12}^2$.
Correcting for this by adding
$(\tilde{f}-F_{12}^2)(\iota^2-\iota^*)$ to our
expression~(\ref{variance Z}) for the
variance of $Z^i$ (for which we recall that $F_{12}$ becomes $F_{ii}$),
we recover the expression of Walsh \& Lynch~(2018).
\end{remark}
\subsection*{The covariance between ${\cal A}^i+{\cal D}^i$ and
${\cal A}^j+{\cal D}^j$.}
To understand the expression~(\ref{covariance A plus D})
for the covariance between ${\cal A}^i+{\cal D}^i$ and
${\cal A}^j+{\cal D}^j$ for $i\neq j$,
consider
\begin{multline*}
\mathbb E\Bigg[
\Bigg\{
\frac{
\left(\eta_l(\chi_l^{i[1],1})+\eta_l(\chi_l^{i[1],2})+
\eta_l(\chi_l^{i[2],1})+\eta_l(\chi_l^{i[2],2})\right)}{2}
\\+
\frac{
\phi_l(\chi_l^{i[1],1}, \chi_l^{i[2],1})
+\phi_l(\chi_l^{i[1],1}, \chi_l^{i[2],2})
+\phi_l(\chi_l^{i[1],2}, \chi_l^{i[2],1})
+\phi_l(\chi_l^{i[1],2}, \chi_l^{i[2],2})
}{4}\Bigg\}
\\
\Bigg\{
\frac{
\left(\eta_l(\chi_l^{j[1],1})+\eta_l(\chi_l^{j[1],2})+
\eta_l(\chi_l^{j[2],1})+\eta_l(\chi_l^{j[2],2})\right)}{2}
\\+
\frac{
\phi_l(\chi_l^{j[1],1}, \chi_l^{j[2],1})
+\phi_l(\chi_l^{j[1],1}, \chi_l^{j[2],2})
+\phi_l(\chi_l^{j[1],2}, \chi_l^{j[2],1})
+\phi_l(\chi_l^{j[1],2}, \chi_l^{j[2],2})
}{4}\Bigg\}
\Bigg].
\end{multline*}
The sixteen terms given by products of additive effects
correspond to the sixteen different possibilities for the allelic
types at locus $l$ if we choose one allele at random
from individual
$i$ and one from individual $j$, and the contribution to the
expectation will be nonzero precisely if the chosen alleles are identical,
in which case they contribute $\mathbb E[\eta_l(\widehat{\chi}_l)^2]$.
Summing over $l$, the overall contribution of such terms to the
covariance will therefore be $2\sigma_A^2 F_{ij}$.
Similarly, terms involving one factor of $\eta_l$ and one $\phi_l$
will only be non-zero if all evaluated on the same allelic type, hence
the terms multiplied by $F_{iij}$ and $F_{ijj}$ in
equation~(\ref{covariance A plus D}).
Continuing in this way and using that $\mathbb E[{\cal A}^i+{\cal D}^i]=\iota F_{ii}$, we
recover
equation~(\ref{covariance A plus D}).
\subsection*{The residuals $R^i+S^i$}
The corresponding calculations for the mean and variance of the residuals
$R^i+S^i$ follow exactly the same pattern. It is convenient to consider
$R^i$ and $S^i$ separately, and then calculate the covariance. The first of
these, corresponding to the additive part,
is very straightforward, since it depends only on pairwise
identities.
Recall first that
$$
R^i\!=\!\frac{1}{\sqrt{M}}\sum_{l=1}^M\left\{
\big(X_l^i-\frac{1}{2}\big)\eta_l(\chi_l^{i[1],1})
+\big(\frac{1}{2}-X_l^i\big)\eta_l(\chi_l^{i[1],2})
+\big(Y_l^i-\frac{1}{2}\big)\eta_l(\chi_l^{i[2],1})
+\big(\frac{1}{2}-Y_l^i\big)\eta_l(\chi_l^{i[2],2})\right\}.
$$
Since the Mendelian inheritance is independent of the
allelic states, $R^i$ has mean zero; to establish the
variance, we must calculate its square. Since
inheritance is independent at distinct loci, only the diagonal
terms contribute and we find
\begin{eqnarray}
\nonumber
\mathbb E[(R^i)^2]
&=&
\frac{1}{M}\sum_{l=1}^M\mathbb E\Bigg[\Bigg\{
\big(X_l^i-\frac{1}{2}\big)\eta_l(\chi_l^{i[1],1})
+\big(\frac{1}{2}-X_l^i\big)\eta_l(\chi_l^{i[1],2})
\\
\nonumber
&&
\qquad\qquad +\big(Y_l^i-\frac{1}{2}\big)\eta_l(\chi_l^{i[2],1})
+\big(\frac{1}{2}-Y_l^i\big)\eta_l(\chi_l^{i[2],2})\Bigg\}^2
\Bigg]\\
\nonumber
&=&
\frac{1}{4M}\sum_{l=1}^M\mathbb E\left[
(\eta_l(\chi_l^{i[1],1}))^2
+(\eta_l(\chi_l^{i[1],2}))^2
+(\eta_l(\chi_l^{i[2],1}))^2
+(\eta_l(\chi_l^{i[2],2}))^2
\right] \\
\nonumber
&&\qquad\qquad
-
\frac{1}{2M}\sum_{l=1}^M\mathbb E\left[
\eta_l(\chi_l^{i[1],1})
\eta_l(\chi_l^{i[1],2})
+\eta_l(\chi_l^{i[2],1})
\eta_l(\chi_l^{i[2],2})
\right]\\
\nonumber
&=&
\frac{1}{M}\sum_{l=1}^M\mathtt{Var}(\eta_l(\widehat{\chi}_l))
-\frac{1}{2M}\sum_{l=1}^M\left(F_{11}+F_{22}\right)
\mathtt{Var}(\eta_l(\widehat{\chi}_l))
\\
&=&\left(1-\frac{F_{11}+F_{22}}{2}\right)\frac{\sigma_A^2}{2}.
\label{star one}
\end{eqnarray}
This is, of course, exactly the expression we would obtain in the
purely additive case.
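The averaging over the Mendelian variables in the first step above can be checked exactly by enumerating $X_l^i, Y_l^i\in\{0,1\}$: for fixed allelic effects $e_1,\ldots ,e_4$, the single-locus summand satisfies $\mathbb E_{X,Y}[(\cdot)^2]=\big((e_1-e_2)^2+(e_3-e_4)^2\big)/4$, and taking expectations over allelic states then gives~(\ref{star one}). A minimal check in Python (our own; the name is hypothetical):

```python
from fractions import Fraction
from itertools import product

def mean_square_residual(e1, e2, e3, e4):
    """Exact average, over the four equally likely Mendelian outcomes
    (X, Y), of the squared single-locus summand of R^i; e1..e4 stand for
    eta_l evaluated at the four parental alleles."""
    half = Fraction(1, 2)
    total = Fraction(0)
    for X, Y in product((0, 1), repeat=2):
        b = (X - half) * e1 + (half - X) * e2 + (Y - half) * e3 + (half - Y) * e4
        total += b * b
    return total / 4

# agrees with ((e1 - e2)^2 + (e3 - e4)^2) / 4, as used in the derivation
e = [Fraction(k) for k in (3, -1, 2, 5)]
assert mean_square_residual(*e) == ((e[0] - e[1]) ** 2 + (e[2] - e[3]) ** 2) / 4
```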
The second residual, $S^i$, also has mean zero, but its variance will now
involve higher order identities.
Recall that
\begin{align*}
S^i=& \frac{1}{\sqrt{M}}\sum_{l=1}^M\Bigg\{
\big(X_l^iY_l^i-\frac{1}{4}\big)\phi_l(\chi_l^{i[1],1}, \chi_l^{i[2],1})
+\big(X_l^i(1-Y_l^i)-\frac{1}{4}\big)\phi_l(\chi_l^{i[1],1}, \chi_l^{i[2],2})
\\
& \qquad \qquad +\big((1-X_l^i)Y_l^i-\frac{1}{4}\big)\phi_l(\chi_l^{i[1],2}, \chi_l^{i[2],1})
+\big((1-X_l^i)(1-Y_l^i)-\frac{1}{4}\big)\phi_l(\chi_l^{i[1],2}, \chi_l^{i[2],2})
\Bigg\}.
\end{align*}
Once again, since Mendelian inheritance is independent at different loci,
$\mathbb E[(S^i)^2]$ will be entirely determined by the diagonal terms.
Note that for independent Bernoulli (parameter $1/2$)
random variables $X$ and $Y$,
$$\mathbb E\left[\bigg(XY-\frac{1}{4}\bigg)^2\right]
=\mathbb E\left[\frac{1}{2}XY+\frac{1}{16}\right]
=\frac{3}{16},$$
and
$$
\mathbb E\left[\bigg(XY-\frac{1}{4}\bigg)\bigg(X(1-Y)-\frac{1}{4}\bigg)\right]
=-\frac{1}{16}.$$
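These two moments can be verified by exact enumeration over the four equally likely values of $(X,Y)$; a short Python check (our own, using exact rationals):

```python
from fractions import Fraction
from itertools import product

def bernoulli_mean(f):
    """Exact expectation of f(X, Y) for independent Bernoulli(1/2) X, Y."""
    return sum(Fraction(f(x, y)) for x, y in product((0, 1), repeat=2)) / 4

q = Fraction(1, 4)
assert bernoulli_mean(lambda x, y: (x * y - q) ** 2) == Fraction(3, 16)
assert bernoulli_mean(lambda x, y: (x * y - q) * (x * (1 - y) - q)) == Fraction(-1, 16)
```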
So, taking expectations over the variables $X_l^i$ and $Y_l^i$, we find
\begin{align}
\label{raw equation for Si squared}
& \mathbb E\big[(S^i)^2\big] \nonumber\\
& =\frac{3}{16M}\sum_{l=1}^M\mathbb E\left[
\phi_l(\chi_l^{i[1],1},\chi_l^{i[2],1})^2+
\phi_l(\chi_l^{i[1],1},\chi_l^{i[2],2})^2+
\phi_l(\chi_l^{i[1],2},\chi_l^{i[2],1})^2
+\phi_l(\chi_l^{i[1],2},\chi_l^{i[2],2})^2
\right] \nonumber\\
&\quad -\frac{2}{16M}\sum_{l=1}^M\mathbb E\Bigg[
\phi_l(\chi_l^{i[1],1},\chi_l^{i[2],1})
\phi_l(\chi_l^{i[1],1},\chi_l^{i[2],2})
+\phi_l(\chi_l^{i[1],1},\chi_l^{i[2],1})
\phi_l(\chi_l^{i[1],2},\chi_l^{i[2],1})
\nonumber\\
& \qquad \qquad\qquad +\phi_l(\chi_l^{i[1],1},\chi_l^{i[2],1})
\phi_l(\chi_l^{i[1],2},\chi_l^{i[2],2})
+\phi_l(\chi_l^{i[1],1},\chi_l^{i[2],2})
\phi_l(\chi_l^{i[1],2},\chi_l^{i[2],1})
\nonumber\\
& \qquad \qquad\qquad+\phi_l(\chi_l^{i[1],1},\chi_l^{i[2],2})
\phi_l(\chi_l^{i[1],2},\chi_l^{i[2],2})
+\phi_l(\chi_l^{i[1],2},\chi_l^{i[2],1})
\phi_l(\chi_l^{i[1],2},\chi_l^{i[2],2})
\Bigg].
\end{align}
The first term depends only on pairwise identities and we see
immediately that it is
\begin{equation*}
\frac{3}{4M}F_{12}\sum_{l=1}^M
\mathbb E[\phi_l(\widehat{\chi}_l,\widehat{\chi}_l)^2]
+\frac{3}{4M}(1-F_{12})\sum_{l=1}^M
\mathbb E[\phi_l(\widehat{\chi}^1_l,\widehat{\chi}^2_l)^2]
=
\frac{3}{4}F_{12}(\sigma_{DI}^2+\iota^*)+\frac{3}{4}(1-F_{12})\sigma_D^2.
\end{equation*}
The second term in~(\ref{raw equation for Si squared}) is most easily
calculated conditional on identity class.
Let us write $\Xi(l)$
for the summand corresponding to locus $l$.
$$
\begin{array}{cc}
\mbox{identity state}&\mathbb E\big[\frac{1}{M}\sum_{l=1}^M\Xi(l)|\Delta_{\cdot}\big]
\\
\\
\hline\hline
\\
\Delta_1& \frac{3}{4}\left(\sigma_{DI}^2+\iota^*\right)\\
\\
\Delta_2 & \frac{3}{4}\sigma_D^2\\
\\
\Delta_3 & \frac{1}{8}\left(\sigma_D^2+\sigma_{DI}^2+\iota^*\right)
\\
\\
\Delta_4 & \frac{1}{4}\sigma_D^2
\\
\\
\Delta_5 & \frac{1}{8}\left(\sigma_D^2+\sigma_{DI}^2+\iota^*\right)
\\
\\
\Delta_6 & \frac{1}{4}\sigma_D^2
\\
\\
\Delta_7 & \frac{1}{8}\left(\sigma_D^2+\iota^*\right)
\\
\\
\Delta_8 & 0
\\
\\
\Delta_9 & 0
\end{array}
$$
Using our notation for identities, this becomes
\begin{align*}
&-\frac{1}{4}\left(F_{1122}+F_{122}+F_{112}\right)\left(\sigma_{DI}^2
+\iota^*\right) -\frac{1}{4}\left(F_{11}-F_{112}+F_{22}-F_{122}+\widetilde{F}_{1122}
+\frac{1}{2}\widetilde{F}_{1212}\right)\sigma_D^2 \nonumber \\
& \qquad -\frac{1}{4}\iota^*\widetilde{F}_{1212}.
\end{align*}
Thus
\begin{align}
\label{star two}
\mathbb E\left[(S^i)^2\right] = &\ \frac{1}{4}\left(3F_{12}-F_{1122}-F_{122}-F_{112}\right)
\left(\sigma_{DI}^2+\iota^*\right) -\frac{1}{4}\iota^*\widetilde{F}_{1212} \nonumber
\\
& +\frac{1}{4}\left(3(1-F_{12})-(F_{22}-F_{122})-(F_{11}-F_{112})
-\widetilde{F}_{1122}-\frac{1}{2}\widetilde{F}_{1212}\right)\sigma_D^2.
\end{align}
\subsection*{The covariance of $R^i$ and $S^i$.}
Since $R^i$ has mean zero, it suffices to calculate
$\mathbb E[R^iS^i]$.
We need to establish the mean of
\begin{multline}
\label{summand for covariances}
\left\{ \big(X-\frac{1}{2}\big)\eta_l(\chi_l^{i[1],1})
+\big(\frac{1}{2}-X\big)\eta_l(\chi_l^{i[1],2})
+\big(Y-\frac{1}{2}\big)\eta_l(\chi_l^{i[2],1}) +
\big(\frac{1}{2}-Y\big)\eta_l(\chi_l^{i[2],2})\right\}
\\
\times
\Bigg\{XY\phi_l(\chi_l^{i[1],1},\chi_l^{i[2],1})+X(1-Y)
\phi_l(\chi_l^{i[1],1},\chi_l^{i[2],2})+(1-X)Y
\phi_l(\chi_l^{i[1],2},\chi_l^{i[2],1})
\\
+(1-X)(1-Y)\phi_l(\chi_l^{i[1],2},\chi_l^{i[2],2})\Bigg\}.
\end{multline}
We have been able to drop the `$-1/4$' terms in the second bracket
since $\mathbb E[R^i]=0$.
Now
\begin{eqnarray*}
\mathbb E\left[\bigg(X-\frac{1}{2}\bigg)XY\right] &=& \frac{1}{2}\mathbb E[XY]=\frac{1}{8},\\
\mathbb E\left[\bigg(X-\frac{1}{2}\bigg)(1-X)Y\right] &=& -\frac{1}{2}\mathbb E[(1-X)Y]
=-\frac{1}{8},
\end{eqnarray*}
and so the mean of~(\ref{summand for covariances}) is that of
\begin{multline}
\frac{1}{8}\Big\{\big(\eta_l(\chi_l^{i[1],1})
-\eta_l(\chi_l^{i[1],2})\big)\phi_l(\chi_l^{i[1],1},\chi_l^{i[2],1})
+\big(\eta_l(\chi_l^{i[2],1})-\eta_l(\chi_l^{i[2],2})\big)
\phi_l(\chi_l^{i[1],1},\chi_l^{i[2],1})
\\
+\big(\eta_l(\chi_l^{i[1],1})-\eta_l(\chi_l^{i[1],2})\big)
\phi_l(\chi_l^{i[1],1},\chi_l^{i[2],2})
+\big(\eta_l(\chi_l^{i[2],2})-\eta_l(\chi_l^{i[2],1})\big)
\phi_l(\chi_l^{i[1],1},\chi_l^{i[2],2})
\\
+\big(\eta_l(\chi_l^{i[1],2})-\eta_l(\chi_l^{i[1],1})\big)
\phi_l(\chi_l^{i[1],2},\chi_l^{i[2],1})
+\big(\eta_l(\chi_l^{i[2],1})-\eta_l(\chi_l^{i[2],2})\big)
\phi_l(\chi_l^{i[1],2},\chi_l^{i[2],1})
\\
+\big(\eta_l(\chi_l^{i[1],2})-\eta_l(\chi_l^{i[1],1})\big)
\phi_l(\chi_l^{i[1],2},\chi_l^{i[2],2})
+\big(\eta_l(\chi_l^{i[2],2})-\eta_l(\chi_l^{i[2],1})\big)
\phi_l(\chi_l^{i[1],2},\chi_l^{i[2],2})\Big\}.
\end{multline}
Taking expectations (conditional on the pedigree)
and summing over loci, we find
\begin{equation}
\label{star three}
\mathbb E\left[R^iS^i\big|{\cal P}(t)\right]
=\left(F_{12}-\frac{F_{112}+F_{122}}{2}\right)\frac{\sigma_{ADI}}{2}.
\end{equation}
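The two Bernoulli expectations used in this computation can be checked in the same way, by exact enumeration over $(X,Y)$ (an independent sanity check, not part of the derivation):

```python
from fractions import Fraction
from itertools import product

half = Fraction(1, 2)
# enumerate the four equally likely outcomes of (X, Y)
vals = [((x - half) * x * y, (x - half) * (1 - x) * y)
        for x, y in product((0, 1), repeat=2)]
assert sum(a for a, _ in vals) / 4 == Fraction(1, 8)    # E[(X-1/2)XY]
assert sum(b for _, b in vals) / 4 == Fraction(-1, 8)   # E[(X-1/2)(1-X)Y]
```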
Finally, for two distinct parents, we have found that in generation $t$,
conditional on the pedigree up to time $t$,
\begin{align*}
\mathtt{Var}(R^i+S^i)= & \
\left(1-\frac{F_{11}+F_{22}}{2}\right)\frac{\sigma_A^2}{2}
+\frac{1}{4}\left(3F_{12}-F_{112}-F_{122}-F_{1122}\right)
\left(\sigma_{DI}^2+\iota^*\right)
\\ & +\frac{1}{4}\left(3(1-F_{12})-(F_{11}-F_{112})-(F_{22}-F_{122})
-\widetilde{F}_{1122}-\frac{1}{2}\widetilde{F}_{1212}\right)\sigma_D^2 \\
& +\left(F_{12}-\frac{F_{112}+F_{122}}{2}\right)\sigma_{ADI} -\frac{1}{4}\iota^*\widetilde{F}_{1212}.
\end{align*}
We can also read off the result for when the two parents are the same
from this formula.
In that case
$$F_{1122}=F_{11}=F_{22}=F_{112}=F_{122},\quad
F_{12}=\frac{1}{2}(1+F_{11}),\mbox{ and }\widetilde{F}_{1212}=1-F_{11}.$$
Thus $\mathtt{Var}(R^i+S^i)$ reduces to
$$(1-F_{11})\left(\frac{\sigma_A^2}{2}+\frac{3}{8}\big(\sigma_{DI}^2+
\iota^*\big)
+\frac{1}{4}\sigma_D^2+\frac{1}{2}\sigma_{ADI}\right)
-\frac{1}{4}\iota^* .$$
\section{Conditioning multivariate Gaussian vectors}
\label{conditioning normals}
For ease of reference, we record here
a standard result for conditioning multivariate normal
random vectors on their marginal values.
\begin{thm}
\label{conditioning multivariate normals}
Suppose that
$$\begin{bmatrix}
x_A\\ x_B
\end{bmatrix}
\sim {\cal N}\left(\begin{bmatrix} \mu_A \\ \mu_B \end{bmatrix},
\begin{bmatrix} \Sigma_{AA} & \Sigma_{AB}\\
\Sigma_{BA} & \Sigma_{BB}\end{bmatrix}\right).$$
Then
$$x_A | x_B \sim {\cal N} \left(\mu_A+\Sigma_{AB}\Sigma_{BB}^{-1}
(x_B-\mu_B), \Sigma_{AA}-\Sigma_{AB}\Sigma_{BB}^{-1}\Sigma_{BA}
\right).$$
\end{thm}
The proof can be found, for example, in Brockwell \& Davis~(1996)
(Proposition~1.3.1
in Appendix~A).\nocite{brockwell/davis:1996}
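As a quick numerical illustration, the scalar case of Theorem~\ref{conditioning multivariate normals} can be coded directly. In the construction $x_A=x_B+\epsilon$, with $\epsilon$ independent noise, the theorem must return conditional mean $x_B$ and conditional variance $\mathtt{Var}(\epsilon)$; the Python check below (our own sketch; in the matrix case the divisions by $\Sigma_{BB}$ become products with $\Sigma_{BB}^{-1}$) confirms this.

```python
def condition_gaussian(mu_a, mu_b, s_aa, s_ab, s_bb, x_b):
    """Scalar instance of the conditioning theorem: the distribution of
    x_A given x_B = x_b is N(mean, var) with the Schur-complement
    variance (a sketch; the name is ours)."""
    mean = mu_a + s_ab / s_bb * (x_b - mu_b)
    var = s_aa - s_ab / s_bb * s_ab
    return mean, var

# x_A = x_B + eps, Var(x_B) = v, Var(eps) = t2, eps independent of x_B:
# then Cov(x_A, x_B) = v, Var(x_A) = v + t2, and x_A | x_B = b is N(b, t2)
v, t2, m, b = 2.0, 0.5, 1.0, 3.25
assert condition_gaussian(m, m, v + t2, v, v, b) == (b, t2)
```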
\section{Generalised Central Limit Theorems}
\label{generalised CLTS}
We shall exploit known techniques for proving both convergence to
a normal distribution, and for establishing the rate of
convergence, in situations which go beyond the classical setting of
independent identically distributed random variables. For convenience
we recall the key results that we need here.
We begin with a result of Rinott~(1994)
\nocite{rinott:1994}
on the rate of convergence in a generalised Central Limit Theorem;
generalised because the summands are not identically distributed and
it allows some dependence between elements in the sum.
We do not use this second feature here, but it would be needed to
extend our results to include effects that depend on
more than one locus, and so for completeness
we include it in the statement of the result.
It also gives an idea of how quickly the rate of convergence deteriorates
if one includes epistasis or higher order dominance effects.
This result can be
used both to prove asymptotic normality when we condition only on the
pedigree (and not on any observed trait values), and to prove
asymptotic normality of the residuals (that is,
the part of the trait distribution within families that is not shared
among offspring) conditional on the observed traits of ancestors in
the pedigree.
The dependence is captured by a {\em dependency graph}.
\begin{defn}
\label{dependency graph}
Let $\{X_l; l\in {\cal V}\}$ be a collection of random variables. The graph
${\cal G}=({\cal V}, {\cal E})$, where ${\cal V}$ and ${\cal E}$ denote the vertex set
and edge set respectively, is said to be a {\em dependency graph} for the
collection if for any pair of disjoint subsets $A_1$ and $A_2$ of ${\cal V}$ such that no
edge in ${\cal E}$ has one endpoint in $A_1$ and the other in $A_2$, the sets of
random variables $\{X_l; l\in A_1\}$ and $\{X_l; l\in A_2\}$ are independent.
\end{defn}
The degree of a vertex in the graph is the number of edges connected to it and the maximal
degree of the graph is just the maximum of the degrees of the vertices in it.
\begin{thm}[Theorem~2.2, Rinott~(1994)]
\label{rinott clt}
Let $E_1,\ldots ,E_M$ be random variables having a dependency graph whose maximal degree is strictly
less than $D$, satisfying $|E_l-\mathbb E[E_l]|\leq B$ a.s.,~$l=1,
\ldots ,M$, $\mathbb E[\sum_{l=1}^ME_l]=\lambda$
and $\mathtt{Var}\left(\sum_{l=1}^ME_l\right)=\sigma^2>0$. Then, for
every $w\in\mathbb R$,
\begin{equation}
\label{CLT bound}
\left|\mathbb P\left[\frac{\sum_{l=1}^ME_l-\lambda}{\sigma}\leq w\right]-
{\cal N}(w)\right|
\leq\frac{1}{\sigma}\left\{\sqrt{\frac{1}{2\pi}}DB+16\left(\frac{M}{\sigma^2}\right)^{1/2}D^{3/2}B^2
+10\left(\frac{M}{\sigma^2}\right)D^2B^3\right\},
\end{equation}
where ${\cal N}$ is the distribution function of a standard normal random variable.
\end{thm}
In particular, when $D$ and $B$ are order one
and $\sigma^2$ is of order $M$,
the bound is of order $1/\sqrt{M}$.
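For reference, the right-hand side of~(\ref{CLT bound}) is easily evaluated; the Python transcription below (a direct transcription of the displayed bound, not a new result) exhibits the $1/\sqrt{M}$ decay when $D$ and $B$ are order one and $\sigma^2$ is of order $M$.

```python
from math import sqrt, pi

def rinott_bound(M, D, B, sigma2):
    """Right-hand side of the bound in Theorem 2.2 of Rinott (1994),
    as displayed above."""
    sigma = sqrt(sigma2)
    return (1.0 / sigma) * (sqrt(1.0 / (2.0 * pi)) * D * B
                            + 16.0 * sqrt(M / sigma2) * D ** 1.5 * B ** 2
                            + 10.0 * (M / sigma2) * D ** 2 * B ** 3)

# with D = B = 1 and sigma2 = M the bound is c / sqrt(M)
assert rinott_bound(10_000, 1, 1, 10_000) < rinott_bound(100, 1, 1, 100)
```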
Since we are only allowing for dominance effects that depend on allelic
states at a single locus, and we have no epistasis, our dependency
graphs have no edges; the maximal degree is therefore zero and we may
take $D=1$.
Epistasis or higher order dominance effects will increase the degree.
The accuracy of the normal approximation guaranteed by this bound will
therefore deteriorate rapidly as the number
of combinations through which the
allelic state at a single locus
can influence the trait grows.
\subsection*{Exchangeable pairs}
In order to prove the asymptotic normality, conditional on parental
traits, of the part of the trait
value that is shared by all the offspring in a family, we require a
different approach.
of the parents, there will be weak dependence between all the
pairs of loci within the sums defining ${\cal A}^i+{\cal D}^i$ (and so the
dependency graph for the summands would be the complete graph).
To check that nonetheless
the limit is Gaussian we shall use a variant of Stein's method of
exchangeable pairs, originally introduced in Stein~(1986).
\nocite{stein:1986}
Recall that the pair of random variables $(W, W')$ is called
an exchangeable
pair if their joint distribution is symmetric.
Suppose that $\mathbb E[W]=0$, $\mathbb E[W^2]=1$, $(W,W')$ is an exchangeable pair
and
\begin{equation}
\label{approximate regression}
\mathbb E[W-W'|W]=\lambda (W-R),
\end{equation}
for some $0<\lambda <1$, where $R$ is a random variable of small order.
Let us write $\Delta =W-W'$ and define
$$\widehat{K}(t)=\frac{\Delta}{2\lambda}\Big(\mathbf{1}_{\{-\Delta\leq t\leq 0\}}
-\mathbf{1}_{\{0\leq t\leq -\Delta\}}\Big).$$
Note that $\int_{-\infty}^\infty \widehat{K}(t)dt=\Delta^2/(2\lambda)$.
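As a quick numerical check of this identity, one can integrate $\widehat{K}$ on a fine grid; a sketch in Python, with illustrative values of $\Delta$ and $\lambda$ (our choices):

```python
import numpy as np

def K_hat(t, delta, lam):
    """Evaluate (Delta/(2 lambda)) * (1_{-Delta<=t<=0} - 1_{0<=t<=-Delta})."""
    ind1 = ((-delta <= t) & (t <= 0)).astype(float)
    ind2 = ((0 <= t) & (t <= -delta)).astype(float)
    return delta / (2.0 * lam) * (ind1 - ind2)

t = np.linspace(-5.0, 5.0, 2_000_001)   # fine grid, spacing 5e-6
dt = t[1] - t[0]
for delta, lam in [(0.7, 0.05), (-1.3, 0.02)]:
    integral = K_hat(t, delta, lam).sum() * dt   # Riemann sum
    assert abs(integral - delta ** 2 / (2.0 * lam)) < 1e-3
```

Exactly one of the two indicators is active, depending on the sign of $\Delta$, so the integral is $\Delta^2/(2\lambda)$ in either case.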
\nocite{chen/goldstein/shao:2011}
In this case, one can show (see Chen et al.~2011, \S2.3) that
\begin{equation}
\label{modified Stein equation}
\mathbb E[Wf(W)]=\mathbb E\left[\int_{-\infty}^\infty f'(W+t)\widehat{K}(t)dt\right]
+\mathbb E[Rf(W)].
\end{equation}
\begin{propn}[Chen et al.~2011, Proposition~2.4i]
\label{chen propn}
Let $h$ be an absolutely continuous function with $\|h'\|<\infty$,
and ${\cal F}$ any $\sigma$-algebra containing $\sigma(W)$.
If~(\ref{modified Stein equation}) holds, then
\begin{equation}
\label{bound in exchangeable pair proposition}
\left|\mathbb E[h(W)]-{\cal N}(h)\right| \leq \|h'\|\left(\sqrt{\frac{2}{\pi}}
\mathbb E\left[|1-\widehat{K}_1|\right]+2\mathbb E[\widehat{K}_2]+2\mathbb E[|R|]\right),
\end{equation}
where
\begin{equation}
\label{K1 and K2}
\widehat{K}_1=\mathbb E\left[\left.\int_{-\infty}^\infty\widehat{K}(t)dt\right| {\cal F}\right]
=
\mathbb E\left[\left.
\frac{\Delta^2}{2\lambda}\right|{\cal F}\right]\quad\mbox{and}\quad
\widehat{K}_2=\int_{-\infty}^\infty\left|t\widehat{K}(t)\right|dt=
\frac{|\Delta |^3}{4\lambda}.
\end{equation}
\end{propn}
\begin{corollary}
\label{chen corollary}
Suppose that $(W,W')$ is an exchangeable pair with $\mathbb E[W]=\mu_W$ and
$\mathtt{Var}(W)=\sigma_W^2$ with
\begin{equation}
\label{exchangeable pair requirement}
\mathbb E[W'|W]=(1-\lambda)W+\lambda \mathbb E[W]-\lambda R
\end{equation}
where $R$ is a random variable of small order. Then defining $\widehat{K}_1$,
$\widehat{K}_2$, $h$ and ${\mathcal F}$
as in Proposition~\ref{chen propn},
\begin{equation}
\label{unnormalised difference}
\left|\mathbb E[h(W)]-{\cal N}_{\mu_W,\sigma_W^2} (h)\right|
\leq \|h'\|\left(\sqrt{\frac{2}{\pi}}\frac{1}{\sigma_W}
\mathbb E\left[|\sigma_W^2-\widehat{K}_1|\right]+
\frac{2}{\sigma_W^2}\mathbb E[\widehat{K}_2]+2\mathbb E[|R|]\right),
\end{equation}
where ${\cal N}_{\mu_W,\sigma_W^2}$ denotes the distribution of a normal random
variable with mean $\mu_W$ and variance $\sigma_W^2$.
\end{corollary}
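\begin{remark}
For completeness, we sketch how Corollary~\ref{chen corollary} follows
from Proposition~\ref{chen propn}. Apply the proposition to the
standardised pair $\widetilde{W}=(W-\mu_W)/\sigma_W$,
$\widetilde{W}'=(W'-\mu_W)/\sigma_W$, which satisfies
$\mathbb E[\widetilde{W}]=0$ and $\mathbb E[\widetilde{W}^2]=1$.
From~(\ref{exchangeable pair requirement}),
$$\mathbb E[\widetilde{W}-\widetilde{W}'|\widetilde{W}]
=\frac{1}{\sigma_W}\mathbb E[W-W'|W]
=\lambda\left(\widetilde{W}+\frac{R}{\sigma_W}\right),$$
so~(\ref{approximate regression}) holds for $\widetilde{W}$ with remainder
$-R/\sigma_W$ (the sign is immaterial, since only $\mathbb E[|R|]$ enters the
bound). Since $\widetilde{\Delta}=\Delta/\sigma_W$, the quantities
in~(\ref{K1 and K2}) become $\widehat{K}_1/\sigma_W^2$ and
$\widehat{K}_2/\sigma_W^3$. Applying~(\ref{bound in exchangeable pair proposition})
to $\widetilde{h}(w)=h(\mu_W+\sigma_W w)$, for which
$\|\widetilde{h}'\|=\sigma_W\|h'\|$ and
$\mathbb E[\widetilde{h}(\widetilde{W})]=\mathbb E[h(W)]$,
yields~(\ref{unnormalised difference}).
\end{remark}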
\begin{remark}
Although this result is enough to guarantee that $W$ is asymptotically
normal, because we require $\|h'\|<\infty$
it does not even yield a bound of order $1/\sqrt{M}$ on the
distance between the cumulative distribution function
of $W$ and that of a standard normal random variable.
To propagate our argument from one generation to the next
requires convergence of the density function of the observed trait value, and
once again it is our
assumption that there is some environmental noise (with
a smooth density) that allows us to guarantee this convergence
based on the result proved here.
\end{remark}
\section{Key Lemmas}
\label{key lemmas}
\begin{notn}
\label{noise in notation}
Throughout the rest of the appendices, we shall assume that the
environmental noise is subsumed into the trait value $Z$, so that
its distribution can be assumed to have a smooth density. Thus when we
write $\mathbb P[Z=z]$, we actually mean the density function of the
distribution of $\widetilde{Z}$ evaluated at the value $z$.
\end{notn}
In this section we prove two key lemmas which will underpin our proof.
They will allow us to estimate the effect on the distribution of the
allelic types at a particular locus, or particular
pair of loci, of knowing the
trait value. We shall be using Bayes' rule. With a slight abuse of notation
$$\mathbb P[(\chi_l^1,\chi_l^2)=(x,x')|Z=z]=
\frac{\mathbb P[Z=z|(\chi_l^1,\chi_l^2)=(x,x')]}{\mathbb P[Z=z]}\mathbb P[(\chi_l^1,\chi_l^2)=
(x,x')].$$
Let us write $\Psi_l(x,x')=\eta_l(x)+\eta_l(x')+\phi_l(x,x')$
and $Z_{-l}$ for the trait value of an individual with the effect of
locus $l$ removed; the ratio in this expression then becomes
$$\frac{\mathbb P[Z_{-l}=z-\Psi_l(x,x')]}{\mathbb P[Z=z]}.$$
Of course this ratio of probabilities should be interpreted as a
ratio of density functions. Moreover, bearing in mind
our remarks on environmental noise, we are going to suppose that these
density functions are sufficiently smooth that we can justify an
application of Taylor's Theorem.
Indeed, we know that $Z_{-l}$ is approximately normally distributed,
using exactly the same argument as for $Z$, and it is no surprise that the
ratio differs from one by something of order $1/\sqrt{M}$. The
importance of the next lemma will become evident when we sum conditional
expectations over loci; c.f.~Remark~\ref{remark on troublesome}.
\begin{lemma}
\label{replacement for troublesome equation}
In the notation above,
\begin{multline*}
\mathbb P[Z_{-l}=z]=\mathbb P[Z=z]+\frac{1}{\sqrt{M}}\mathbb E[\Psi_l(\chi_l^1,\chi_l^2)]
\frac{d}{dz}\mathbb P[Z=z]\\
+\frac{1}{M}\mathbb E[\Psi_l(\chi_l^1,\chi_l^2)]^2\frac{d^2}{dz^2}\mathbb P[Z=z]
-\frac{1}{2M}\mathbb E[\Psi_l(\chi_l^1,\chi_l^2)^2]\frac{d^2}{dz^2}\mathbb P[Z=z]
+C_l(z)\frac{1}{M^{3/2}},
\end{multline*}
where the function $C_l(z)$ in the error term can be bounded independent of
$l$ and $z$.
\end{lemma}
\begin{remark}[Conditioning on the pedigree]
Although we have suppressed it in the notation, this lemma holds
in any generation, but the expressions
$\mathbb E[\Psi_l(\chi_l^1,\chi_l^2)]^2$ and
$\mathbb E[\Psi_l(\chi_l^1,\chi_l^2)^2]$ should be interpreted as being
calculated conditional on the pedigree (which will determine the
probability of identity of $\chi_l^1$, $\chi_l^2$).
\end{remark}
\noindent
{\bf Proof of Lemma~\ref{replacement for troublesome equation}}
We are going to abuse notation (still further) and imagine that
$\mathbb P[\chi_l^1=x, \chi_l^2=x', Z=z]$ has a density with respect to $x$, $x'$.
Of course we don't expect that to be true (even with environmental noise),
but it makes our expressions
easier to parse than a more mathematically accurate notation would.
We begin with an application of Taylor's Theorem (with respect to $z$):
\begin{eqnarray}
\label{Z-l as integral}
\mathbb P[Z_{-l}=z] &=& \int\int\mathbb P\bigg[\chi_l^1=x, \chi_l^2=x',
Z=z+\frac{1}{\sqrt{M}}\Psi_l(x,x')\bigg]dxdx'\\
\label{first term Z-l as integral}
&=&
\int\int\mathbb P[\chi_l^1=x, \chi_l^2=x', Z=z]dxdx'
\\
\label{second term Z-l as integral}
&&
+\frac{1}{\sqrt{M}}\int\int\Psi_l(x,x')\frac{\partial}{\partial z}
\mathbb P[\chi_l^1=x, \chi_l^2=x', Z=z]dxdx'
\\
\label{third term Z-l as integral}
&&
+\frac{1}{2M}\int\int\Psi_l(x,x')^2\frac{\partial^2}{\partial z^2}
\mathbb P[\chi_l^1=x, \chi_l^2=x', Z=z]dxdx'
+\widehat{C}_l(z)\frac{1}{M^{3/2}}.
\end{eqnarray}
Provided that $\mathbb P[Z=z]$ has a uniformly bounded third derivative, our
assumption that the terms that make up $\Psi_l$ are uniformly bounded allows
us to deduce that $\widehat{C}_l$ is uniformly bounded in $l$ and $z$.
Notice that the expression in~(\ref{first term Z-l as integral})
is just $\mathbb P[Z=z]$.
Since we are not conditioning on any trait values in
the pedigree, and the ancestral population is assumed to be in
linkage equilibrium, $(\chi_l^1,\chi_l^2)$ and $Z_{-l}$ are
independent.
Combining this observation with
equation~(\ref{Z-l as integral}), and, once again
applying Taylor's Theorem, we find
\begin{align*}
&\mathbb P[\chi_l^1=x, \chi_l^2=x', Z=z]\\
& =
\mathbb P[\chi_l^1=x, \chi_l^2=x']\mathbb P\bigg[Z_{-l}=z-\frac{1}{\sqrt{M}}\Psi_l(x,x')\bigg]
\\
&=
\mathbb P[\chi_l^1=x, \chi_l^2=x']\int\int
\mathbb P\bigg[\chi^1_l=y, \chi^2_l=y', Z=z-\frac{1}{\sqrt{M}}\Psi_l(x,x')
+\frac{1}{\sqrt{M}}\Psi_l(y,y')\bigg]dydy'
\\
&=
\mathbb P[\chi_l^1=x, \chi_l^2=x']\Big\{
\mathbb P[Z=z]+\frac{1}{\sqrt{M}}\int\int\big(\Psi_l(y,y')-\Psi_l(x,x')\big)
\\
&
\qquad\qquad
\qquad\qquad
\qquad\qquad
\qquad\qquad
\times
\frac{\partial}{\partial z}\mathbb P[\chi^1_l=y, \chi^2_l=y', Z=z]dydy'
+\widetilde{C}_l(x,x',z)\frac{1}{M}\Big\},
\end{align*}
where the function $\widetilde{C}_l$ in the last line is
uniformly bounded independent
of $l$ and $(x,x',z)$.
(To justify this last statement, recall that we are abusing notation and
implicitly subsuming the environmental noise into the distribution of $Z$. The density
function here is actually a convolution of that of the environmental noise, which is
smooth, and the true distribution of $Z$, and is therefore smooth.)
Still assuming sufficient regularity, differentiating the previous equation
we find
\begin{align}
\label{first deriv}
\frac{\partial}{\partial z}\mathbb P[&\chi_l^1=x, \chi_l^2=x', Z=z] \nonumber \\
& = \mathbb P[\chi_l^1=x, \chi_l^2=x']\bigg\{
\frac{d}{d z}\mathbb P[Z=z]
+\frac{1}{\sqrt{M}}\int\int\Big(\Psi_l(y,y')-\Psi_l(x,x')\Big) \nonumber\\
&\qquad \qquad \qquad \qquad \times
\frac{\partial^2}{\partial z^2}\mathbb P[\chi^1_l=y, \chi^2_l=y', Z=z]dydy'
+\frac{\partial}{\partial z}\widetilde{C}_l(x,x',z)\frac{1}{M}\bigg\},
\end{align}
and
\begin{equation}
\label{2nd deriv}
\frac{\partial^2}{\partial z^2}\mathbb P[\chi_l^1=x, \chi_l^2=x', Z=z]
=
\mathbb P[\chi_l^1=x, \chi_l^2=x']\left\{
\frac{d^2}{d z^2}\mathbb P[Z=z]
+\frac{1}{\sqrt{M}}\frac{\partial^2}{\partial z^2}\widetilde{\widetilde{C}}_l(x,x',z)
\right\},
\end{equation}
with $\tfrac{\partial}{\partial z}\widetilde{C}_l$
and $\tfrac{\partial^2}{\partial z^2}\widetilde{\widetilde{C}}_l$ uniformly bounded.
Finally, substituting~(\ref{first deriv}) and~(\ref{2nd deriv})
in~(\ref{second term Z-l as integral}) and~(\ref{third term Z-l as integral}),
we obtain
\begin{eqnarray*}
\mathbb P[Z_{-l}=z]&=&\mathbb P[Z=z]+
\frac{1}{\sqrt{M}}\int\int\Psi_l(x,x')\mathbb P[\chi^1_l=x, \chi^2_l=x']dxdx'
\frac{d}{d z}\mathbb P[Z=z]
\\
&&+\frac{1}{M}\int\int\int\int\Big(\Psi_l(y,y')-\Psi_l(x,x')\Big)\Psi_l(x,x')
\\
&&\qquad\qquad\times \mathbb P[\chi^1_l=y, \chi^2_l=y']\mathbb P[\chi^1_l=x, \chi^2_l=x']
dydy'dxdx'\frac{d^2}{d z^2}\mathbb P[Z=z]
\\
&&
+\frac{1}{2M}\int\int\Psi_l(x,x')^2\mathbb P[\chi^1_l=x, \chi^2_l=x']dxdx'
\frac{d^2}{d z^2}\mathbb P[Z=z]
+\widehat{C}_l(z)\frac{1}{M^{3/2}}
\\
&=&
\mathbb P[Z=z]+\frac{1}{\sqrt{M}}\mathbb E[\Psi_l(\chi_l^1,\chi_l^2)]
\frac{d}{dz}\mathbb P[Z=z]\\
&&+\frac{1}{M}\mathbb E[\Psi_l(\chi_l^1,\chi_l^2)]^2\frac{d^2}{dz^2}\mathbb P[Z=z]
-\frac{1}{2M}\mathbb E[\Psi_l(\chi_l^1,\chi_l^2)^2]\frac{d^2}{dz^2}\mathbb P[Z=z]
+\widehat{C}_l(z)\frac{1}{M^{3/2}},
\end{eqnarray*}
as required. \hfill $\Box$
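The expansion in Lemma~\ref{replacement for troublesome equation} is easy to check numerically in a toy model; a sketch in Python in which $Z_{-l}$ is standard normal, independent of $\Psi_l$, and $\Psi_l$ takes just two values (all parameter choices here are illustrative, not from the text):

```python
import math

def phi(x, k=0):
    """Standard normal density (k=0) and its first (k=1) and second (k=2) derivatives."""
    p = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    if k == 0:
        return p
    return -x * p if k == 1 else (x * x - 1.0) * p

# Toy model: Z_{-l} ~ N(0,1), independent of Psi_l, and Z = Z_{-l} + Psi_l/sqrt(M).
psi, prob = (-1.0, 2.0), (0.5, 0.5)
m1 = sum(p * s for p, s in zip(prob, psi))        # E[Psi_l]
m2 = sum(p * s * s for p, s in zip(prob, psi))    # E[Psi_l^2]

def residual(z, M):
    """P[Z_{-l}=z] minus the right-hand side of the lemma; should be O(M^{-3/2})."""
    rM = math.sqrt(M)
    # The density of Z is an exact mixture of shifted normal densities.
    fZ = lambda k: sum(p * phi(z - s / rM, k) for p, s in zip(prob, psi))
    expansion = fZ(0) + (m1 / rM) * fZ(1) + (m1 * m1 / M - m2 / (2.0 * M)) * fZ(2)
    return phi(z) - expansion

r1, r2 = residual(0.3, 10_000), residual(0.3, 40_000)
# r1 is already tiny, and quadrupling M shrinks it by roughly 4**(3/2) = 8.
```

The residual shrinks by roughly a factor of $8$ when $M$ is quadrupled, consistent with the claimed error of order $M^{-3/2}$.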
We also require an analogue of Lemma~\ref{replacement for troublesome equation}
with which to control the effect of conditioning on the trait
value on the distribution of the allelic values at pairs of loci.
We write $Z_{-l-m}=Z-\tfrac{1}{\sqrt{M}}\big(
\Psi_l(\chi^1_l,\chi^2_l)+\Psi_m(\chi^1_m,\chi^2_m)\big)$
for the trait value with the contributions from loci $l$ and $m$
removed. The following lemma follows on iterating the argument that gave
us Lemma~\ref{replacement for troublesome equation}.
\begin{lemma}
\label{Z-l-m}
In the notation above,
\begin{align*}
\mathbb P[Z_{-l-m}=z]=& \mathbb P[Z=z]+
\frac{1}{\sqrt{M}}\Big(\mathbb E[\Psi_l(\chi_l^1,\chi_l^2)]+\mathbb E[\Psi_m(\chi_m^1,\chi_m^2)]\Big)
\frac{d}{d z}\mathbb P[Z=z] \nonumber\\
& +\Big\{\frac{1}{M}\mathbb E[\Psi_l(\chi_l^1,\chi_l^2)]^2-\frac{1}{2M}\mathbb E[\Psi_l(\chi_l^1,\chi_l^2)^2]
+\frac{1}{M}\mathbb E[\Psi_l(\chi_l^1,\chi_l^2)]\mathbb E[\Psi_m(\chi_m^1,\chi_m^2)]
\nonumber\\
& +\frac{1}{M}\mathbb E[\Psi_m(\chi_m^1,\chi_m^2)]^2-\frac{1}{2M}\mathbb E[\Psi_m(\chi_m^1,\chi_m^2)^2]\Big\}
\frac{d^2}{d z^2}\mathbb P[Z=z]+C_{l,m}(z)\frac{1}{M^{3/2}},
\end{align*}
where the functions $C_{l,m}(z)$ are uniformly bounded in $l$, $m$, $z$.
\end{lemma}
{\bf Proof}
We iterate the previous result:
\begin{multline*}
\mathbb P[Z_{-l-m}=z]=\mathbb P[Z_{-l}=z]
+\frac{1}{\sqrt{M}}\mathbb E[\Psi_m(\chi_m^1,\chi_m^2)]
\frac{d}{dz}\mathbb P[Z_{-l}=z]\\
+\frac{1}{M}\mathbb E[\Psi_m(\chi_m^1,\chi_m^2)]^2\frac{d^2}{dz^2}\mathbb P[Z_{-l}=z]
-\frac{1}{2M}\mathbb E[\Psi_m(\chi_m^1,\chi_m^2)^2]\frac{d^2}{dz^2}\mathbb P[Z_{-l}=z]
+C_m(z)\frac{1}{M^{3/2}};
\end{multline*}
now substitute for $\mathbb P[Z_{-l}=z]$ and its derivatives.
\hfill $\Box$
\begin{remark}
\label{remark on troublesome}
Just as for Lemma~\ref{replacement for troublesome equation}, the
proof of Lemma~\ref{Z-l-m} applies in any generation as long as one
interprets the expectations as being taken conditional on the pedigree.
We have assumed that our base population is in linkage equilibrium to
write $\mathbb E[\Psi_l(y,y')\Psi_m(x,x')]=\mathbb E[\Psi_l(y,y')]\mathbb E[\Psi_m(x,x')]$.
\end{remark}
We shall only be presenting the detailed proofs for individuals in
generation one. To extend to the general case requires an analogue of
Lemma~\ref{replacement for troublesome equation}
when we consider the trait values of the two parents of an individual.
For completeness, we record that lemma here.
\begin{lemma}
\label{troublesome lemma with two parents}
Let us use $\mathbb P[z_1,z_2]$ to denote
$\mathbb P[Z^{i[1]}=z_1,Z^{i[2]}=z_2]$.
In the following expression, all expectations should be
interpreted as taken conditional on the pedigree:
\begin{align*}
\mathbb P\Big[Z^{i[1]}_{-l}=z_1& , Z^{i[2]}_{-l}=z_2\Big]- \mathbb P[z_1,z_2]
\\ = &\,\frac{1}{\sqrt{M}}\mathbb E\big[\Psi_l(\chi_l^{i[1],1},\chi_l^{i[1],2})\big]\frac{\partial}{\partial z_1}\mathbb P[z_1,z_2]
+\frac{1}{\sqrt{M}}\mathbb E\big[\Psi_l(\chi_l^{i[2],1},\chi_l^{i[2],2})\big]\frac{\partial}{\partial z_2}\mathbb P[z_1,z_2]
\\
& +\left(\frac{1}{M}
\mathbb E\big[\Psi_l(\chi_l^{i[1],1},\chi_l^{i[1],2})\big]^2
-\frac{1}{2M}
\mathbb E\big[\Psi_l(\chi_l^{i[1],1},\chi_l^{i[1],2})^2\big]\right)
\frac{\partial^2}{\partial z_1^2}\mathbb P[z_1,z_2]
\\& +\left(\frac{1}{M}
\mathbb E\big[\Psi_l(\chi_l^{i[2],1},\chi_l^{i[2],2})\big]^2
-\frac{1}{2M}
\mathbb E\big[\Psi_l(\chi_l^{i[2],1},\chi_l^{i[2],2})^2\big]\right)
\frac{\partial^2}{\partial z_2^2}\mathbb P[z_1,z_2]
\\ & +\Bigg(\frac{2}{M}
\mathbb E\big[\Psi_l(\chi_l^{i[1],1},\chi_l^{i[1],2})\big]
\mathbb E\big[\Psi_l(\chi_l^{i[2],1},\chi_l^{i[2],2})\big]
\\
& \qquad -\frac{1}{M}
\mathbb E\big[\Psi_l(\chi_l^{i[1],1},\chi_l^{i[1],2})
\Psi_l(\chi_l^{i[2],1},\chi_l^{i[2],2})\big]\Bigg)
\frac{\partial^2}{\partial z_1\partial z_2}\mathbb P[z_1,z_2]
+\mathcal{O}\Big(\frac{1}{M^{3/2}}\Big).
\end{align*}
\end{lemma}
\section{Mean and variance of trait values conditional on parental traits}
We remind the reader that Notation~\ref{noise in notation} remains in force.
We now turn to calculating the conditional distribution of the trait values,
conditional not just on the pedigree, as we did in
Appendix~\ref{QG derivations},
but also on the (observed) trait values in the parental generation.
We spell out the details in generation one.
Here already we can identify the key points, without being
overwhelmed by notation. Recall that we are implicitly conditioning not
on the exact trait values of the parents, but on the observed trait
values when environmental noise is taken into account,
so that we can assume that the distribution of parental trait
values has a smooth density.
First we calculate the conditional mean.
We distinguish the case of two
distinct parents and a family produced by selfing.
Recall that we wrote ${\cal A}^i+{\cal D}^i$ for the component shared by all
individuals in the family, with ${\cal A}^i$ and ${\cal D}^i$ defined
in~(\ref{defn of A}) and~(\ref{defn of D}).
\subsubsection*{Generation one: mean trait value, distinct parents}
Since the parents are, by
assumption, unrelated, we anticipate that
the expected value of the dominance component
is zero, and so the expected value of the shared component
${\cal A}^i+{\cal D}^i$ should be the mean value of
the parental traits. However, since we are conditioning on knowing the trait
values, we do have some information about the allelic types, and we must verify
that this does not significantly distort the expectations.
We exploit again the fact that since the parents are unrelated, their
trait values (and allelic states at locus $l$) are independent. Thus
\begin{align}
\label{independence of trait values}
&\mathbb P\left[ Z^{i[1]}=z_1, Z^{i[2]}=z_2 \Big|
\big(\chi_l^{i[1],1},\chi_l^{i[1],2},\chi_l^{i[2],1},\chi_l^{i[2],2}\big)
=(x,x',y,y')\right]
\\
& =\!
\mathbb P\!\left[Z_{-l}^{i[1]}=z_1-\frac{1}{\sqrt{M}}\Big(\eta_l(x)+\eta_l(x')+\phi_l(x,x')
\Big)\right]\!
\mathbb P\!\left[Z_{-l}^{i[2]}=z_2-\frac{1}{\sqrt{M}}\Big(\eta_l(y)+\eta_l(y')+\phi_l(y,y')
\Big)\right]\!. \nonumber
\end{align}
We now use Lemma~\ref{replacement for troublesome equation}
and Taylor's Theorem to deduce that
\begin{multline*}
\mathbb P\left[\left. (\chi_l^{i[1],1}, \chi_l^{i[1],2})=(x,x') \right|
Z^{i[1]}=z\right]=
\mathbb P\left[(\chi_l^{i[1],1}, \chi_l^{i[1],2})=(x,x')\right]
\\
\times
\left\{1-\frac{1}{\sqrt{M}}\Big(\Psi_l(x,x')-\mathbb E[\Psi_l]\Big)
\frac{1}{\mathbb P[Z^{i[1]}=z]}
\frac{d}{d z}\mathbb P[Z^{i[1]}=z]+\mathcal{O} \Big(\frac{1}{M}\Big)
\right\},
\end{multline*}
with a symmetric expression for $i[2]$.
Integrating against this expression and
using~(\ref{vanishing cross variation}),
(\ref{mean phi zero}), and~(\ref{mean eta zero}),
we find, in an obvious notation,
\begin{equation}
\label{conditional dist eta gen one}
\mathbb E\left[\left.\eta_l(\chi_l^{i[1],1})\right|i[1]\neq i[2], Z^{i[1]},
Z^{i[2]}\right]
=-\frac{1}{\sqrt{M}}\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}
\mathtt{Var}\left(\eta_l(\widehat{\chi}_l)\right)
+{\mathcal O}\left(\frac{1}{M}\right).
\end{equation}
Note that approximating $\mathbb P[Z^{i[1]}]$ by a normal density and ignoring the
environmental component,
the order $1/M$ term involves $1/(\sigma_A^2+\sigma_D^2)$
and $\big(Z^{i[1]}-\bar{z}_0\big)^2/(\sigma_A^2+\sigma_D^2)$,
and is controlled through these quantities and
our bounds on $\eta_l$ and $\phi_l$.
In particular, the approximation breaks down if the genetic variance
is too small or if the trait of the parent is too extreme.
Multiplying by $1/\sqrt{M}$ and summing over loci and parents, we arrive at
\begin{equation}
\label{mean additive bit gen one}
\mathbb E\left[\left. {\cal A}^i\right|i[1]\neq i[2], Z^{i[1]}, Z^{i[2]}\right]
=-\left(\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}
+\frac{\mathbb P'[Z^{i[2]}]}{\mathbb P[Z^{i[2]}]}\right)
\frac{1}{M}\sum_{l=1}^M
\mathtt{Var}\left(\eta_l(\widehat{\chi}_l)\right)
+{\mathcal O}\left(\frac{1}{\sqrt{M}}\right).
\end{equation}
\begin{remark}
Since we already checked that
the trait $Z^{i[1]}$ is approximately normally distributed, and the same
argument evidently gives that $Z^{i[1]}_{-l}$ is approximately
normally distributed for each $l$, the derivation
above may seem unnecessarily complex. However,
in summing the terms in~\eqref{conditional dist eta gen one} over loci,
we exploited the fact that we could pull the ratio
$\mathbb P'[Z^{i[1]}]/\mathbb P[Z^{i[1]}]$ outside
the sum. Only then did we approximate it by the limiting normal
distribution. We could only do this because we expressed everything in terms
of the distribution of the whole trait. If we try to approximate
the distribution of $Z^{i[1]}_{-l}$ directly by a normal distribution, and
then sum, we cannot control the error. We shall use this trick
repeatedly in what follows.
\end{remark}
Similarly,
\begin{multline*}
\mathbb E\left[\left.\phi_l(\chi_l^{i[1],1}, \chi_l^{i[2],1})\right|
i[1]\neq i[2], Z^{i[1]}=z_1,
Z^{i[2]}=z_2\right]\\
=
\int\ldots\int \phi_l(x,y)
\Big\{1-\frac{1}{\sqrt{M}}\Big(\Psi_l(x,x')-\mathbb E[\Psi_l]\Big)
\frac{1}{\mathbb P[Z^{i[1]}=z_1]}\frac{d}{d z_1}\mathbb P[Z^{i[1]}=z_1]\Big\}
\\
\Big\{1-\frac{1}{\sqrt{M}}\Big(\Psi_l(y,y')-\mathbb E[\Psi_l]\Big)
\frac{1}{\mathbb P[Z^{i[2]}=z_2]}\frac{d}{d z_2}\mathbb P[Z^{i[2]}=z_2]\Big\}
\\
\widehat{\nu}_l(dx)\widehat{\nu}_l(dx')
\widehat{\nu}_l(dy)\widehat{\nu}_l(dy')
+{\mathcal O}\left(\frac{1}{M}\right).
\end{multline*}
The terms of order one and $1/\sqrt{M}$ vanish as a result of
equations
(\ref{vanishing conditional distribution phi}),
(\ref{vanishing cross variation}),
(\ref{mean phi zero}),
and
(\ref{mean eta zero}).
Multiplying by $1/\sqrt{M}$ and summing over loci, we find that
$\mathbb E[{\cal D}^i\,|\,i[1]\neq i[2], Z^{i[1]}, Z^{i[2]}]={\mathcal O}(1/\sqrt{M})$.
Recalling that the trait distribution in the ancestral population is
(almost) normally distributed with mean $\bar{z}_0$, we see that if we ignore
environmental effects, so that the variance of the trait distribution in
generation zero is $\sigma_A^2+\sigma_D^2$, then
adding $\bar{z}_0$ to the right hand side
of~(\ref{mean additive bit gen one}),
and substituting
$$\mathbb P[Z^{i[1]}]=\frac{1}{\sqrt{2\pi(\sigma_A^2+\sigma_D^2)}}
\exp\left(-\frac{(Z^{i[1]}-\bar{z}_0)^2}{2(\sigma_A^2+\sigma_D^2)}\right),$$
we recover that up to an error of order $1/\sqrt{M}$,
the expected trait value among offspring is
$$\bar{z}_0+\frac{\sigma_A^2}{\sigma_A^2+\sigma_D^2}\left(
\frac{Z^{i[1]}+Z^{i[2]}}{2}-\bar{z}_0\right),$$
as predicted by Theorem~\ref{conditioning multivariate normals}.
\begin{remark}[The breeder's equation]
\label{remark on breeder's equation}
Suppose that
as a result of environmental noise, the
observed trait of each individual in
the ancestral population is its genetic trait plus an independent
${\cal N}(0,\sigma_E^2)$ random variable.
Then assuming normality of the ancestral trait distribution, and
using
Theorem~\ref{conditioning multivariate normals},
we find that for unrelated parents
the mean trait
in generation one is
\begin{equation}
\label{breeder's equation}
\bar{z}_0+\frac{\sigma_A^2}{\sigma_Z^2}
\left(\frac{(Z^{i[1]}+Z^{i[2]})}{2}-\bar{z}_0\right),
\end{equation}
where $\sigma_Z^2$ is the total variance of the {\em observed} trait
in the
ancestral population; that is $\sigma_Z^2=\sigma_A^2+\sigma_D^2+\sigma_E^2$.
Equation~\eqref{breeder's equation} is
the {\em breeder's equation}.
\end{remark}
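The conditional-mean formula behind~\eqref{breeder's equation} is exactly a multivariate-Gaussian regression; a minimal numerical sketch (the variance components are illustrative values of ours), assuming unrelated parents, so that the offspring trait has covariance $\sigma_A^2/2$ with each observed parental trait:

```python
import numpy as np

# Illustrative variance components (our choices, not from the text).
sA2, sD2, sE2 = 1.0, 0.4, 0.6           # additive, dominance, environmental
sZ2 = sA2 + sD2 + sE2                   # variance of the *observed* trait
z0 = 2.0                                # ancestral mean trait

# For unrelated parents, the observed parental traits are independent,
# and each has covariance sA2/2 with the offspring trait.
Sigma_oz = np.array([sA2 / 2.0, sA2 / 2.0])   # Cov(offspring, (Z1, Z2))
Sigma_zz = np.diag([sZ2, sZ2])                # Cov((Z1, Z2))

# Gaussian conditional mean: E[O | Z] = z0 + Sigma_oz Sigma_zz^{-1} (Z - z0).
beta = Sigma_oz @ np.linalg.inv(Sigma_zz)
Z = np.array([3.0, 1.5])
cond_mean = z0 + beta @ (Z - z0)

# Breeder's equation: z0 + (sA2/sZ2) * (midparent - z0).
breeder = z0 + (sA2 / sZ2) * ((Z[0] + Z[1]) / 2.0 - z0)
```

Each parent receives regression weight $\sigma_A^2/(2\sigma_Z^2)$, so the midparent receives weight $\sigma_A^2/\sigma_Z^2$ and the two expressions agree.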
\subsubsection*{Mean trait value, same parent}
We now turn to the expected trait value in
a family in generation one that
is produced by selfing. The calculation for the additive term is
unchanged,
but now we have a non-trivial contribution from the
dominance component. We denote the parent's trait value by $Z^{i[1]}$.
Since $Z^{i[1]}=Z^{i[2]}$, we must
calculate $\mathbb E[\phi_l(\chi_l^{i[1],1}, \chi_l^{i[1],1})|Z^{i[1]}]$
and
$\mathbb E[\phi_l(\chi_l^{i[1],1}, \chi_l^{i[1],2})|Z^{i[1]}]$.
Our strategy is as before: we express each of these probabilities
in terms of the distribution of the trait value minus the contribution
from locus $l$ and we apply
Lemma~\ref{replacement for troublesome equation}.
Thus, once again using that in generation zero, before conditioning, the
two alleles at locus $l$ in $Z^{i[1]}$ are independent draws
from $\widehat{\nu}_l$,
\begin{multline*}
\mathbb E\left[\left.\phi_l(\chi_l^{i[1],1}, \chi_l^{i[1],1})\right|
i[1]=i[2], Z^{i[1]}=z\right]
=
\int\int
\phi_l(x,x)\\
\left(1-\frac{1}{\sqrt{M}}\Big(\Psi_l(x,x')-\mathbb E[\Psi_l]\Big)
\frac{1}{\mathbb P[Z^{i[1]}=z]}\frac{d}{d z}\mathbb P[Z^{i[1]}=z]\right)
\widehat{\nu}_l(dx)\widehat{\nu}_l(dx')
+{\mathcal O}\left(\frac{1}{M}\right).
\end{multline*}
Using equations~(\ref{vanishing cross variation}), (\ref{mean phi zero}), (\ref{mean eta zero}), we
see that on integration the only non-zero contribution to the term of
order $1/\sqrt{M}$ comes from $\eta_l(x)\phi_l(x,x)$, which can be
integrated to yield
\begin{equation}
\mathbb E\big[\phi_l(\chi_l^{i[1],1}, \chi_l^{i[1],1})|i[1]=i[2], Z^{i[1]}\big]
=
\mathbb E[\phi_l(\widehat{\chi}_l,\widehat{\chi}_l)]
-\frac{1}{\sqrt{M}}\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}
\mathbb E[\eta_l(\widehat{\chi}_l)\phi_l(\widehat{\chi}_l,\widehat{\chi}_l)]
+{\mathcal O}\left(\frac{1}{M}\right).
\end{equation}
Similarly,
\begin{equation}
\mathbb E\Big[\phi_l(\chi_l^{i[1],1}, \chi_l^{i[1],2})|i[1]=i[2], Z^{i[1]}\Big]
=
-\frac{1}{\sqrt{M}}\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}
\mathbb E[\phi_l(\widehat{\chi}_l^1,\widehat{\chi}_l^2)^2]
+{\mathcal O}\left(\frac{1}{M}\right).
\end{equation}
Multiplying by $1/\sqrt{M}$ and summing over loci, we find that the mean
of the term ${\cal D}^i$ in~(\ref{defn of D}), conditional on $i[1]=i[2]$ and
on knowing the trait value $Z^{i[1]}$, is
\begin{equation}
\frac{1}{2\sqrt{M}}\!\sum_{l=1}^M\mathbb E[\phi_l(\widehat{\chi}_l,\widehat{\chi}_l)]
-\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}
\left(\frac{1}{2M}\sum_{l=1}^M
\mathbb E[\eta_l(\widehat{\chi}_l)\phi_l(\widehat{\chi}_l,\widehat{\chi}_l)]
+\frac{1}{2M}\sum_{l=1}^M
\mathbb E[\phi_l(\widehat{\chi}_l^1,\widehat{\chi}_l^2)^2]\right)\!
+{\mathcal O}\!\left(\frac{1}{\sqrt{M}}\right).
\end{equation}
Adding on the additive terms that we calculated before and
restating everything in terms of the quantities in
Table~\ref{QG coefficients},
we obtain that for two identical parents
\begin{equation}
\label{mean gen one identical parents}
\mathbb E[Z^i|i[1]=i[2], Z^{i[1]}]=\bar{z}_0+\left(\frac{1}{2}\iota
-\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}\left(\sigma_A^2+
\frac{\sigma_D^2}{2} +\frac{\sigma_{ADI}}{4} \right)\right)
+{\mathcal O}\left(\frac{1}{\sqrt{M}}\right).
\end{equation}
Notice that the factor of $1/2$ in front of $\iota$ is the probability
of identity $F^*$ of the two alleles in the offspring.
Of course there is no surprise here: $\mathbb E[({\cal A}^i+{\cal D}^i)|i[1]=i[2]]=\iota/2$
and
\begin{align}
\label{conditioned covariances}
&\mathtt{Cov}\big({\cal A}^i+{\cal D}^i, Z^{i[1]}|i[1]=i[2]\big) \nonumber \\
& = \frac{1}{M}\sum_{l=1}^M
\mathbb E\Bigg[\Big(\eta_l(\widehat{\chi}_l^1)+\eta_l(\widehat{\chi}_l^2)
+\frac{\phi_l(\widehat{\chi}_l^1,\widehat{\chi}_l^1)+
\phi_l(\widehat{\chi}_l^2,\widehat{\chi}_l^2)+
2\phi_l(\widehat{\chi}_l^1,\widehat{\chi}_l^2)}{4}\Big)
\Big(\eta_l(\widehat{\chi}_l^1)+\eta_l(\widehat{\chi}_l^2)+
\phi_l(\widehat{\chi}_l^1,\widehat{\chi}_l^2)\Big)\Bigg]
\nonumber\\
& =\sigma_A^2+\frac{\sigma_D^2}{2}+\frac{\sigma_{ADI}}{4}.
\end{align}
Thus, up to the error term, (\ref{mean gen one identical parents}) is just
$$\bar{z}_0+\mathbb E[{\cal A}^i+{\cal D}^i]+\mathtt{Cov}\big({\cal A}^i+{\cal D}^i,Z^{i[1]}\big)\frac{\big(Z^{i[1]}-
\mathbb E[Z^{i[1]}]\big)}{\mathtt{Var}\big(Z^{i[1]}\big)},$$
as we expect from the (approximately) bivariate normal
distribution of $({\cal A}^i+{\cal D}^i)$ and $Z^{i[1]}$.
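The consistency just described can be spelled out with a one-line computation: substituting $\mathbb P'[z]/\mathbb P[z]=-(z-\bar{z}_0)/(\sigma_A^2+\sigma_D^2)$ for the normal density into~(\ref{mean gen one identical parents}) reproduces the bivariate-normal conditional mean. A minimal check in Python, with illustrative values of the variance components (our choices):

```python
# Illustrative values (ours, not from the text).
z, z0 = 1.1, 0.2                         # observed parental trait and ancestral mean
sA2, sD2, sADI, iota = 1.3, 0.7, 0.4, 0.9

var = sA2 + sD2                          # Var(Z^{i[1]}), environmental noise ignored
cov = sA2 + sD2 / 2.0 + sADI / 4.0       # Cov(A + D, Z^{i[1]}), as in (conditioned covariances)
ratio = -(z - z0) / var                  # P'[z]/P[z] for the normal density

mean_eq = z0 + (iota / 2.0 - ratio * cov)            # (mean gen one identical parents)
mean_bvn = z0 + iota / 2.0 + cov * (z - z0) / var    # bivariate-normal conditional mean
```

The two expressions coincide exactly, as the algebra requires.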
\subsection*{Variance of the shared parental contribution ${\cal A}^i+{\cal D}^i$, generation
one}
We now turn to the variance of the shared parental contribution. This is
where the complications associated with incorporating dominance really
start to be felt.
In the process of calculating the conditional mean above, we established
that conditioning on the parental trait values (and whether or not
they are identical) distorts the distribution of the allelic state at
a given locus by a factor of order $1/\sqrt{M}$. This distortion is
enough to shift the mean trait (as we see in the
breeder's equation), and, as we shall see, the variance of the
sum over loci will have a contribution from linkage disequilibrium.
\subsection*{Conditional variance $({\cal A}^i+{\cal D}^i)$, generation one, same parent}
First we consider the case in which the parents are the same.
We need to calculate the expectation of $({\cal A}^i+{\cal D}^i)^2$ conditional
upon the parental trait. We begin with the `diagonal' terms, corresponding
to a single locus.
We take these in three parts.
First, proceeding as before,
\begin{align}
\label{diagonal terms behave well}
& \mathbb E\left[\eta_l(\chi_l^{i[1],1})^2\Big|i[1]=i[2], Z^{i[1]}=z\right] \nonumber\\
& =\int\int \eta_l(x)^2 \left(1-\frac{1}{\sqrt{M}}\Big(\Psi_l(x,x')-\mathbb E[\Psi_l]\Big)
\frac{1}{\mathbb P[Z^{i[1]}=z]}\frac{d}{dz}\mathbb P[Z^{i[1]}=z]\right)
\widehat{\nu}_l(dx)\widehat{\nu}_l(dx')
+{\mathcal O}\left(\frac{1}{M}\right)
\nonumber\\
& =
\mathbb E[\eta_l(\widehat{\chi}_l)^2]
+{\mathcal O}\left(\frac{1}{\sqrt{M}}\right).
\end{align}
Notice that the term arising from the Taylor expansion is already of order
$1/\sqrt{M}$, and, since we multiply each of the terms in the sum by $1/M$,
we have no need to develop the expansion further. Indeed, all terms in the
expression for the variance will be multiplied by $1/M$ and so for the
`diagonal' terms in the square of the sum, we only
need an expression to leading order.
\begin{remark}
The error that we are making in discarding the terms arising from
the Taylor expansion is $1/\sqrt{M}$ multiplied by a term that depends
on
$\mathbb P'[Z^{i[1]}]/\mathbb P[Z^{i[1]}]=-(Z^{i[1]}-\mathbb E[Z^{i[1]}])/\mathtt{Var}(Z^{i[1]})$.
As usual, the approximation will be poor if the trait value of the parent
is too extreme, or the variance is too small.
\end{remark}
As a result, for these terms we can calculate with respect to the
distribution in the ancestral population and we find
\begin{eqnarray*}
\frac{1}{M}\mathbb E\left[\left.\sum_{l=1}^M\left(\eta_l(\chi_l^{i[1],1})+
\eta_l(\chi_l^{i[1],2})\right)^2\right|i[1]=i[2], Z^{i[1]}\right]
&=&\frac{2}{M}\sum_{l=1}^M\mathbb E\left[\eta_l(\widehat{\chi}_l)^2\right]
+{\mathcal O}\left(\frac{1}{\sqrt{M}}\right)
\\
&=&
\sigma_A^2+{\mathcal O}\left(\frac{1}{\sqrt{M}}\right).
\end{eqnarray*}
Similarly, recalling that we are still considering the case of
identical parents,
\begin{eqnarray*}
&&\frac{1}{16M}\mathbb E\left[\left.\sum_{l=1}^M\left(
\phi_l(\chi_l^{i[1],1}, \chi_l^{i[1],1})
+2\phi_l(\chi_l^{i[1],1}, \chi_l^{i[1],2})
+\phi_l(\chi_l^{i[1],2}, \chi_l^{i[1],2})\right)^2\right|
i[1]=i[2], Z^{i[1]}\right]
\\
&=&
\frac{1}{16 M}\sum_{l=1}^M\left(2\mathbb E[\phi_l(\widehat{\chi}_l,
\widehat{\chi}_l)^2]
+4\mathbb E[\phi_l(\widehat{\chi}^1_l,\widehat{\chi}^2_l)^2]\right)
+{\mathcal O}\left(\frac{1}{\sqrt{M}}\right)
\\
&=&\frac{1}{8}\left(\sigma_{DI}^2+\iota^*\right)+\frac{1}{4}\sigma_D^2
+{\mathcal O}\left(\frac{1}{\sqrt{M}}\right),
\end{eqnarray*}
and
\begin{eqnarray*}
&&\frac{1}{2M}\mathbb E\Bigg[\sum_{l=1}^M\Bigg(\eta_l(\chi_l^{i[1],1})+
\eta_l(\chi_l^{i[1],2})\Bigg)\\
&&\times
\Bigg(\phi_l(\chi_l^{i[1],1}, \chi_l^{i[1],1})
+2\phi_l(\chi_l^{i[1],1}, \chi_l^{i[1],2})
+\phi_l(\chi_l^{i[1],2}, \chi_l^{i[1],2})\Bigg)\Bigg|
i[1]=i[2], Z^{i[1]}\Bigg]
\\
&=&
\frac{1}{2M}\mathbb E\Bigg[\sum_{l=1}^M\Bigg(\eta_l(\chi_l^{i[1],1})+
\eta_l(\chi_l^{i[1],2})\Bigg)\\
&&\times \Bigg(
\phi_l(\chi_l^{i[1],1}, \chi_l^{i[1],1})
+2\phi_l(\chi_l^{i[1],1}, \chi_l^{i[1],2})
+\phi_l(\chi_l^{i[1],2}, \chi_l^{i[1],2})\Bigg)\Bigg]
+{\mathcal O}\left(\frac{1}{\sqrt{M}}\right)
\\
&=&
\frac{1}{2M}\sum_{l=1}^M2\mathbb E\left[\eta_l(\widehat{\chi}_l)
\phi_l(\widehat{\chi}_l,\widehat{\chi}_l)\right]
+{\mathcal O}\left(\frac{1}{\sqrt{M}}\right)
\\
&=& \frac{\sigma_{ADI}}{2}
+{\mathcal O}\left(\frac{1}{\sqrt{M}}\right).
\end{eqnarray*}
Combining all these terms we find that if the parents are identical, then
the contribution to $\mathbb E[({\cal A}^i+{\cal D}^i)^2|i[1]=i[2], Z^{i[1]}]$
from the `diagonal' terms is
\begin{equation}
\label{diagonal terms A+D squared same parent}
\sigma_A^2+\frac{1}{8}\left(\sigma_{DI}^2+\iota^*\right)
+\frac{1}{4}\sigma_D^2+\frac{1}{2}\sigma_{ADI}
+{\mathcal O}\left(\frac{1}{\sqrt{M}}\right).
\end{equation}
We must now turn to the contribution from correlations across loci.
For this we must compute
\begin{eqnarray}
\label{correlations 1}
&&
\frac{1}{M}\mathbb E\Bigg[\sum_{l\neq m}\Big(\eta_l(\chi_l^{i[1],1})
+\eta_l(\chi_l^{i[1],2})\Big)\Big(\eta_m(\chi_m^{i[1],1})+
\eta_m(\chi_m^{i[1],2})\Big)\Bigg|i[1]=i[2], Z^{i[1]}\Bigg]
\\
\label{correlations 2}
&&+
\frac{1}{2M}\mathbb E\Bigg[\sum_{l\neq m}\Big(\eta_l(\chi_l^{i[1],1})
+\eta_l(\chi_l^{i[1],2})\Big)
\\
\nonumber
&&\qquad\qquad\qquad\times
\Big(\phi_m(\chi_m^{i[1],1},\chi_m^{i[1],1})+
\phi_m(\chi_m^{i[1],2},\chi_m^{i[1],2})+
2\phi_m(\chi_m^{i[1],1},\chi_m^{i[1],2})\Big)\Bigg|i[1]=i[2], Z^{i[1]}\Bigg]
\\
\label{correlations 3}
&&+
\frac{1}{16M}\mathbb E\Bigg[\sum_{l\neq m}
\Big(\phi_l(\chi_l^{i[1],1},\chi_l^{i[1],1})+
\phi_l(\chi_l^{i[1],2},\chi_l^{i[1],2})+
2\phi_l(\chi_l^{i[1],1},\chi_l^{i[1],2})\Big)
\\
\nonumber
&&\qquad\qquad\qquad\times
\Big(\phi_m(\chi_m^{i[1],1},\chi_m^{i[1],1})+
\phi_m(\chi_m^{i[1],2},\chi_m^{i[1],2})+
2\phi_m(\chi_m^{i[1],1},\chi_m^{i[1],2})\Big)
\Bigg|i[1]=i[2], Z^{i[1]}\Bigg].
\end{eqnarray}
This time we use Lemma~\ref{Z-l-m}.
\begin{align}
\label{two loci}
& \frac{1}{\mathbb P[Z^{i[1]}=z]}
\mathbb P\left[\left. Z^{i[1]}=z\right| \Big(\chi_l^{i[1],1},
\chi_l^{i[1],2},\chi_m^{i[1],1},\chi_m^{i[1],2}\Big)=(x,x',y,y')\right]
\nonumber\\
& = \frac{1}{\mathbb P[Z^{i[1]}=z]}
\mathbb P\left[ Z_{-l-m}^{i[1]}=z-\frac{1}{\sqrt{M}}\Big(
\eta_l(x)+\eta_l(x')+\phi_l(x,x')
+\eta_m(y)+\eta_m(y')+\phi_m(y,y')\Big)
\right]\nonumber \\
& =1-\frac{1}{\sqrt{M}}\Big(\Psi_l(x,x')+\Psi_m(y,y')-\mathbb E[\Psi_l+\Psi_m]
\Big)\frac{1}{\mathbb P[Z^{i[1]}=z]}\frac{d}{dz}\mathbb P[Z^{i[1]}=z] \nonumber
\\& \qquad +\frac{1}{2M}\Bigg(\big(\Psi_l(x,x')+\Psi_m(y,y')\big)^2
-\mathbb E[(\Psi_l+\Psi_m)^2]\Bigg)
\frac{1}{\mathbb P[Z^{i[1]}=z]}\frac{d^2}{dz^2}\mathbb P[Z^{i[1]}=z] \nonumber\\
& \qquad -\frac{1}{M}\Bigg(\Big(\Psi_l(x,x')+\Psi_m(y,y')-\mathbb E[(\Psi_l+\Psi_m)]\Big)
\mathbb E[\Psi_l+\Psi_m]\Bigg)
\frac{1}{\mathbb P[Z^{i[1]}=z]}\frac{d^2}{dz^2}\mathbb P[Z^{i[1]}=z] \nonumber \\
& \qquad +\mathcal{O}\Big(\frac{1}{M^{3/2}}\Big).
\end{align}
Since the ancestral population is at linkage
equilibrium, with $x,x'$ and $y,y'$ sampled independently from
$\widehat{\nu}_l$ and $\widehat{\nu}_m$ respectively, multiplying by
$\eta_l(x)\eta_m(y)$ and integrating
against
$\widehat{\nu}_l(dx)\widehat{\nu}_m(dy)$,
the only non-zero contribution comes from the
term $\eta_l(x)\eta_m(y)$ in $(\Psi_l(x,x')+\Psi_m(y,y'))^2$, so
that
\begin{align}
\nonumber
&\frac{1}{M}\mathbb E\Bigg[\sum_{l\neq m}\Big(\eta_l(\chi_l^{i[1],1})
+\eta_l(\chi_l^{i[1],2})\Big)\Big(\eta_m(\chi_m^{i[1],1})+
\eta_m(\chi_m^{i[1],2})\Big)\Bigg|i[1]=i[2], Z^{i[1]}\Bigg]
\\
\nonumber
&=
\frac{4}{M^2}\frac{\mathbb P''[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}
\sum_{l\neq m}\mathbb E[\eta_l(\widehat{\chi}_l)^2]\mathbb E[\eta_m(\widehat{\chi}_m)^2]
+{\mathcal O}\Big(\frac{1}{\sqrt{M}}\Big)\\
&=
\frac{\mathbb P''[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]} (\sigma_A^2)^2
+{\mathcal O}\Big(\frac{1}{\sqrt{M}}\Big).
\label{eta by eta}
\end{align}
(The factor of 4 corresponds to the 4 possible ways of choosing the
parents at the two loci.)
Similarly, to calculate
$$\mathbb E\left[\left.\eta_l(\chi_l^{i[1],1})\phi_m(\chi_m^{i[1],1},\chi_m^{i[1],1})
\right|i[1]=i[2], Z^{i[1]}\right]$$
we multiply~(\ref{two loci}) by $\eta_l(x)\phi_m(y,y)$ and integrate.
Once again, using
equations~(\ref{vanishing cross variation})-(\ref{mean eta zero}),
we find that
most of the terms vanish, leaving only
\begin{multline}
\label{eta by phi one one}
-\frac{1}{\sqrt{M}}\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}\mathbb E[\eta_l(\widehat{\chi}_l)^2]\mathbb E[\phi_m(\widehat{\chi}_m,\widehat{\chi}_m)]\\
+\frac{1}{2M}
\frac{\mathbb P''[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}\Big\{
\mathbb E[\eta_l(\widehat{\chi}_l)^3]\mathbb E[\phi_m(\widehat{\chi}_m,\widehat{\chi}_m)]
+\mathbb E[\eta_l(\widehat{\chi}_l)\phi_l(\widehat{\chi}_l^1, \widehat{\chi}_l^2)^2]
\mathbb E[\phi_m(\widehat{\chi}_m,\widehat{\chi}_m)]
\\
+
2\mathbb E[\eta_l(\widehat{\chi}_l)^2]
\mathbb E[\eta_m(\widehat{\chi}_m)\phi_m(\widehat{\chi}_m, \widehat{\chi}_m)]
\Big\}.
\end{multline}
Multiplying by $1/(2M)$ and summing over loci, in the notation
of Table~\ref{QG coefficients}, the first term yields
$$-\iota\sigma_A^2\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}.$$
(There are four terms of this form in~(\ref{correlations 2}) and we have taken account of
all of them.)
The last term gives
$$\frac{\sigma_A^2\sigma_{ADI}}{2}\frac{\mathbb P''[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}
$$
(again counting the contribution from all
four terms of this form
in~(\ref{correlations 2})).
Now observe that
\begin{equation*}
\frac{1}{M}\sum_{m=1}^M\mathbb E[\phi_m(\widehat{\chi}_m,\widehat{\chi}_m)]
=\frac{\iota}{\sqrt{M}},
\end{equation*}
so that summing over loci, the contribution from the first two terms
multiplying the second derivative will be ${\mathcal O}(1/\sqrt{M})$.
\begin{remark}
\label{where we use inbreeding}
Up to this point, it has been possible to
neglect the error terms under the assumption that the within-family variance
is not too small and we
are not too far out into the tails of the distribution of $Z^{i[1]}$;
the more extreme the trait of the parent, the worse the approximation will be.
Now things change.
In order for $\mathbb E[{\cal A}^i+{\cal D}^i]$ to be finite, we required that the
inbreeding depression~$\iota$ be well-defined; here we see that it also enters into
the error terms.
\end{remark}
In the same way we calculate
$$
\mathbb E\left[\left.\eta_l(\chi_l^{i[1],1})\phi_m(\chi_m^{i[1],1},\chi_m^{i[1],2})
\right|i[1]=i[2], Z^{i[1]}\right]$$
by multiplying~(\ref{two loci}) by $\eta_l(x)\phi_m(y,y')$ and integrating.
The only term to survive integration is
\begin{equation}
\label{eta by phi one two}
\frac{1}{2M}\frac{\mathbb P''[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}
\mathbb E[2\eta_l(\widehat{\chi}_l)^2]
\mathbb E[\phi_m(\widehat{\chi}_m^1,\widehat{\chi}_m^2)^2].
\end{equation}
There are four terms of this form in~(\ref{correlations 2}),
each of which is weighted by $1/(2M)$ and
so, summing over loci,
we arrive at an overall contribution of $\sigma_A^2\sigma_D^2\mathbb P''[Z^{i[1]}]/\mathbb P[Z^{i[1]}]$.
Equations~(\ref{eta by phi one one}) and (\ref{eta by phi one two}) yield
that~(\ref{correlations 2}) equals
\begin{align}
\label{eta by phi}
& \frac{1}{2M}\mathbb E\Bigg[\sum_{l\neq m}\Big(\eta_l(\chi_l^{i[1],1})
+\eta_l(\chi_l^{i[1],2})\Big) \nonumber \\
& \qquad \qquad \times \Big(\phi_m(\chi_m^{i[1],1},\chi_m^{i[1],1})+
\phi_m(\chi_m^{i[1],2},\chi_m^{i[1],2})+
2\phi_m(\chi_m^{i[1],1},\chi_m^{i[1],2})\Big)\Bigg|i[1]=i[2], Z^{i[1]}\Bigg]
\nonumber\\
& =
-\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}\iota \sigma_A^2
+
\frac{\mathbb P''[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}\Big\{\frac{\sigma_A^2\sigma_{ADI}}{2}
+\sigma_A^2\sigma_D^2
\Big\}
+{\mathcal O}\Big(\frac{1}{\sqrt{M}}\Big).
\end{align}
Continuing in this way,
$$\mathbb E\left[\left.\phi_l(\chi_l^{i[1],1}, \chi_l^{i[1],1})
\phi_m(\chi_m^{i[1],1},\chi_m^{i[1],1})
\right|i[1]=i[2], Z^{i[1]}\right]$$
is obtained by multiplying~(\ref{two loci}) by $\phi_l(x,x)\phi_m(y,y)$
and integrating.
When we sum the `constant' term over loci we will obtain $\iota^2/M$ which tends
to zero. The remaining non-zero terms are
\begin{multline}
\label{phi one one by phi one one}
-\frac{1}{\sqrt{M}}
\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}\Big\{
\mathbb E[\eta_l(\widehat{\chi}_l) \phi_l(\widehat{\chi}_l,\widehat{\chi}_l)]
\mathbb E[\phi_m(\widehat{\chi}_m,\widehat{\chi}_m)]
+
\mathbb E[\phi_l(\widehat{\chi}_l,\widehat{\chi}_l)]
\mathbb E[\eta_m(\widehat{\chi}_m)
\phi_m(\widehat{\chi}_m,\widehat{\chi}_m)]\Big\}
\\
+
\frac{1}{2M}\frac{\mathbb P''[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}
\Bigg\{
\mathbb E\left[\phi_l(\widehat{\chi}_l,\widehat{\chi}_l)
\phi_m(\widehat{\chi}_m,\widehat{\chi}_m)\Big(
\eta_l(\widehat{\chi}_l)^2
+\eta_m(\widehat{\chi}_m)^2+
2\eta_l(\widehat{\chi}_l)
\eta_m(\widehat{\chi}_m)\Big)\right]
\\
+
\mathbb E\left[\phi_l(\widehat{\chi}_l^1,\widehat{\chi}_l^1)
\phi_m(\widehat{\chi}_m^1,\widehat{\chi}_m^1)\Big(
\eta_l(\widehat{\chi}_l^2)^2+
\eta_m(\widehat{\chi}_m^2)^2\Big)\right]\Bigg\}.
\end{multline}
The terms in the last line will contribute ${\mathcal O}(1/\sqrt{M})$
when we sum, as will the first two terms in the middle line.
There are four terms of this form in~(\ref{correlations 3}) and we
are multiplying by $1/(16M)$ and summing over loci, so the top line
contributes $-\iota\sigma_{ADI}\mathbb P'[Z^{i[1]}]/(4\mathbb P[Z^{i[1]}])$;
similarly, the second line will
contribute $\sigma_{ADI}^2\mathbb P''[Z^{i[1]}]/(16\mathbb P[Z^{i[1]}])$.
Now, again using~(\ref{two loci}),
\begin{align*}
&\mathbb E\left[\phi_l(\chi_l^{i[1],1}, \chi_l^{i[1],1})
\phi_m(\chi_m^{i[1],1},\chi_m^{i[1],2})
\Big|i[1]=i[2], Z^{i[1]}\right] \nonumber \\
& = \int\ldots\int \phi_l(x,x)\phi_m(y,y')
\Bigg[ 1-\frac{1}{\sqrt{M}}\Big(\Psi_l(x,x')+\Psi_m(y,y')-\mathbb E[\Psi_l+\Psi_m]
\Big)\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]} \\
&\qquad +\frac{1}{2M}\Bigg(\big(\Psi_l(x,x')+\Psi_m(y,y')\big)^2
-\mathbb E[(\Psi_l+\Psi_m)^2]\Bigg)
\frac{\mathbb P''[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]} \\
&\qquad -\frac{1}{M}\Bigg(\Big(\Psi_l(x,x')+\Psi_m(y,y')-\mathbb E[(\Psi_l+\Psi_m)]\Big)
\mathbb E[\Psi_l+\Psi_m]\Bigg)
\frac{\mathbb P''[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}\Bigg]
\widehat{\nu}_l(dx)
\widehat{\nu}_l(dx')
\widehat{\nu}_m(dy)
\widehat{\nu}_m(dy') \nonumber \\
& \qquad +{\mathcal O}\big(\frac{1}{M^{3/2}}\big).
\end{align*}
There are eight terms of this form in~(\ref{correlations 3}), and we are multiplying by $1/(16M)$ and
summing over loci, so the first term will correspond to a contribution of
$$-\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}\frac{\iota\sigma_D^2}{2}.$$
As usual, terms multiplying the second derivative that involve the locus
$l$ only through $\phi_l(x,x)$ will contribute ${\mathcal O}(1/\sqrt{M})$
to the sum and we find that the nontrivial contributions will be
\begin{multline*}
-\frac{1}{\sqrt{M}}\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}
\mathbb E[\phi_l(\widehat{\chi}_l,\widehat{\chi}_l)]
\mathbb E[\phi_m(\widehat{\chi}_m^1,\widehat{\chi}_m^2)^2]
+
\frac{1}{2M}\frac{\mathbb P''[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}
\mathbb E[2\eta_l(\widehat{\chi}_l)\phi_l(\widehat{\chi}_l,\widehat{\chi}_l)]
\mathbb E[\phi_m(\widehat{\chi}_m^1,\widehat{\chi}_m^2)^2].
\end{multline*}
There are eight terms of this form in~(\ref{correlations 3}), so multiplying by $1/(16M)$ and
summing over loci gives
\begin{equation}
\label{phi one one by phi one two}
-\frac{\iota \sigma_D^2}{2}\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}+
\frac{\sigma_{ADI}\sigma_D^2}{4}\frac{\mathbb P''[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}.
\end{equation}
Finally,
when we scale and sum over loci,
the only nontrivial term in our expression for
$$
\mathbb E\left[\left.\phi_l(\chi_l^{i[1],1}, \chi_l^{i[1],2})
\phi_m(\chi_m^{i[1],1},\chi_m^{i[1],2})
\right|i[1]=i[2], Z^{i[1]}\right]
$$
is
\begin{equation*}
+\frac{1}{2M}\frac{\mathbb P''[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}
\mathbb E[2 \phi_l(\widehat{\chi}_l^1,\widehat{\chi}_l^2)^2]
\mathbb E[\phi_m(\widehat{\chi}_m^1,\widehat{\chi}_m^2)^2].
\end{equation*}
There are four terms of this form, and so multiplying by $1/(16M)$ and
summing gives
\begin{equation}
\label{phi one two by phi one two}
\frac{(\sigma_D^2)^2}{4}\frac{\mathbb P''[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}.
\end{equation}
Combining \eqref{phi one one by phi one one},
\eqref{phi one one by phi one two},
and \eqref{phi one two by phi one two}, we find that~(\ref{correlations 3}) is
\begin{align}
\label{phi by phi}
& \frac{1}{16M}\mathbb E\Bigg[\sum_{l\neq m}
\Big(\phi_l(\chi_l^{i[1],1},\chi_l^{i[1],1})+
\phi_l(\chi_l^{i[1],2},\chi_l^{i[1],2})+
2\phi_l(\chi_l^{i[1],1},\chi_l^{i[1],2})\Big) \nonumber\\
& \qquad \times
\Big(\phi_m(\chi_m^{i[1],1},\chi_m^{i[1],1})+
\phi_m(\chi_m^{i[1],2},\chi_m^{i[1],2})+
2\phi_m(\chi_m^{i[1],1},\chi_m^{i[1],2})\Big)
\Bigg|i[1]=i[2], Z^{i[1]}\Bigg] \nonumber \\
& = -\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}\Big(
\frac{\iota\sigma_{ADI}}{4}
+\frac{\iota\sigma_D^2}{2}\Big)
+\frac{\mathbb P''[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}\Big(\frac{\sigma_{ADI}^2}{16}
+\frac{\sigma_{ADI}\sigma_D^2}{4}+\frac{(\sigma_D^2)^2}{4}\Big)
+{\mathcal O}\Big(\frac{1}{\sqrt{M}}\Big).
\end{align}
Adding~(\ref{diagonal terms A+D squared same parent}), (\ref{eta by eta}),
(\ref{eta by phi}), and~(\ref{phi by phi}) yields $\mathbb E[({\cal A}^i+{\cal D}^i)^2]$,
and subtracting the square
of~(\ref{mean gen one identical parents}), we obtain
\begin{align}
\mathtt{Var}\big(&{\cal A}^i+{\cal D}^i|i[1]=i[2], Z^{i[1]}\big)\nonumber\\
&=
\sigma_A^2+\frac{1}{8}\left(\sigma_{DI}^2+\iota^*\right)
+\frac{1}{4}\sigma_D^2+\frac{1}{2}\sigma_{ADI}
-\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}
\left\{\iota\sigma_A^2+\frac{\iota\sigma_{ADI}}{4}+\frac{\iota\sigma_D^2}{2}
\right\} \nonumber \\
&\quad + \frac{\mathbb P''[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}
\left\{\big(\sigma_A^2\big)^2+\frac{\sigma_A^2\sigma_{ADI}}{2}
+\sigma_A^2\sigma_D^2
+\frac{\big(\sigma_D^2\big)^2}{4}
+\frac{\sigma_{ADI}^2}{16}+\frac{\sigma_D^2\sigma_{ADI}}{4}\right\} \nonumber
\\
& \quad -\left(\frac{\iota}{2}-
\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}\left(\sigma_A^2+\frac{\sigma_D^2}{2}
+\frac{\sigma_{ADI}}{4}\right)\right)^2
+{\mathcal O}\left(\frac{1}{\sqrt{M}}\right).
\end{align}
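The reduction below is cleaner than it might first appear: the bracket multiplying $\mathbb P''[Z^{i[1]}]/\mathbb P[Z^{i[1]}]$ is exactly the square of $\sigma_A^2+\sigma_D^2/2+\sigma_{ADI}/4$, and the bracket multiplying $\mathbb P'[Z^{i[1]}]/\mathbb P[Z^{i[1]}]$ is $\iota$ times that same combination. This can be checked symbolically; the sketch below is a verification aid only, with variable names of our own choosing.

```python
import sympy as sp

# stand-ins for sigma_A^2, sigma_D^2, sigma_ADI and iota
sA2, sD2, sADI, iota = sp.symbols('sA2 sD2 sADI iota', positive=True)
K = sA2 + sD2/2 + sADI/4  # the combination conditioned out by the parental trait

# bracket multiplying P''[Z]/P[Z] in the conditional variance
bracket2 = sA2**2 + sA2*sADI/2 + sA2*sD2 + sD2**2/4 + sADI**2/16 + sD2*sADI/4
# bracket multiplying P'[Z]/P[Z]
bracket1 = iota*sA2 + iota*sADI/4 + iota*sD2/2

assert sp.expand(bracket2 - K**2) == 0  # a perfect square
assert sp.expand(bracket1 - iota*K) == 0
```

Because of this structure, the linear terms in $\mathbb P'/\mathbb P$ cancel against the cross term in the squared conditional mean.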
Now if we substitute the Gaussian density for $Z^{i[1]}$, observing that
$$
\frac{\mathbb P''[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}
-\left(\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}\right)^2=-\frac{1}{\sigma_A^2+\sigma_D^2},$$
we see that the variance reduces to
\begin{equation}
\label{cond variance gen one same parent}
-\frac{\iota^2}{4}-\frac{1}{\sigma_A^2+\sigma_D^2}\left(\sigma_A^2+
\frac{\sigma_D^2}{2}+\frac{\sigma_{ADI}}{4}\right)^2
+\sigma_A^2+\frac{1}{8}\left(\sigma_{DI}^2+\iota^*\right)
+\frac{1}{4}\sigma_D^2+\frac{1}{2}\sigma_{ADI}
+{\mathcal O}\left(\frac{1}{\sqrt{M}}\right).
\end{equation}
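The Gaussian identity invoked above is just the statement that $\mathbb P''/\mathbb P-(\mathbb P'/\mathbb P)^2=(\log\mathbb P)''$, which for a normal density is the constant $-1/\sigma^2$. A symbolic check (our notation, with $\sigma^2$ playing the role of $\sigma_A^2+\sigma_D^2$):

```python
import sympy as sp

z, mu = sp.symbols('z mu')
s = sp.symbols('sigma', positive=True)  # sigma^2 stands for sigma_A^2 + sigma_D^2

# Gaussian density with mean mu and variance sigma^2
p = sp.exp(-(z - mu)**2 / (2*s**2)) / sp.sqrt(2*sp.pi*s**2)

# P''/P - (P'/P)^2 = (log P)'' = -1/sigma^2, independently of z
ratio = sp.diff(p, z, 2)/p - (sp.diff(p, z)/p)**2
assert sp.simplify(ratio + 1/s**2) == 0
```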
Again, that was a lot of work to recover exactly the expression that we
expected from conditioning the multivariate normal random variable
$\big(({\cal A}^i+{\cal D}^i), Z^{i[1]}\big)$ on its second argument. However, in the
process, we have identified where the normal approximation to the
conditioned process will break down.
The bounds that we have obtained will be poor if the trait value of either
parent is too extreme, or if the pedigree is too inbred (as a
result of which
the variance of trait values will be small and
inbreeding depression may be high).
Of course, we have not proved that the conditional distribution of
$({\cal A}^i+{\cal D}^i)$ converges to a normal; we have just checked that the first two
moments are asymptotically
what we would expect. We defer the proof of normality until we
have calculated the conditional variance of $({\cal A}^i+{\cal D}^i)$ in the (much simpler)
case of two distinct parents.
\subsection*{Conditional variance of $({\cal A}^i+{\cal D}^i)$, generation one, distinct parents}
If the parents are distinct, then the expressions are much simpler.
First
\begin{align*}
& \frac{1}{4M}\mathbb E\left[\left.\sum_{l=1}^M\left(\eta_l(\chi_l^{i[1],1})+
\eta_l(\chi_l^{i[1],2}) +\eta_l(\chi_l^{i[2],1}) +\eta_l(\chi_l^{i[2],2})
\right)^2\right|i[1]\neq i[2], Z^{i[1]}, Z^{i[2]}\right]
\\
&=\frac{1}{M}\sum_{l=1}^M\mathbb E\left[\eta_l(\widehat{\chi}_l)^2\right]
+{\mathcal O}\left(\frac{1}{\sqrt{M}}\right)
= \frac{\sigma_A^2}{2}
+{\mathcal O}\left(\frac{1}{\sqrt{M}}\right).
\end{align*}
Next
\begin{align*}
&\frac{1}{16M}\mathbb E\Bigg[\left.\sum_{l=1}^M\left(
\phi_l(\chi_l^{i[1],1}, \chi_l^{i[2],1})
+\phi_l(\chi_l^{i[1],1}, \chi_l^{i[2],2})
+\phi_l(\chi_l^{i[1],2}, \chi_l^{i[2],1})
+\phi_l(\chi_l^{i[1],2}, \chi_l^{i[2],2})\right)^2\right| \\
& \qquad \qquad \qquad \qquad i[1]\neq i[2], Z^{i[1]}, Z^{i[2]}\Bigg] \\
&=
\frac{1}{4M}\sum_{l=1}^M\mathbb E\left[\phi_l(\widehat{\chi}^1_l,
\widehat{\chi}^2_l)^2\right]
+{\mathcal O}\left(\frac{1}{\sqrt{M}}\right)\\
& =\frac{1}{4}\sigma_D^2
+{\mathcal O}\left(\frac{1}{\sqrt{M}}\right).
\end{align*}
Finally,
\begin{align*}
&\frac{1}{4M}\mathbb E\Bigg[\sum_{l=1}^M
\left(\eta_l(\chi_l^{i[1],1})+
\eta_l(\chi_l^{i[1],2}) +\eta_l(\chi_l^{i[2],1}) +\eta_l(\chi_l^{i[2],2})
\right)
\\
& \times
\left(
\phi_l(\chi_l^{i[1],1}, \chi_l^{i[2],1})
+\phi_l(\chi_l^{i[1],1}, \chi_l^{i[2],2})
+\phi_l(\chi_l^{i[1],2}, \chi_l^{i[2],1})
+\phi_l(\chi_l^{i[1],2}, \chi_l^{i[2],2})\right)\Bigg|
i[1]\neq i[2], Z^{i[1]}, Z^{i[2]}
\Bigg] \\
& =
{\mathcal O}\left(\frac{1}{\sqrt{M}}\right).
\end{align*}
We now turn to the off-diagonal terms. We need to be able to calculate
the conditional expectation of
\begin{multline}
\label{off-diag term}
\Bigg[\frac{1}{2}
\left(\eta_l(\chi_l^{i[1],1})+
\eta_l(\chi_l^{i[1],2}) +\eta_l(\chi_l^{i[2],1}) +\eta_l(\chi_l^{i[2],2})
\right)
\\
+\frac{1}{4}
\left(
\phi_l(\chi_l^{i[1],1}, \chi_l^{i[2],1})
+\phi_l(\chi_l^{i[1],1}, \chi_l^{i[2],2})
+\phi_l(\chi_l^{i[1],2}, \chi_l^{i[2],1})
+\phi_l(\chi_l^{i[1],2}, \chi_l^{i[2],2})\right)\Bigg]
\\
\times
\Bigg[\frac{1}{2}
\left(\eta_m(\chi_m^{i[1],1})+
\eta_m(\chi_m^{i[1],2}) +\eta_m(\chi_m^{i[2],1}) +\eta_m(\chi_m^{i[2],2})
\right)
\\ +\frac{1}{4}
\left(
\phi_m(\chi_m^{i[1],1}, \chi_m^{i[2],1})
+\phi_m(\chi_m^{i[1],1}, \chi_m^{i[2],2})
+\phi_m(\chi_m^{i[1],2}, \chi_m^{i[2],1})
+\phi_m(\chi_m^{i[1],2}, \chi_m^{i[2],2})\right)\Bigg]
\end{multline}
given the trait values in the (unrelated) parents $i[1]$ and $i[2]$.
Because the parents are distinct, and
they are in generation zero, as
in~(\ref{independence of trait values})
in our calculation of the
conditional mean, we can exploit the fact that
the trait values $Z^{i[1]}$ and $Z^{i[2]}$ are
independent so that
the joint probability that
$$\big(\chi_l^{i[1],1}, \chi_l^{i[1],2}, \chi_m^{i[1],1},
\chi_m^{i[1],2}\big)=(x,x',y,y'),$$
conditional on $Z^{i[1]}, Z^{i[2]}$
is just the same as if we only
condition on $Z^{i[1]}$.
Recalling~(\ref{independence of trait values}),
we can calculate the conditional expectation of~(\ref{off-diag term})
using~(\ref{two loci}).
None of the alleles at either locus are identical
by descent, and so integrating against the term of order $1/\sqrt{M}$ in the
Taylor expansion in~(\ref{two loci}) gives zero, but since we are calculating the conditional
expectation of ${\cal O}(M^2)$
terms, each of which is of order $1/M$,
we can expect to see a contribution from the term of order $1/M$.
All the terms involving the dominance components vanish, as do those
terms involving only one copy of the additive component at one of the loci.
In total we find that the conditional expectation of~\eqref{off-diag term}
is
$$\frac{1}{M}\mathbb E[\eta_l(\widehat{\chi}_l)^2]\mathbb E[\eta_m(\widehat{\chi}_m)^2]
\left\{\frac{\mathbb P''[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}+
\frac{\mathbb P''[Z^{i[2]}]}{\mathbb P[Z^{i[2]}]}
+2\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}\frac{\mathbb P'[Z^{i[2]}]}{\mathbb P[Z^{i[2]}]}
\right\}.$$
Summing over loci (and noting that we may include the diagonal terms and
only incur an error of order $1/M$), we find that,
in the case of different parents, the variance of the shared terms ${\cal A}^i+
{\cal D}^i$,
conditional on the trait values of the parents is
\begin{multline}
\frac{1}{2}\sigma_A^2
+\frac{1}{4}\sigma_D^2
-\left(\left(\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}
+\frac{\mathbb P'[Z^{i[2]}]}{\mathbb P[Z^{i[2]}]}\right)\frac{\sigma_A^2}{2}\right)^2
\\+
\left\{\frac{\mathbb P''[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}+
\frac{\mathbb P''[Z^{i[2]}]}{\mathbb P[Z^{i[2]}]}
+2\frac{\mathbb P'[Z^{i[1]}]}{\mathbb P[Z^{i[1]}]}\frac{\mathbb P'[Z^{i[2]}]}{\mathbb P[Z^{i[2]}]}
\right\}\Big(\frac{\sigma_A^2}{2}\Big)^2
+{\mathcal O}\left(\frac{1}{\sqrt{M}}\right).
\end{multline}
Once again we see that if we approximate the distribution of $Z^{i[1]}$
and $Z^{i[2]}$ by that of independent normal random variables with mean
$\bar{z}_0$ and variance $\sigma_A^2+\sigma_D^2$, most of these terms
cancel and we are left with
$$\frac{\sigma_A^2}{2}+\frac{\sigma_D^2}{4}
-\frac{\sigma_A^4}{2(\sigma_A^2+\sigma_D^2)},$$
exactly as predicted by Theorem~\ref{conditioning multivariate normals}.
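The cancellation just described is elementary but worth recording. In the sketch below (our variable names), `a1` and `a2` stand for $\mathbb P'[Z^{i[j]}]/\mathbb P[Z^{i[j]}]$ at the two parental traits, and the Gaussian identity $\mathbb P''/\mathbb P=(\mathbb P'/\mathbb P)^2-1/(\sigma_A^2+\sigma_D^2)$ is substituted for each parent:

```python
import sympy as sp

sA2, sD2 = sp.symbols('sA2 sD2', positive=True)
a1, a2 = sp.symbols('a1 a2')          # P'[Z^{i[j]}]/P[Z^{i[j]}], j = 1, 2
v = sA2 + sD2                          # variance of each parental trait
b1, b2 = a1**2 - 1/v, a2**2 - 1/v      # P''/P for a Gaussian density

# the conditional variance before substituting the Gaussian density
var = sA2/2 + sD2/4 - ((a1 + a2)*sA2/2)**2 + (b1 + b2 + 2*a1*a2)*(sA2/2)**2
# the claimed reduction
target = sA2/2 + sD2/4 - sA2**2/(2*v)
assert sp.simplify(var - target) == 0
```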
\subsubsection*{The general case}
So far we have only dealt with generation one, where expressions
are simplified by the fact that $Z^{i[1]}$, $Z^{i[2]}$ are
either identical or independent. More generally, we can perform
entirely analogous calculations using
Lemma~\ref{troublesome lemma with two parents}
in place of Lemma~\ref{replacement for troublesome equation}.
In the interests of sanity, we omit the details.
\section{Convergence to normal of $({\cal A}^i+{\cal D}^i)$ conditional on parental traits}
\label{convergence to normal}
\begin{notn}
\label{notation F1}
We remind the reader that Notation~\ref{noise in notation} remains in force.
Moreover, since the environmental noise $E^i$ is assumed to be shared by
all offspring of the couple $i[1]$, $i[2]$, with this convention we can also
assume that the distribution of
${\cal A}^i+{\cal D}^i$ has a smooth density.
\end{notn}
We have verified that the first two moments of the conditional
distribution converge to the limits that we would expect if the limit
of $({\cal A}^i+{\cal D}^i)$ were multivariate normal, but this is not sufficient.
To prove that the conditional distribution is indeed asymptotically normal, we appeal
to Proposition~\ref{chen propn}, or rather Corollary~\ref{chen corollary}.
We perform the calculation in the case of identical parents, the case
of distinct parents being analogous (and less surprising). For
definiteness, we consider only generation one.
The same argument will work in any generation, but the calculations
become considerably more involved,
c.f.~Lemma~\ref{troublesome lemma with two parents}.
Recall that ${\cal A}^i+{\cal D}^i=\sum_{l=1}^M\Phi(l)/\sqrt{M}$ with $\Phi$ defined
in~(\ref{defn of Phil}). Since we are considering the case of a single parent, $Z^{i[1]}=Z^{i[2]}$.
We shall write $\Phi_l(\chi_l^1,\chi_l^2)$
when we need to specify the alleles at locus $l$ in $Z^{i[1]}$ on which
this is evaluated.
Writing $W=\sum_{l=1}^M\Phi(l)/\sqrt{M}$ (as a shorthand for ${\cal A}^i+
{\cal D}^i$),
we write
$$\widehat{W}=\frac{1}{\sqrt{M}}\sum_{l=1}^M \Phi(l)\Big|
i[1]=i[2], Z^{i[1]};$$
that is, $\widehat{W}$ is the random variable $W$ in the $i$th individual,
conditional on it being produced by selfing and on the parental trait value.
This is the quantity that we should like to prove is normally distributed.
The first step is to find a suitable exchangeable pair.
We write $\widehat{\Phi}(l)$ for the conditioned version of $\Phi(l)$.
For each $l\in\{1,\ldots ,M\}$, let $\widehat{\Phi}^*(l)$ be
an independent draw from the conditional distribution
of $\widehat{\Phi}(l)$ given the sum of $\widehat{\Phi}(m)$ over all $m\neq l$;
that is, in an obvious notation,
$\widehat{\Phi}^*(l)$ has the same distribution as
$$\Phi(l)\Big|\sum_{m\neq l}\widehat{\Phi}(m), i[1]=i[2], Z^{i[1]}.$$
Now let $L$ be a uniform random variable on $\{1,\ldots ,M\}$ and
define
$$\widehat{W}'=\widehat{W}-
\frac{\big(\widehat{\Phi}(L)-\widehat{\Phi}^*(L)\big)}{\sqrt{M}}.$$
Then $(\widehat{W},\widehat{W}')$ is an exchangeable pair.
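To see the exchangeable-pair mechanism in the simplest possible setting, the following simulation (ours, not part of the paper) replaces the conditioned summands $\widehat{\Phi}(l)$ by iid mean-zero variables and resamples a uniformly chosen coordinate independently; in that toy analogue $\mathbb E[\widehat{W}-\widehat{W}'|\widehat{W}]=\widehat{W}/M$ exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 50, 100_000
X = rng.normal(size=(N, M))               # iid stand-ins for the summands
W = X.sum(axis=1) / np.sqrt(M)
L = rng.integers(0, M, size=N)            # uniformly chosen coordinate
Xstar = rng.normal(size=N)                # independent redraw at coordinate L
Wp = W - (X[np.arange(N), L] - Xstar) / np.sqrt(M)

# regressing W - W' on W recovers the slope 1/M
slope = np.polyfit(W, W - Wp, 1)[0]
assert abs(slope - 1/M) < 4e-3
```

In the setting of the paper the summands are neither independent nor unconditioned, so, as discussed in the remark below, the naive choice $\lambda=1/M$ is not adequate there.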
Observe that
\begin{eqnarray}
\nonumber
\mathbb E\big[\widehat{W}-\widehat{W}'|\widehat{W}\big]&=&
\mathbb E\left[\left.\frac{1}{\sqrt{M}}\frac{1}{M}\sum_{l=1}^M
\big(\widehat{\Phi}(l)-\widehat{\Phi}^*(l)\big)
\right|\widehat{W}\right]
\\
\nonumber
&=&\frac{1}{M}\widehat{W}-\frac{1}{\sqrt{M}}\frac{1}{M}\sum_{l=1}^M
\mathbb E\big[\widehat{\Phi}^*(l)\big|\widehat{W}\big]
\\
\label{defn of first remainder}
&:=&\frac{1}{M}\widehat{W}-
T(\widehat{W}).
\end{eqnarray}
\begin{remark}
We wish to apply Corollary~\ref{chen corollary}.
Our first instinct is to write $\mathbb E[\widehat{W}'|\widehat{W}]=\widehat{W}(1-1/M)+T(\widehat{W})$
and take $\lambda =1/M$ in~(\ref{exchangeable pair requirement}). This will not suffice, as, with
this choice, the first term on the right of~(\ref{unnormalised difference})
will be too big. As we shall see, the resolution is to take a larger value of $\lambda$ which
captures the dependence of $\widehat{W}'$ on $\widehat{W}$.
\end{remark}
Before we can apply Corollary~\ref{chen corollary},
we need to investigate $T$. The first step is to establish the distribution of
$\chi_l^1, \chi_l^2$ conditional on $i[1]=i[2]$, $Z^{i[1]}$ (which we
shall for the rest of this section abbreviate to $Z$) and $W_{-l}$.
Keeping in mind Notation~\ref{notation F1}, and recalling that $(Z,W)$ is
shorthand for
$(Z^{i[1]}, {\cal A}^i+{\cal D}^i)$, we write $\mathbb P[z,w]$ for the
density function of $(Z,W)$ evaluated at $(z,w)$ and $\mathbb P_z[z,w]$, $\mathbb P_w[z,w]$, and so on,
for the corresponding partial derivatives.
The proof of the following lemma mirrors those of Appendix~\ref{key lemmas}.
\begin{lemma}
\label{Z-l and W-l}
The (unconditional) distribution of $(Z_{-l}, W_{-l})$ can be
written as
\begin{align*}
\mathbb P[&Z_{-l}=z, W_{-l}=w] \nonumber\\
& =\mathbb P[z,w]+\frac{1}{\sqrt{M}}\mathbb E[\Psi_l]\mathbb P_z[z,w]
+\frac{1}{\sqrt{M}}\mathbb E[\Phi_l]\mathbb P_w[z,w] +\frac{1}{M}\Big(\mathbb E[\Psi_l]^2-\mathbb E[\Psi_l^2]\Big)\mathbb P_{zz}[z,w] \nonumber
\\
& \quad +\frac{2}{M}\Big(\mathbb E[\Phi_l]\mathbb E[\Psi_l]-\mathbb E[\Phi_l\Psi_l]\Big)\mathbb P_{zw}[z,w]
+\frac{1}{M}\Big(\mathbb E[\Phi_l]^2-\mathbb E[\Phi_l^2]\Big)\mathbb P_{ww}[z,w] \nonumber\\
& \quad +\frac{1}{2M}\mathbb E[\Psi_l^2]\mathbb P_{zz}[z,w]
+\frac{1}{M}\mathbb E[\Phi_l\Psi_l]\mathbb P_{zw}[z,w]
+\frac{1}{2M}\mathbb E[\Phi_l^2]\mathbb P_{ww}[z,w]
+\mathcal{O}\Big(\frac{1}{M^{3/2}}\Big).
\end{align*}
\end{lemma}
\noindent
\newpage
{\bf Proof}
The key, as usual, is Taylor's Theorem.
\begin{align*}
& \mathbb P[Z_{-l}=z, W_{-l}=w] \\
&=\int\int\mathbb P\bigg[\chi_l^1=x, \chi_l^2=x',
Z=z+\frac{1}{\sqrt{M}}\Psi_l(x,x'), W=w+\frac{1}{\sqrt{M}}\Phi_l(x,x')\bigg]dxdx'\\
&=
\int\int \mathbb P[\chi_l^1=x,\chi_l^2=x', Z=z, W=w]dxdx'
\\
& \quad +\frac{1}{\sqrt{M}}\int\int\Psi_l(x,x')\frac{\partial}{\partial z}
\mathbb P[\chi_l^1=x,\chi_l^2=x', Z=z, W=w]dxdx'
\\
& \quad+\frac{1}{\sqrt{M}}\int\int\Phi_l(x,x')\frac{\partial}{\partial w}
\mathbb P[\chi_l^1=x,\chi_l^2=x', Z=z, W=w]dxdx'
\\
& \quad+\frac{1}{2M}\int\int\Psi_l^2(x,x')\frac{\partial^2}{\partial z^2}
\mathbb P[\chi_l^1=x,\chi_l^2=x', Z=z, W=w]dxdx'
\\
& \quad+\frac{1}{M}\int\int\Psi_l(x,x')\Phi_l(x,x')\frac{\partial^2}{\partial z\partial w}
\mathbb P[\chi_l^1=x,\chi_l^2=x', Z=z, W=w]dxdx'
\\
& \quad+\frac{1}{2M}\int\int\Phi_l^2(x,x')\frac{\partial^2}{\partial w^2}
\mathbb P[\chi_l^1=x,\chi_l^2=x', Z=z, W=w]dxdx'
+\mathcal{O}\Big(\frac{1}{M^{3/2}}\Big).
\end{align*}
Now write
\begin{multline*}
\mathbb P[\chi_l^1=x,\chi_l^2=x', Z=z, W=w]=
\mathbb P[\chi_l^1=x,\chi_l^2=x']\\
\times
\mathbb P\left[Z_{-l}=z-\frac{1}{\sqrt{M}}\Psi_l(x,x'),
W_{-l}=w-\frac{1}{\sqrt{M}}\Phi_l(x,x')\right].
\end{multline*}
Using the notation $\mathbb P[x,x',z,w]:=
\mathbb P[\chi_l^1=x,\chi_l^2=x', Z=z, W=w]$
we substitute from above and apply Taylor's Theorem to obtain,
\begin{multline*}
\mathbb P[x, x',z, w]
=
\mathbb P[\chi_l^1=x,\chi_l^2=x']
\\
\times
\int\int\mathbb P\Big[y, y',
z-\frac{1}{\sqrt{M}}\Psi_l(x,x')+\frac{1}{\sqrt{M}}\Psi_l(y,y'),
w-\frac{1}{\sqrt{M}}\Phi_l(x,x')+\frac{1}{\sqrt{M}}\Phi_l(y,y')\Big]dydy'
\\
=
\mathbb P[\chi_l^1=x,\chi_l^2=x']
\Bigg\{
\mathbb P[Z=z, W=w]
+\frac{1}{\sqrt{M}}\int\int\Big(\Psi_l(y,y')-\Psi_l(x,x')\Big)
\frac{\partial}{\partial z}\mathbb P[y,y',z,w]dydy'
\\
+\frac{1}{\sqrt{M}}\int\int\Big(\Phi_l(y,y')-\Phi_l(x,x')\Big)
\frac{\partial}{\partial w}\mathbb P[y,y',z,w]dydy'
+\mathcal{O}\Big(\frac{1}{M}\Big)\Bigg\}.
\end{multline*}
Differentiating with respect to $z$ (and assuming sufficient regularity),
\begin{multline*}
\frac{\partial}{\partial z}
\mathbb P[\chi_l^1=x,\chi_l^2=x', Z=z, W=w]
=
\mathbb P[\chi_l^1=x,\chi_l^2=x']
\\
\times
\Bigg\{
\frac{\partial}{\partial z}\mathbb P[Z=z, W=w]
+\frac{1}{\sqrt{M}}\int\int\Big(\Psi_l(y,y')-\Psi_l(x,x')\Big)
\frac{\partial^2}{\partial z^2}\mathbb P[y,y',z,w]dydy'
\\
+\frac{1}{\sqrt{M}}\int\int\Big(\Phi_l(y,y')-\Phi_l(x,x')\Big)
\frac{\partial^2}{\partial z\partial w}\mathbb P[y,y',z,w]dydy'
+\mathcal{O}\Big(\frac{1}{M}\Big)\Bigg\},
\end{multline*}
and similarly
\begin{multline*}
\frac{\partial}{\partial w}
\mathbb P[\chi_l^1=x,\chi_l^2=x', Z=z, W=w]
=
\mathbb P[\chi_l^1=x,\chi_l^2=x']
\\
\times
\Bigg\{
\frac{\partial}{\partial w}\mathbb P[Z=z, W=w]
+\frac{1}{\sqrt{M}}\int\int\Big(\Psi_l(y,y')-\Psi_l(x,x')\Big)
\frac{\partial^2}{\partial z\partial w}\mathbb P[y,y',z,w]dydy'
\\
+\frac{1}{\sqrt{M}}\int\int\Big(\Phi_l(y,y')-\Phi_l(x,x')\Big)
\frac{\partial^2}{\partial w^2}\mathbb P[y,y',z,w]dydy'
+\mathcal{O}\Big(\frac{1}{M}\Big)\Bigg\}.
\end{multline*}
As in the proof of Lemma~\ref{replacement for troublesome equation}
we only require the second derivatives to leading order:
$$\frac{\partial^2}{\partial z^2}
\mathbb P[\chi_l^1=x,\chi_l^2=x', Z=z, W=w]
=
\mathbb P[\chi_l^1=x,\chi_l^2=x']
\frac{\partial^2}{\partial z^2}\mathbb P[Z=z, W=w]
+\mathcal{O}\Big(\frac{1}{\sqrt{M}}\Big),$$
with similar expressions for the other second partial derivatives.
Substituting back into the first display yields the result.
\hfill $\Box$
\begin{lemma}
The conditional distribution of $\chi_l^1, \chi_l^2$ given $Z$ and $W_{-l}$
is given by
\begin{multline}
\label{distbn alleles in Phistar}
\mathbb P[\chi_l^1=x,\chi_l^2=x'|Z=z, W_{-l}=w_{-l}]
\\
=\mathbb P[\chi_l^1=x,\chi_l^2=x']
\Bigg\{
1-\frac{\Psi_l(x,x')}{\sqrt{M}\mathbb P[z,w_{-l}]}
\mathbb P_z[z, w_{-l}]
+\frac{\mathbb E[\Psi_l]}{\sqrt{M}\mathbb P[z, w_{-l}]}
\mathbb P_z[z, w_{-l}]
\Bigg\}
+\mathcal{O}\Big(\frac{1}{M}\Big).
\end{multline}
\end{lemma}
\noindent
{\bf Proof}
This is just an application of Bayes' rule:
\begin{align}
\mathbb P[\chi_l^1=x,\chi_l^2=x'|Z=z, W_{-l}=w_{-l}]
&=
\frac{\mathbb P[Z=z, W_{-l}=w_{-l}| \chi_l^1=x,\chi_l^2=x']}{
\mathbb P[Z=z, W_{-l}=w_{-l}]}
\mathbb P[\chi_l^1=x,\chi_l^2=x']
\nonumber
\\
&=
\frac{\mathbb P[Z_{-l}=z-\frac{\Psi_l(x,x')}{\sqrt{M}}, W_{-l}=w_{-l}]}{
\mathbb P[Z=z, W_{-l}=w_{-l}]}
\mathbb P[\chi_l^1=x,\chi_l^2=x']
\label{cond dist alleles}
\end{align}
Using Lemma~\ref{Z-l and W-l} and Taylor's Theorem,
\begin{align*}
&\mathbb P\bigg[Z_{-l}=z-\frac{\Psi_l(x,x')}{\sqrt{M}}, W_{-l}=w_{-l}\bigg]
\\
&=\mathbb P\bigg[Z=z-\frac{\Psi_l(x,x')}{\sqrt{M}}, W=w_{-l}\bigg]
+\frac{1}{\sqrt{M}}\mathbb E[\Psi_l]\mathbb P_z\bigg[z-\frac{\Psi_l(x,x')}{\sqrt{M}}, w_{-l}\bigg] \\
& \qquad \qquad
+\frac{1}{\sqrt{M}}\mathbb E[\Phi_l]\mathbb P_w\bigg[z-\frac{\Psi_l(x,x')}{\sqrt{M}}, w_{-l}\bigg]
\\
&=
\mathbb P[Z=z, W=w_{-l}]-\frac{\Psi_l(x,x')}{\sqrt{M}}
\mathbb P_z[z,w_{-l}]
+\frac{\mathbb E[\Psi_l]}{\sqrt{M}}
\mathbb P_z[z,w_{-l}]
+\frac{\mathbb E[\Phi_l]}{\sqrt{M}}
\mathbb P_w[z,w_{-l}] +
\mathcal{O}\Big(\frac{1}{M}\Big).
\end{align*}
When we integrate this expression with respect to $x$ and $x'$, to
calculate the denominator in~(\ref{cond dist alleles})
we recover
$\mathbb P[z,w_{-l}]
+\frac{\mathbb E[\Phi_l]}{\sqrt{M}} \mathbb P_w[z,w_{-l}] +
\mathcal{O}\big(\frac{1}{M}\big)$
(since the
expectation of $\Psi_l$ cancels).
Expanding the ratio in~(\ref{cond dist alleles}) in powers of $1/\sqrt{M}$, the
terms involving $\mathbb E[\Phi_l]$ cancel, and the result follows. \hfill $\Box$
Finally we are in a position to calculate the quantity
$T(\widehat{W})$ that was defined in~(\ref{defn of first remainder}).
Recall that $\widehat{\Phi}^*(l)$ is an independent draw from the
conditional distribution of $\widehat{\Phi}(l)$ given $W_{-l}$ and $Z$,
so using~(\ref{distbn alleles in Phistar}),
\begin{equation}
\label{cond expn Phistar}
\mathbb E[\widehat{\Phi}^*(l)|\widehat{W}]
=\mathbb E[\Phi_l]-\Big(\mathbb E[\Phi_l\Psi_l]-\mathbb E[\Phi_l]\mathbb E[\Psi_l]\Big)
\frac{1}{\sqrt{M}}\mathbb E\left[\left.\frac{1}{\mathbb P[Z,W_{-l}]}
\frac{\partial}{\partial z}\mathbb P[Z,W_{-l}]\right| W=\widehat{W}\right] +
\mathcal{O}\Big(\frac{1}{M}\Big).
\end{equation}
Conditioning only on $i[1]=i[2]$,
using the calculations in Appendix~\ref{QG derivations}
and equation~(\ref{conditioned covariances}),
by an
application of Theorem~\ref{rinott clt},
(up to an error of order $1/\sqrt{M}$)
the joint distribution of $({\cal A}^i+{\cal D}^i, Z^{i[1]})$ is approximately
that of a bivariate
normal.
We will need that for a bivariate normal distribution with mean vector
$(\mu_Z,\mu_W)$ and covariance matrix
$$\begin{pmatrix}
\sigma_{Z}^2 & \mathtt{Cov}(Z,W)\\
\mathtt{Cov}(Z,W) & \sigma_W^2
\end{pmatrix}$$
the density function takes the form
$$p(z,w)\!=\!\frac{1}{2\pi\sigma_Z\sigma_W\sqrt{1-\rho^2}}
\exp\!\bigg\{\!-\frac{1}{2(1-\rho^2)}\left(\frac{(z-\mu_Z)^2}{\sigma_Z^2}
-\frac{2\rho (z-\mu_Z)(w-\mu_W)}{\sigma_Z\sigma_W}+
\frac{(w-\mu_W)^2}{\sigma_W^2}\right)\!\bigg\},$$
where $\rho=\mathtt{Cov}(Z,W)/(\sigma_Z\sigma_W)$. Differentiating, we find
\begin{equation}
\label{bivariate normal calculation}
\frac{1}{p(z,w)}\frac{\partial}{\partial z}p(z,w)=
\frac{1}{(1-\rho^2)}\left\{\frac{\rho(w-\mu_W)}{\sigma_Z\sigma_W}
-\frac{(z-\mu_Z)}{\sigma_Z^2}\right\}.
\end{equation}
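As an illustrative sanity check (not part of the original argument), the closed form for the score $\frac{1}{p}\frac{\partial p}{\partial z}$ can be compared with a centred finite difference of the bivariate normal density; all parameter values below are arbitrary.

```python
import math

# Parameters of an illustrative bivariate normal (arbitrary choices).
mu_z, mu_w = 0.3, -0.2
sigma_z, sigma_w, rho = 1.2, 0.8, 0.5

def p(z, w):
    """Bivariate normal density, written out as in the text."""
    q = ((z - mu_z) ** 2 / sigma_z ** 2
         - 2 * rho * (z - mu_z) * (w - mu_w) / (sigma_z * sigma_w)
         + (w - mu_w) ** 2 / sigma_w ** 2)
    norm = 2 * math.pi * sigma_z * sigma_w * math.sqrt(1 - rho ** 2)
    return math.exp(-q / (2 * (1 - rho ** 2))) / norm

def score_z(z, w):
    """Closed form for (1/p) dp/dz from the displayed equation."""
    return (rho * (w - mu_w) / (sigma_z * sigma_w)
            - (z - mu_z) / sigma_z ** 2) / (1 - rho ** 2)

z0, w0, h = 0.7, 0.1, 1e-6
fd = (p(z0 + h, w0) - p(z0 - h, w0)) / (2 * h * p(z0, w0))
print(abs(fd - score_z(z0, w0)) < 1e-6)  # finite difference matches the closed form
```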
Recall the definition of $T$ from~(\ref{defn of first remainder}).
Multiplying~(\ref{cond expn Phistar}) by $1/\sqrt{M}$,
observing that $\mathbb E[W_{-l}|W,Z]=W+\mathcal{O}(1/\sqrt{M})$ (and, since $\Phi_l$ is uniformly bounded
independently of $l$, the
error is bounded independently of $l$),
and then averaging
out over $l$ as in the definition of $T(\widehat{W})$, on
substituting~(\ref{bivariate normal calculation}) and $\mathtt{Cov}(Z,W)=\rho\sigma_Z\sigma_W$,
we find
\begin{align}
\label{TWHat}
T(\widehat{W})& =
\frac{1}{M}\mathbb E[W]+\frac{\mathtt{Cov}(Z,W)}{M}
\left\{\frac{Z-\mathbb E[Z]}{\sigma_Z^2(1-\rho^2)}-\frac{\rho}{1-\rho^2}
\frac{\widehat{W}-\mathbb E[W]}{\sigma_Z\sigma_W}\right\}
+\mathcal{O}\Big(\frac{1}{M^{3/2}}\Big) \nonumber\\
& =
\frac{1}{M}\mathbb E[W]+\frac{\rho\sigma_Z\sigma_W}{M}
\left\{\frac{Z-\mathbb E[Z]}{\sigma_Z^2(1-\rho^2)}-\frac{\rho}{1-\rho^2}
\frac{\widehat{W}-\mathbb E[W]}{\sigma_Z\sigma_W}\right\}
+\mathcal{O}\Big(\frac{1}{M^{3/2}}\Big) \nonumber
\\
& =\frac{1}{M}\mathbb E[W]+\frac{1}{M}\rho\frac{\sigma_W}{\sigma_Z}
\frac{Z-\mathbb E[Z]}{1-\rho^2}-\frac{1}{M}\frac{\rho^2}{1-\rho^2}(\widehat{W}-\mathbb E[W])
+\mathcal{O}\Big(\frac{1}{M^{3/2}}\Big).
\end{align}
Using the approximation for the conditional distribution
of $(\chi_l^1, \chi_l^2)$ given $Z$ obtained in Appendix~\ref{key lemmas},
$$\mathbb E[\widehat{W}]=\mathbb E[W|Z]=\mathbb E[W]+\rho\frac{\sigma_W}{\sigma_Z}\big(Z-\mathbb E[Z]\big)
+\mathcal{O}\Big(\frac{1}{\sqrt{M}}\Big),$$
so we can rewrite~(\ref{TWHat}) as
$$T(\widehat{W})=\frac{1}{M}\frac{1}{(1-\rho^2)}\mathbb E[\widehat{W}]-
\frac{1}{M}\frac{\rho^2}{1-\rho^2}\widehat{W}
+\mathcal{O}\Big(\frac{1}{M^{3/2}}\Big).
$$
Substituting in~(\ref{defn of first remainder}),
\begin{equation}
\label{choice of lambda}
\mathbb E[\widehat{W}-\widehat{W}'|\widehat{W}]=\frac{1}{M}\frac{1}{1-\rho^2}
\Big(\widehat{W}-\mathbb E[\widehat{W}]\Big)
+\mathcal{O}\Big(\frac{1}{M^{3/2}}\Big).
\end{equation}
We are going to apply Corollary~\ref{chen corollary} to
$(\widehat{W},\widehat{W}')$ with ${\cal F}=\sigma(\widehat{W})$.
We set $\lambda =1/(M(1-\rho^2))$ and
observe from~(\ref{choice of lambda})
that we may take a remainder term $R$ with $\mathbb E[|R|]$
of order $1/M^{1/2}$ in~(\ref{unnormalised difference}).
Moreover,
$$\widehat{K}_2=\frac{1}{2\lambda}\frac{|\Delta|^3}{2},$$
and so, since by construction $|\Delta|<C/\sqrt{M}$,
$\mathbb E[\widehat{K}_2]$ is also of order at most $1/M^{1/2}$.
Since with these definitions
$$\widehat{K}_1=\frac{M(1-\rho^2)}{2}
\mathbb E\Big[(\widehat{W}-\widehat{W}')^2|\widehat{W}\Big],$$
it remains to control
\begin{equation}
\label{difference from one control}
\mathbb E\left[\left| \sigma_{\widehat{W}}^2-\frac{M(1-\rho^2)}{2}
\mathbb E\Big[(\widehat{W}-\widehat{W}')^2|\widehat{W}\Big]
\right|
\right].
\end{equation}
Again using the results of Appendix~\ref{key lemmas},
$$\sigma_{\widehat{W}}^2=(1-\rho^2)\sigma_W^2
+\mathcal{O}\bigg(\frac{1}{\sqrt{M}}\bigg),$$
(the first term being the conditional variance if the
random variables were distributed exactly as a bivariate normal),
whereas
$$\mathbb E\Big[\mathbb E[(\widehat{W}-\widehat{W}')^2 |\widehat{W}]\Big]
=\mathbb E[(\widehat{W}-\widehat{W}')^2]
=\mathbb E\Big[\frac{1}{M^2}\sum_{l=1}^M (\widehat{\Phi}_l-\widehat{\Phi}_l^*)^2\Big]
=2\frac{1}{M}\sigma_W^2
+\mathcal{O}\bigg(\frac{1}{M^{3/2}}\bigg).$$
(Note that we see the unconditioned $\sigma_W^2$ in this second expression
since it involves only diagonal terms.)
To control~(\ref{difference from one control}),
observe that, by the Cauchy--Schwarz inequality,
\begin{equation}
\mathbb E\left[\left|\mathbb E[M(\widehat{W}-\widehat{W}')^2]-\mathbb E[M(\widehat{W}-\widehat{W}')^2|\widehat{W}]\right|\right]
\leq \mathtt{Var}\left(\mathbb E[M(\widehat{W}-\widehat{W}')^2|\widehat{W}]\right)^{1/2},
\end{equation}
so it suffices
to control
$$\mathtt{Var}\left(\mathbb E[M(\widehat{W}-\widehat{W}')^2|\widehat{W}]\right).$$
In particular, we should like to show that this expression is of order
$\mathcal{O}(1/M)$.
Now we use the standard decomposition of the variance in terms of conditional
expectations (the law of total variance): for two random variables $X$ and $F$,
\begin{eqnarray*}
\mathtt{Var}(X)&=&
\mathbb E\left[\mathbb E[X^2|F]-\big(\mathbb E[X|F]\big)^2
+\big(\mathbb E[X|F]\big)^2\right]-\mathbb E\left[\mathbb E[X|F]\right]^2\\
&=&\mathbb E\big[\mathtt{Var}(X|F)\big] +
\mathtt{Var}\big(\mathbb E[X|F]\big).
\end{eqnarray*}
So
$$\mathtt{Var}(\mathbb E[X|F])=\mathtt{Var}(X)-\mathbb E[\mathtt{Var}(X|F)].$$
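This decomposition can be verified exactly on a small discrete space (purely a sketch; the toy joint law below is an arbitrary choice):

```python
# Exact check of Var(X) = E[Var(X|F)] + Var(E[X|F]) on a toy discrete space.
# Outcomes are pairs (f, x), each with probability 1/6.
outcomes = [(0, 1.0), (0, 2.0), (0, 6.0), (1, 3.0), (1, 4.0), (1, 5.0)]
p = 1.0 / len(outcomes)

def var(vals, probs):
    m = sum(v * q for v, q in zip(vals, probs))
    return sum(q * (v - m) ** 2 for v, q in zip(vals, probs))

xs = [x for _, x in outcomes]
var_x = var(xs, [p] * len(xs))

# Condition on F: group the outcomes by the value of f.
cond_means, cond_vars, weights = [], [], []
for f in {f for f, _ in outcomes}:
    group = [x for g, x in outcomes if g == f]
    w = len(group) * p
    m = sum(group) / len(group)
    v = sum((x - m) ** 2 for x in group) / len(group)
    cond_means.append(m); cond_vars.append(v); weights.append(w)

e_var = sum(w * v for w, v in zip(weights, cond_vars))  # E[Var(X|F)]
var_e = var(cond_means, weights)                        # Var(E[X|F])
print(abs(var_x - (e_var + var_e)) < 1e-12)  # the two sides agree
```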
For us, $X=M(\widehat{W}-\widehat{W}')^2=(\Phi_L-\Phi_L^*)^2$, and
$F=\widehat{W}$,
so
$$\mathtt{Var}(X)=\frac{1}{M}\sum_{l=1}^M\mathbb E[(\Phi_l-\Phi_l^*)^4]
-\left(\frac{1}{M}\sum_{l=1}^M\mathbb E\big[(\Phi_l-\Phi_l^*)^2\big]\right)^2,$$
and we seek
\begin{multline*}
\frac{1}{M}\sum_{l=1}^M\mathbb E[(\Phi_l-\Phi_l^*)^4]
-\left(\frac{1}{M}\sum_{l=1}^M\mathbb E[(\Phi_l-\Phi_l^*)^2]\right)^2
\\
-\frac{1}{M}\sum_{l=1}^M\mathbb E\Big[\mathbb E\big[(\Phi_l-\Phi_l^*)^4 \big|\widehat{W}\big]\Big]
+\mathbb E\Bigg[\left(\frac{1}{M}\sum_{l=1}^M\mathbb E[(\Phi_l-\Phi_l^*)^2|\widehat{W}]
\right)^2\Bigg],
\end{multline*}
where the expectation is with respect to the distribution of $\widehat{W}$.
By the tower property, the terms involving $(\Phi_l-\Phi_l^*)^4$ cancel,
leaving
\begin{equation}
\label{controlling the variance}
\frac{1}{M^2}\sum_{l=1}^M\sum_{m=1}^M\Bigg\{
\mathbb E\Bigg[\mathbb E\big[(\Phi_l-\Phi_l^*)^2|\widehat{W}\big]
\mathbb E\big[(\Phi_m-\Phi_m^*)^2|\widehat{W}\big]\Bigg]
-
\mathbb E[(\Phi_l-\Phi_l^*)^2]
\mathbb E[(\Phi_m-\Phi_m^*)^2]\Bigg\}.
\end{equation}
Expanding
$\mathbb E\big[(\Phi_l-\Phi_l^*)^2|\widehat{W}\big]
\mathbb E\big[(\Phi_m-\Phi_m^*)^2|\widehat{W}\big]$
in an entirely analogous way to~(\ref{cond expn Phistar}),
when we take expectations, using the tower property of conditional
expectations,
the part of the product that is an affine function of $\widehat{W}$
will cancel in~(\ref{controlling the variance}), leaving quadratic (and higher
order) terms, each of which is of order $\mathcal{O}(1/M)$ in the summand.
Overall then~(\ref{controlling the variance}) is $\mathcal{O}(1/M)$,
and applying Corollary~\ref{chen corollary}, the proof that ${\cal A}^i+{\cal D}^i$ is
normal with an error of order $1/\sqrt{M}$ is complete.
\section*{The residuals, generation one}
Proving that $R^i+S^i$ is normal is much simpler. Since Mendelian inheritance
is independent across loci we are able to use Theorem~\ref{rinott clt}
in much the same way as in generation zero. A combination of
Lemma~\ref{replacement for troublesome equation}
and Bayes' rule suffices to show that the variance is
not affected by conditioning on parental trait values, after which
the proof proceeds essentially as in the additive case and so is
omitted.
\section{Generation $t$: accumulation of information}
\label{appendix: accumulation of information}
If we wanted to prove a strict analogue of the results of
Barton et al.~(2017) in the
\nocite{barton/etheridge/veber:2017}
additive case, then we would want to condition not just on the
trait values of the parents, but on the trait values of an
arbitrary collection of individuals in the pedigree. Such a proof
can follow essentially the same line as that above, although the
calculations are considerably longer to write out. The only thing that
must be checked is that we do not accumulate too much
information from knowing those trait values; it is this
that controls for how long the infinitesimal approximation will
remain accurate.
This requires more care
than the additive case of Barton et al.~(2017),
\nocite{barton/etheridge/veber:2017}
so we present the argument here.
Recall that we write ${\cal P}(t)$ for the pedigree up to and
including generation $t$ and $Z(t)$ for the
corresponding vector of trait values of all individuals in ${\cal P}(t)$.
We would like to understand the distribution of the allelic types
$\chi_l^1(j^*),\chi_l^2(j^*)$ at locus $l$ of an individual $j^*$ in generation $t$,
conditional on knowing the trait values of all individuals in the pedigree
up to generation $t-1$.
That is, we would like to estimate
\begin{multline}
\label{thing to bound}
\mathbb P\Big[\left. (\chi_l^1(j^*),\chi_l^2(j^*))=(x,x') \right| {\cal P}(t),
Z(t-1)=\big(z^j\big)_{j\in {\cal P}(t-1)}\Big]
\\
=\frac{\mathbb P\big[Z(t-1)=\big(z^j\big)_{j\in {\cal P}(t-1)}\big|
(\chi_l^1(j^*),\chi_l^2(j^*))=(x,x'), {\cal P}(t)\Big]}
{\mathbb P[Z(t-1) =\big(z^j\big)_{j\in {\cal P}(t-1)}
\big|{\cal P}(t)]}\mathbb P\Big[(\chi_l^1(j^*),\chi_l^2(j^*))=(x,x')\big|
{\cal P}(t)\Big].
\end{multline}
To estimate the numerator in the fraction,
we partition over the possible patterns of
identity at locus~$l$ in the pedigree, conditional on that pedigree; that is we condition on
the values of the Bernoulli random variables that determine Mendelian inheritance at locus $l$
across the pedigree.
We denote this $M_l(t)$ and abuse notation by writing $(M_l^1(j), M_l^2(j))$ for the allelic
states at locus $l$ in individual $j\in {\cal P}(t-1)$ conditional on $M_l(t)$.
More precisely, if $\chi_l^1(j^*)=x$ and $\chi_l^2(j^*)=x'$,
$(M_l^1(j), M_l^2(j))
=(y,y'), (y,x'), (x,y'), (x,x')$ according to whether $j$ is identical
by descent with the chosen individual $j^*$ on neither chromosome, one
chromosome or both chromosomes. We use $\mathbb E_{M_l}$ when we wish to emphasize that we are
taking the expectation
with respect to this quantity.
We proceed as in Lemma~\ref{troublesome lemma with two parents}:
\begin{align*}
\mathbb P\Big[&Z(t-1)=\big(z^j\big)_{j\in {\cal P}(t-1)}
\Big| (\chi_l^1(j^*), \chi_l^2(j^*))=(x,x'),
{\cal P}(t), M_l(t)\Big]
\\
& =\mathbb P\left[Z_{-l}^j =z^j-\frac{1}{\sqrt{M}}
\Psi_l\big(M_l^1(j),M_l^2(j)\big), \ \forall j\in {\cal P}(t-1)\Big|{\cal P}(t)\right]
\\
& = \mathbb E\bigg[\mathbb P\left[Z^j=z^j-\frac{1}{\sqrt{M}}
\Psi_l\big(M_l^1(j),M_l^2(j)\big)+\frac{1}{\sqrt{M}}\Psi_l\big(\chi_l^1(j),\chi_l^2(j)\big),\ \forall j\in {\cal P}(t-1) \Big|{\cal P}(t)\right]\bigg],
\end{align*}
where in the last line the expectation is taken with respect to the unconditional law of the random family $\{(\chi_l^1(j),\chi_l^2(j)),\, j\in {\cal P}(t-1)\}$.
Substituting in~(\ref{thing to bound}), in an obvious notation,
\begin{align*}
& \mathbb P\Big[(\chi_l^1(j^*),\chi_l^2(j^*))=(x,x') \big| {\cal P}(t), Z(t-1)=\mathbf{z}\Big]\\
& = \mathbb P\Big[\left. (\chi_l^1(j^*),\chi_l^2(j^*))=(x,x') \right| {\cal P}(t)\Big]
\\
& \, \times \!
\left(1 -\!\! \sum_{j\in{\cal P}(t-1)}\!\frac{1}{\sqrt{M}}\bigg\{\mathbb E\Big[\Psi_l\big(M_l^1(j),M_l^2(j)\big)\big| {\cal P}(t) \Big]-\mathbb E\Big[\Psi_l\big(\chi_l^1(j),\chi_l^2(j)\big)|{\cal P}(t)\Big]
\bigg\}\frac{\mathbb P_{Z^j}[\mathbf{z}]}{\mathbb P [\mathbf{z}]}\right)\!
+\!\mathcal{O}\bigg(\frac{1}{M}\bigg).
\end{align*}
In particular, the summand will vanish if $j$ and $j^*$ are not identical
by descent in at least one copy at locus $l$, since then the allelic states at locus $l$ in individuals $j$ and $j^*$ are independent.
Furthermore, the more distant the relationship between $j$ and $j^*$ (that is, the smaller the probability of their being identical by descent), the less information we glean about the allelic states in $j^*$ from
observing the trait value of individual $j$, resulting in a small contribution of the $j$-th term to the difference between the conditional and unconditional laws of $(\chi_l^1(j^*),\chi_l^2(j^*))$. The infinitesimal model
can be expected to break down for an individual if we know that one
of its close relatives had a particularly extreme trait value, or if
the pedigree is particularly inbred (so that there is little variation
between offspring).
\end{appendix}
\bibliographystyle{apalike}
<?php
namespace Zend\Di\Definition;
use Zend\Code\Annotation\AnnotationCollection;
use Zend\Code\Reflection;
use Zend\Code\Scanner\AggregateDirectoryScanner;
use Zend\Code\Scanner\DerivedClassScanner;
use Zend\Code\Scanner\DirectoryScanner;
use Zend\Code\Scanner\FileScanner;
/**
* Class definitions based on a set of directories to be scanned
*/
class CompilerDefinition implements DefinitionInterface
{
protected $isCompiled = false;
protected $introspectionStrategy = null;
protected $allowReflectionExceptions = false;
/**
* @var AggregateDirectoryScanner
*/
protected $directoryScanner = null;
protected $classes = [];
/**
* Constructor
*
* @param null|IntrospectionStrategy $introspectionStrategy
*/
public function __construct(IntrospectionStrategy $introspectionStrategy = null)
{
$this->introspectionStrategy = ($introspectionStrategy) ?: new IntrospectionStrategy();
$this->directoryScanner = new AggregateDirectoryScanner();
}
/**
* Set introspection strategy
*
* @param IntrospectionStrategy $introspectionStrategy
*/
public function setIntrospectionStrategy(IntrospectionStrategy $introspectionStrategy)
{
$this->introspectionStrategy = $introspectionStrategy;
}
/**
* @param bool $allowReflectionExceptions
*/
public function setAllowReflectionExceptions($allowReflectionExceptions = true)
{
$this->allowReflectionExceptions = (bool) $allowReflectionExceptions;
}
/**
* Get introspection strategy
*
* @return IntrospectionStrategy
*/
public function getIntrospectionStrategy()
{
return $this->introspectionStrategy;
}
/**
* Add directory
*
* @param string $directory
*/
public function addDirectory($directory)
{
$this->addDirectoryScanner(new DirectoryScanner($directory));
}
/**
* Add directory scanner
*
* @param DirectoryScanner $directoryScanner
*/
public function addDirectoryScanner(DirectoryScanner $directoryScanner)
{
$this->directoryScanner->addDirectoryScanner($directoryScanner);
}
/**
* Add code scanner file
*
* @param FileScanner $fileScanner
*/
public function addCodeScannerFile(FileScanner $fileScanner)
{
if ($this->directoryScanner === null) {
$this->directoryScanner = new DirectoryScanner();
}
$this->directoryScanner->addFileScanner($fileScanner);
}
/**
* Compile
*
* @return void
*/
public function compile()
{
/* @var $classScanner DerivedClassScanner */
foreach ($this->directoryScanner->getClassNames() as $class) {
$this->processClass($class);
}
}
/**
* @return ArrayDefinition
*/
public function toArrayDefinition()
{
return new ArrayDefinition(
$this->classes
);
}
/**
* @param string $class
* @throws \ReflectionException
*/
protected function processClass($class)
{
$strategy = $this->introspectionStrategy; // localize for readability
try {
$rClass = new Reflection\ClassReflection($class);
} catch (\ReflectionException $e) {
if (!$this->allowReflectionExceptions) {
throw $e;
}
return;
}
$className = $rClass->getName();
$matches = null; // used for regex below
// setup the key in classes
$this->classes[$className] = [
'supertypes' => [],
'instantiator' => null,
'methods' => [],
'parameters' => []
];
$def = &$this->classes[$className]; // localize for brevity
// class annotations?
if ($strategy->getUseAnnotations() == true) {
$annotations = $rClass->getAnnotations($strategy->getAnnotationManager());
if (($annotations instanceof AnnotationCollection)
&& $annotations->hasAnnotation('Zend\Di\Definition\Annotation\Instantiator')
) {
// @todo Instantiator support in annotations
}
}
/* @var $rTarget \Zend\Code\Reflection\ClassReflection */
$rTarget = $rClass;
$supertypes = [];
do {
$supertypes = array_merge($supertypes, $rTarget->getInterfaceNames());
if (!($rTargetParent = $rTarget->getParentClass())) {
break;
}
$supertypes[] = $rTargetParent->getName();
$rTarget = $rTargetParent;
} while (true);
$def['supertypes'] = $supertypes;
if ($def['instantiator'] === null) {
if ($rClass->isInstantiable()) {
$def['instantiator'] = '__construct';
}
}
if ($rClass->hasMethod('__construct')) {
$def['methods']['__construct'] = true; // required
try {
$this->processParams($def, $rClass, $rClass->getMethod('__construct'));
} catch (\ReflectionException $e) {
if (!$this->allowReflectionExceptions) {
throw $e;
}
return;
}
}
foreach ($rClass->getMethods(Reflection\MethodReflection::IS_PUBLIC) as $rMethod) {
$methodName = $rMethod->getName();
if ($rMethod->getName() === '__construct' || $rMethod->isStatic()) {
continue;
}
if ($strategy->getUseAnnotations() == true) {
$annotations = $rMethod->getAnnotations($strategy->getAnnotationManager());
if (($annotations instanceof AnnotationCollection)
&& $annotations->hasAnnotation('Zend\Di\Definition\Annotation\Inject')
) {
$def['methods'][$methodName] = true;
$this->processParams($def, $rClass, $rMethod);
continue;
}
}
$methodPatterns = $this->introspectionStrategy->getMethodNameInclusionPatterns();
// matches a method injection pattern?
foreach ($methodPatterns as $methodInjectorPattern) {
preg_match($methodInjectorPattern, $methodName, $matches);
if ($matches) {
$def['methods'][$methodName] = false; // check to see if this is required?
$this->processParams($def, $rClass, $rMethod);
continue 2;
}
}
// method
// by annotation
// by setter pattern,
// by interface
}
$interfaceInjectorPatterns = $this->introspectionStrategy->getInterfaceInjectionInclusionPatterns();
// matches the interface injection pattern
/** @var $rIface \ReflectionClass */
foreach ($rClass->getInterfaces() as $rIface) {
foreach ($interfaceInjectorPatterns as $interfaceInjectorPattern) {
preg_match($interfaceInjectorPattern, $rIface->getName(), $matches);
if ($matches) {
foreach ($rIface->getMethods() as $rMethod) {
if (($rMethod->getName() === '__construct') || !count($rMethod->getParameters())) {
// constructor not allowed in interfaces
// ignore methods without parameters
continue;
}
$def['methods'][$rMethod->getName()] = true;
$this->processParams($def, $rClass, $rMethod);
}
continue 2;
}
}
}
}
/**
* @param array $def
* @param \Zend\Code\Reflection\ClassReflection $rClass
* @param \Zend\Code\Reflection\MethodReflection $rMethod
*/
protected function processParams(&$def, Reflection\ClassReflection $rClass, Reflection\MethodReflection $rMethod)
{
if (count($rMethod->getParameters()) === 0) {
return;
}
$methodName = $rMethod->getName();
// @todo annotations here for alternate names?
$def['parameters'][$methodName] = [];
foreach ($rMethod->getParameters() as $p) {
/** @var $p \ReflectionParameter */
$actualParamName = $p->getName();
$fqName = $rClass->getName() . '::' . $rMethod->getName() . ':' . $p->getPosition();
$def['parameters'][$methodName][$fqName] = [];
// set the class name, if it exists
$def['parameters'][$methodName][$fqName][] = $actualParamName;
$def['parameters'][$methodName][$fqName][] = ($p->getClass() !== null) ? $p->getClass()->getName() : null;
$def['parameters'][$methodName][$fqName][] = !($optional = $p->isOptional());
$def['parameters'][$methodName][$fqName][] = $optional && $p->isDefaultValueAvailable() ? $p->getDefaultValue() : null;
}
}
/**
* {@inheritDoc}
*/
public function getClasses()
{
return array_keys($this->classes);
}
/**
* {@inheritDoc}
*/
public function hasClass($class)
{
return (array_key_exists($class, $this->classes));
}
/**
* {@inheritDoc}
*/
public function getClassSupertypes($class)
{
if (!array_key_exists($class, $this->classes)) {
$this->processClass($class);
}
return $this->classes[$class]['supertypes'];
}
/**
* {@inheritDoc}
*/
public function getInstantiator($class)
{
if (!array_key_exists($class, $this->classes)) {
$this->processClass($class);
}
return $this->classes[$class]['instantiator'];
}
/**
* {@inheritDoc}
*/
public function hasMethods($class)
{
if (!array_key_exists($class, $this->classes)) {
$this->processClass($class);
}
return (count($this->classes[$class]['methods']) > 0);
}
/**
* {@inheritDoc}
*/
public function hasMethod($class, $method)
{
if (!array_key_exists($class, $this->classes)) {
$this->processClass($class);
}
return isset($this->classes[$class]['methods'][$method]);
}
/**
* {@inheritDoc}
*/
public function getMethods($class)
{
if (!array_key_exists($class, $this->classes)) {
$this->processClass($class);
}
return $this->classes[$class]['methods'];
}
/**
* {@inheritDoc}
*/
public function hasMethodParameters($class, $method)
{
if (!isset($this->classes[$class])) {
return false;
}
return (array_key_exists($method, $this->classes[$class]['parameters']));
}
/**
* {@inheritDoc}
*/
public function getMethodParameters($class, $method)
{
if (!is_array($this->classes[$class])) {
$this->processClass($class);
}
return $this->classes[$class]['parameters'][$method];
}
}
Q: Radius of convergence of $\sum\limits_{n=1}^{\infty}\frac{(-1)^n}{n} z^{n(n+1)}$
I wish to show that the series
$\displaystyle\sum_{n=1}^{\infty}\dfrac{(-1)^n}{n} z^{n(n+1)}$
has radius of convergence $R=1$.
I know that if the series were of the form $\displaystyle\sum_{n=0}^{\infty}a_n z^n$ then $R=\displaystyle\lim_{n\to\infty}\dfrac{|a_n|}{|a_{n+1}|}=\displaystyle\lim_{n\to\infty}\dfrac{1}{\sqrt[n]{|a_n|}}$ (when these limits exist).
But this series is not of that form.
Can you give me a hint to show that $R=1$?
A: Hint. $\sqrt[n]{n}\to 1$ as $n\to\infty$.
A: If you know why the domain of convergence of a series is a disk plus some points of the boundary, then it is easy to calculate the radius in this case.
For $z=1$, the series is convergent by the alternating series test. For any $\varepsilon>0$, I say the series $$\sum_{n\geq 1} \frac{(-1)^n}{n}(1+\varepsilon)^{n(n+1)}$$ is divergent, for the general term does not tend to zero. Try to show it yourself; in any case, an argument is below.
By Bernoulli, the even terms are such that $\frac{(-1)^{2n}}{2n}(1+\varepsilon)^{2n(2n+1)}\geq\frac{1+2n(2n+1)\varepsilon}{2n}\gt (2n+1)\varepsilon,$ so the general term indeed does not tend to zero.
A: If $|z| > 1,$ then $|z|^{n(n+1)}/n>|z|^n/n \to \infty.$ Hence the $n$th term does not approach $0,$ and the series diverges. If $|z|<1,$ then $|z|^{n(n+1)}/n \le |z|^n.$ Since $\sum |z|^n = 1/(1-|z|) < \infty,$ the series converges absolutely, hence converges. It follows that the radius of convergence is $1.$
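The two cases in this last answer can be illustrated numerically (an illustration, not a proof; the sample moduli and the index $n=40$ are arbitrary choices):

```python
# Inside |z| < 1 the general term decays extremely fast, while for |z| > 1
# it blows up, so the series cannot converge there.
def term(n, z):
    return (-1) ** n / n * z ** (n * (n + 1))

inside = abs(term(40, 0.95))    # |z| = 0.95 < 1: tiny
outside = abs(term(40, 1.05))   # |z| = 1.05 > 1: huge
print(inside < 1e-30, outside > 1e30)
```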
Q: Install Ubuntu 18.04 with latest kernel
After changing some PC parts (most importantly the motherboard), my Ubuntu installation stopped working correctly, e.g. no Ethernet. After updating the kernel to the latest one, Ethernet worked again, but audio is still an issue. I would like to reinstall Ubuntu 18.04 to start fresh, but I already have problems because the ISO's kernel is too old for my motherboard (ASUS TUF GAMING B550M PLUS).
Is there a way to install 18.04 with an updated kernel? I need specific packages from 18.04.
\section{Introduction}
\subsection{Mapping Class Groups}
By the hyperelliptic involution, the disk with $3$ marked points, $D_3$, is the quotient of the compact orientable surface of genus one with one boundary component, $\chi_1^1$. Thus $\chi_1^1$ serves as a two-fold branched covering space of $D_3$. Not only are the surfaces related by a two-fold branched cover, but their mapping class groups are related as well. By the Birman--Hilden theorem, the mapping class group of the disk is isomorphic to a subgroup of the mapping class group of $\chi_1^1$, which is generated by Dehn twists. The mapping class group of $D_n$ is isomorphic to the braid group of $n$ strands \cite{3}. Artin's braid relations, defined on the braid group, are analogous to relations satisfied by Dehn twists. The map $\psi$ transfers the braid relation and the disjointness relation from $\text{MOD}(\chi_g^1)$ to the braid group of $2g+1$ marked points \cite{2}. Pictorially, these maps are presented in figure \ref{fig:Overview}.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{PaperOverview.png}
\caption{\label{fig:Overview} A representation of the maps between topological surfaces.}
\end{figure}
\subsection{The Braid Group}
Let $B_n$ denote the braid group of $n$ strands. Let $\sigma_1 ,\dots, \sigma_{n-1}$ denote the standard generators of $B_n$ and $\sigma_1^{-1} ,\dots, \sigma_{n-1}^{-1}$ denote their inverses (figure \ref{fig:genOfBn}). The Artin presentation of the braid group of $n$ strands is defined \cite{1}:
\begin{definition}
$B_n = \{ \sigma_1 ,..., \sigma_{n-1} \mid \sigma_i \sigma_j = \sigma_j \sigma_i \, \text{ if } \, \lvert i - j \rvert> 1 \, , \, \sigma_i \sigma_{i+1} \sigma_i = \sigma_{i+1} \sigma_{i} \sigma_{i+1} \, \text{ for } \, 1 \le i \le n-2 \}$
\end{definition}
If the left and right directions of figure \ref{fig:MOD(D_n)} are conceptualized as backward and forward motion in time, respectively, then the strands trace the motion of the marked points in the disk through time. The homeomorphism of $D_3$ presented in figure \ref{fig:MOD(D_n)} can be expressed as the composition of four generators, $\sigma_2^{-1} \sigma_2^{-1} \sigma_1 \sigma_2^{-1}$. Note these braid generators are written using standard functional composition notation (\textit{i.e.} $\sigma_1$ is applied second).
\begin{figure}[h]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{BraidGenerators3.png}
\caption{\label{fig:genOfBn}The standard generators of $B_n$}
\label{fig:sub2}
\end{subfigure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{mappingClassGroup3.png}
\caption{\label{fig:MOD(D_n)} A homeomorphism, $\sigma_2^{-1} \sigma_1 \sigma_2^{-1} \sigma_2^{-1}$, from MOD($D_{n}$)}
\end{subfigure}
\caption{Expressing elements of SMOD($D_3$) as elements of $B_3$. [5][6]}
\label{fig:test}
\end{figure}
\section{Algebraic Structure under the Burau Representation}
As we have seen, the braid group is an object of interest in geometric group theory. The Burau representation maps the braid group of $n$ strands to $GL_{n}\left(\mathbb{Z}[t,t^{-1}]\right)$. This map is shown at the bottom of figure \ref{fig:Overview}. In this work, we study properties of mapping class groups via the general linear group. Let us define the Burau representation.
\begin{definition}
Let $I_k$ denote the $k \, \times \, k$ identity matrix. The Burau representation of the braid group is the map:
\begin{align*}
\rho : B_n \rightarrow GL_{n}(\mathbb{Z}[t,t^{-1}]) && \sigma_i \rightarrow I_{i-1} \bigoplus \begin{bmatrix}
1-t&t\\1&0 \end{bmatrix} \bigoplus I_{n-i-1}
\end{align*}
\end{definition}
The algebraic structure created by the image of $\sigma_1 \sigma_2 \sigma_1$ under the Burau representation, seen in equation \ref{equ:zeta}, provides insight into the kernel of the evaluated Burau representation. The element $\sigma_1 \sigma_2 \sigma_1$ is selected for study because of its prominent role in Artin's presentation of the braid group. However, other than exposing this element, the braid relation itself is not used in this work.
\begin{equation} \label{equ:zeta}
\rho (\sigma_{1} \sigma_2 \sigma_1) = \zeta = \begin{bmatrix} 1-t&(1-t)t&t^{2}\\1-t&t&0\\1&0&0& \end{bmatrix}
\end{equation}
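As a sanity check on equation \ref{equ:zeta}, the product $\rho(\sigma_1)\rho(\sigma_2)\rho(\sigma_1)$ can be evaluated at an arbitrary numeric value of $t$ and compared with the stated matrix (a sketch, not part of the paper's argument):

```python
def matmul(A, B):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

t = 0.37  # arbitrary sample value of the indeterminate

s1 = [[1 - t, t, 0], [1, 0, 0], [0, 0, 1]]   # Burau image of sigma_1 in B_3
s2 = [[1, 0, 0], [0, 1 - t, t], [0, 1, 0]]   # Burau image of sigma_2 in B_3

zeta = matmul(matmul(s1, s2), s1)            # rho(sigma_1 sigma_2 sigma_1)
expected = [[1 - t, (1 - t) * t, t**2],
            [1 - t, t, 0],
            [1, 0, 0]]
ok = all(abs(zeta[i][j] - expected[i][j]) < 1e-12
         for i in range(3) for j in range(3))
print(ok)  # True
```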
At first glance, there is nothing notable about this image. Nevertheless, as we shall show, the cyclotomic polynomials appear in $\zeta^k$ for even $k$. The power $\zeta^k$ has two different forms, one for odd $k$ and one for even $k$. The algebraic structure that arises in the even powers is enough to prove that the Burau representation is unfaithful when evaluated at any primitive $\tau^{th}$ root of unity with $\tau >3$. Using proof by induction, we establish the form for even $k$.
\begin{lemma}
If $k \in \mathbb{N}$ such that $k$ is even, then the image of $(\sigma_{1} \sigma_2 \sigma_1)^k$ in $B_3$, under the Burau representation is:
\begin{equation}\label{equ:zetak}
\zeta^k = \begin{bmatrix}
a_{\frac{3}{2}k}&t a_{\frac{3}{2}k-2}&t^2 a_{\frac{3}{2}k-2}
\\a_{\frac{3}{2}k-2}&t a_{\frac{3}{2}k-1}&t^2 a_{\frac{3}{2}k-2}
\\a_{\frac{3}{2}k-2}&t a_{\frac{3}{2}k-2}&t^2 a_{\frac{3}{2}k-3}
\end{bmatrix}
\end{equation}
Let $\phi_i$ be the $i^{th}$ cyclotomic polynomial and $Z = \{ \, d \in \mathbb{N} \, \vert \, d \text{ divides } m+2 \, \}$. Define the polynomial $a_m$ such that:
\begin{align}\label{equ:am}
a_m &= \begin{cases}
\frac{1+t^{m+1}+t^{m+2}}{\phi_3}, \forall m \equiv 0 \text{ mod } 3
\\ - \frac{1}{\phi_3} \prod\limits_{d \in Z}\phi_d = \frac{1-t^{m+2}}{\phi_3}, \forall m \equiv 1 \text{ mod } 3
\\ \frac{1+t^{m}+t^{m+2}}{\phi_3}, \forall m \equiv 2 \text{ mod } 3
\end{cases}
\end{align}
\end{lemma}
\begin{proof}
Assume $k \in \mathbb{N}$ is even, and consider the base case $k=2$:
\begin{equation*}
\zeta^2 = \begin{bmatrix}
\frac{1}{\phi_3}( 1+t^4+t^5)&\frac{1}{\phi_3}(t(1-t^3))&\frac{1}{\phi_3}(t^2(1-t^3)) \\\frac{1}{\phi_3}(1-t^3)&\frac{1}{\phi_3}(t+t^3+t^5)&\frac{1}{\phi_3}(t^2(1-t^3))
\\\frac{1}{\phi_3}(1-t^3)&\frac{1}{\phi_3}(t(1-t^3))&\frac{1}{\phi_3}(t^2(1+t+t^2)) \end{bmatrix} = \begin{bmatrix}a_3&t a_1&t^2 a_1\\a_1&t a_2&t^2 a_1\\a_1&t a_1&t^2 a_0\end{bmatrix}
\end{equation*}
Assuming the induction hypothesis, we examine the form of $\zeta^{k+2}$.
\begin{equation*}
\zeta^{k+2} = \zeta^k*\zeta^2 = \begin{bmatrix}
a_{\frac{3}{2}k}&t a_{\frac{3}{2}k-2}&t^2 a_{\frac{3}{2}k-2}
\\a_{\frac{3}{2}k-2}&t a_{\frac{3}{2}k-1}&t^2 a_{\frac{3}{2}k-2}
\\a_{\frac{3}{2}k-2}&t a_{\frac{3}{2}k-2}&t^2 a_{\frac{3}{2}k-3}
\end{bmatrix}*\begin{bmatrix}a_3&t a_1&t^2 a_1\\a_1&t a_2&t^2 a_1\\a_1&t a_1&t^2 a_0\end{bmatrix}
\end{equation*}
Consider element [1,1] of the matrix $\zeta^k$, since $k$ is even then $\frac{3}{2}k \equiv 0$ mod $3$, $\frac{3}{2}k-1 \equiv 2$ mod $3$ and $\frac{3}{2}k-3 \equiv 0$ mod $3$. Explicitly multiplying the first row and the first column we get the $[1,1]$ element of the matrix.
\begin{align*}
[1,1] &= a_{\frac{3}{2}k}*a_3 + t a_{\frac{3}{2}k-2}*a_1 + t^2 a_{\frac{3}{2}k-2}*a_1
\\
&= \frac{1}{\phi_3}(1-t +t + t^2 - t^2 +t^3 - t^3 + t^{\frac{3}{2}k+1} -t^{\frac{3}{2}k+1} +t^{\frac{3}{2}k+2}-t^{\frac{3}{2}k+2}
\\& + t^{\frac{3}{2}k+2}- t^{\frac{3}{2}k+2} + t^{\frac{3}{2}k+3} -t^{\frac{3}{2}k+3} +t^{\frac{3}{2}k+4}+t^{\frac{3}{2}k+5})
\\&= \frac{1}{\phi_3}(1+t^{\frac{3}{2}k+4}+t^{\frac{3}{2}k+5}), \intertext{\textit{Using the definition of $a_m$}, from equation \ref{equ:am}.}
&= a_{\frac{3}{2}(k+2)}
\end{align*}
We see that the conjectured form of the $[1,1]$ entry of $\zeta^{k+2}$ holds. Similar arguments prove each of the other eight entries of the even form.
\end{proof}
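The base case $k=2$ of the lemma can also be checked numerically at an arbitrary sample value of $t$ (an illustration only, since the lemma is proved symbolically above):

```python
def matmul(A, B):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

t = 0.6                      # arbitrary sample value
phi3 = 1 + t + t**2          # third cyclotomic polynomial evaluated at t

def a(m):
    # The polynomials a_m from the lemma, by the residue of m mod 3.
    if m % 3 == 0:
        return (1 + t**(m + 1) + t**(m + 2)) / phi3
    if m % 3 == 1:
        return (1 - t**(m + 2)) / phi3
    return (1 + t**m + t**(m + 2)) / phi3

zeta = [[1 - t, (1 - t) * t, t**2], [1 - t, t, 0], [1, 0, 0]]
z2 = matmul(zeta, zeta)      # the k = 2 base case
expected = [[a(3), t * a(1), t**2 * a(1)],
            [a(1), t * a(2), t**2 * a(1)],
            [a(1), t * a(1), t**2 * a(0)]]
ok = all(abs(z2[i][j] - expected[i][j]) < 1e-12
         for i in range(3) for j in range(3))
print(ok)  # True
```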
\section{The Burau Representation's Kernel}
Now that the general form of $\zeta^{k}$ is established for even $k$, we impose the constraint $\zeta^{k}=I$ to explore which roots yield the identity matrix. Given this constraint, it will be shown that any primitive $\tau^{th}$ root of unity with $\tau > 3$ is a solution to $\zeta^{k}=I$ for some even $k$. Furthermore, in corollary \ref{crl:root}, we determine the minimum $k$ such that the $\tau^{th}$ root of unity is a solution to $\zeta^{k}=I$.
\begin{theorem} \label{thrm:kernel}
If $t = e^{\frac{2 \pi \imath}{\tau}}$ then for any given $\tau \in \mathbb{N}$, such that $\tau > 3$, there exists $k \in \mathbb{Z}$ such that $\zeta^k = I_3$.
\end{theorem}
\begin{proof}
Let $k$ be even, and first consider the off-diagonal terms. Since $\frac{3}{2}k-2 \equiv 1$ mod $3$, the form of equation \ref{equ:zetak} implies that the off-diagonal elements are proportional to $a_{\frac{3}{2}k-2}$. Thus, for $\zeta^k = I_3$, definition \ref{equ:am} implies the off-diagonal elements yield $\frac{1-t^{\frac{3k}{2}}}{\phi_3} =0$. After redistributing terms, we see that as long as $t$ is not a root of $\phi_3$, then:
\begin{equation} \label{equ:offdiagonal}
t^{\frac{3k}{2}} = 1
\end{equation}
We shall use equation \ref{equ:offdiagonal} to show that $\zeta^k = I$. Consider the diagonal elements of the even form. Since $k$ is even then $\frac{3}{2}k \equiv 0$ mod $3$, $\frac{3}{2}k-1 \equiv 2$ mod $3$ and $\frac{3}{2}k-3 \equiv 0$ mod $3$. Thus the diagonal elements can be evaluated as:
\begin{align*}
a_{\frac{3}{2}k} &= \frac{1+t^{\frac{3}{2}k+1}+t^{\frac{3}{2}k+2}}{\phi_3} = \frac{ 1+(t)+(t^2)}{\phi_3} = \frac{1+t+t^2}{\phi_3}
\\&= 1
\\t a_{\frac{3}{2}k-1} &= t \cdot \frac{1+t^{\frac{3}{2}k-1}+t^{\frac{3}{2}k+1}}{\phi_3} = t \cdot \frac{1+(t^{-1})+(t)}{\phi_3}=\frac{1+ t +t^2}{\phi_3}
\\&= 1
\\t^2 a_{\frac{3}{2}k-3} &= t^2 \cdot \frac{1+t^{\frac{3}{2}k-2}+t^{\frac{3}{2}k-1}}{\phi_3} = t^2 \cdot \frac{1+t^{-2} + t^{-1}}{\phi_3} = \frac{1+t+t^2}{\phi_3}
\\&= 1
\end{align*}
In conclusion, if $t^{\frac{3k}{2}} = 1$ then $\zeta^k = I_3$.
Additionally, it was assumed that $t = e^{\frac{2 \pi \imath}{\tau}}$, which implies $t^{\tau}=1$. Comparing this with the off-diagonal constraint $t^{\frac{3}{2}k}=1$, both equations are satisfied if and only if $\frac{3}{2}k = j\tau$ for some $j \in \mathbb{Z}$. Together with the requirement that $k$ be even, this is satisfied by $k = 2 \tau$. Therefore, given any integer $\tau > 3$ in $t = e^{\frac{2 \pi \imath}{\tau}}$, there exists a $k$ such that $\zeta^k = I_3$.
\end{proof}
\begin{corollary} \label{crl:root}
For a given primitive $\tau^{th}$ root of unity, the minimum even power, $k$, that makes $(\sigma_1 \sigma_2 \sigma_1)^k$ map to the identity matrix is $k=\frac{2}{3}\tau$ for $\tau \equiv 0 \, \text{mod} \, 3$ and $k=2\tau$ for $\tau \equiv 1 \, \text{mod} \, 3$ or $\tau \equiv 2 \, \text{mod} \, 3$.
\end{corollary}
\begin{proof}
In theorem \ref{thrm:kernel} we encountered two conditions: first, that $t^{\frac{3k}{2}} = 1$, and second, that $t = e^{\frac{2 \pi \imath}{\tau}}$, which implies $t^{\tau}=1$. Together these constraints showed that $t^{\frac{3}{2}k}=1$ exactly when $\frac{3}{2}k = j\tau$ for some $j \in \mathbb{N}$ (i.e.\ $\frac{3}{2}k$ is a multiple of $\tau$). We now classify the minimal $k$ according to $\tau \text{ mod } 3$:
\begin{enumerate}
\item $\tau = 3l$.
The condition $\frac{3}{2}k = j\tau$ becomes $\frac{3}{2}k = 3jl$, i.e.\ $\frac{k}{2} = jl$. Hence $k$ must be of the form $2jl$, and the smallest such value is $k = 2l$. Since $3l=\tau$, the minimum $k$ which yields the identity is $k=\frac{2}{3}\tau$.
\item $\tau = 3l+1$ or $\tau = 3l+2$.
Neither value of $\tau$ is divisible by $3$. Writing $\frac{3}{2}k = j\tau$ as $3k = 2j\tau$, since $3$ divides neither $2$ nor $\tau$, it must divide $j$; writing $j = 3j'$ gives $k = 2j'\tau$. The smallest such value is $k = 2\tau$, which is therefore the minimum $k$ yielding the identity.
\end{enumerate}
\end{proof}
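As an illustrative check (ours, not part of the original argument): if $\tau = 6$, then $\tau \equiv 0 \, \text{mod} \, 3$ and the corollary gives $k = \frac{2}{3} \cdot 6 = 4$; indeed $t^{\frac{3}{2}k} = t^{6} = 1$, while the only smaller even power, $k=2$, gives $\frac{3}{2}k = 3$, which is not a multiple of $6$. If instead $\tau = 5$, the corollary gives $k = 2 \cdot 5 = 10$, and $t^{\frac{3}{2}k} = t^{15} = (t^{5})^{3} = 1$, while for even $k < 10$ the exponent $\frac{3}{2}k \in \{3, 6, 9, 12\}$ is never a multiple of $5$.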
It has now been shown that, when the Burau representation is evaluated at any primitive root of unity excepting the first three, two distinct braids map to the identity matrix. This novel result is exactly what is needed to show the unfaithfulness of the Burau representation. In the above proofs we used the specific element $\sigma_1 \sigma_2 \sigma_1 \in B_3$; therefore, once unfaithfulness is shown for $B_3$ in corollary \ref{crl:B3faith}, the result will be generalized to $B_n$, $\forall n \geq 3$, in corollary \ref{crl:Bnfaith}.
\begin{corollary}\label{crl:B3faith}
If the Burau representation of $B_3$ is evaluated at any primitive root of unity, excepting the first three, then the representation is unfaithful.
\end{corollary}
\begin{proof}
In theorem \ref{thrm:kernel} we showed that if $t$ is evaluated at any primitive root of unity, excepting the first three, then there exists a $k$ such that $\zeta^k = I_3$. This implies that the kernel of the Burau representation is non-trivial, and therefore the map is unfaithful.
\end{proof}
\begin{corollary}\label{crl:Bnfaith}
For all $n \ge 3$, the Burau representation of $B_n$ evaluated at any primitive root of unity, excepting the first three, is unfaithful.
\end{corollary}
\begin{proof}
We must first note that $\sigma_1 \sigma_2 \sigma_1$ does not exist in $B_1$ or $B_2$. Restricting to $n \geq 3$, the image of $\sigma_i \sigma_{i+1} \sigma_i \in B_n$ can be expressed as the block matrix:
\begin{equation}
\zeta = \begin{bmatrix} \begin{array}{c|c|c}
I_{i-1} & 0 & 0 \\ \hline
0& \begin{smallmatrix} 1-t&(1-t)t&t^{2} \\1-t&t&0\\1&0&0 \end{smallmatrix} & 0 \\
\hline
0 & 0 & I_{n-(i+2)}
\end{array} \end{bmatrix}
\end{equation}
When this image is expressed in block form it is easy to see that under matrix multiplication $\zeta^k = I_{i-1} \oplus \begin{bmatrix} 1-t&(1-t)t&t^{2} \\1-t&t&0\\1&0&0 \end{bmatrix}^k \oplus I_{n-(i+2)}$.
The middle block has the same general form as equation \ref{equ:zetak}, which was analyzed in theorem \ref{thrm:kernel}. Thus, corollary \ref{crl:B3faith} extends to all braid groups on three or more strands.
\end{proof}
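Outside the paper's own text, the explicit $3\times 3$ block written above makes the kernel computation easy to check numerically. The following pure-Python sketch is our illustration, not part of the original argument; it assumes only that matrix and the minimal exponents of corollary \ref{crl:root}.

```python
# Numerical sanity check (not from the paper): the Burau image zeta of
# sigma_1 sigma_2 sigma_1, evaluated at a primitive tau-th root of unity,
# satisfies zeta^k = I_3 at the minimal even k from the corollary
# (k = 2*tau/3 if 3 divides tau, otherwise k = 2*tau).
import cmath

def zeta(t):
    # Explicit 3x3 Burau image of sigma_1 sigma_2 sigma_1 (middle block above).
    return [[1 - t, (1 - t) * t, t ** 2],
            [1 - t, t, 0],
            [1, 0, 0]]

def matmul(A, B):
    return [[sum(A[i][m] * B[m][j] for m in range(3)) for j in range(3)]
            for i in range(3)]

def matpow(A, k):
    R = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    for _ in range(k):
        R = matmul(R, A)
    return R

def is_identity(M, tol=1e-9):
    return all(abs(M[i][j] - (1 if i == j else 0)) < tol
               for i in range(3) for j in range(3))

for tau in range(4, 13):
    t = cmath.exp(2j * cmath.pi / tau)            # primitive tau-th root of unity
    k = 2 * tau // 3 if tau % 3 == 0 else 2 * tau  # minimal k from the corollary
    assert is_identity(matpow(zeta(t), k))         # zeta^k = I_3
    for smaller in range(2, k, 2):                 # no smaller even power works
        assert not is_identity(matpow(zeta(t), smaller))
```

The inner loop also confirms the minimality claim: no smaller even power of $\zeta$ yields the identity.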
\section{Results}
In this paper, we studied the braid group using representation theory. It was established in theorem \ref{thrm:kernel} that the Burau image of $\sigma_1 \sigma_2 \sigma_1$ generates a finite cyclic group when evaluated at a primitive root of unity. The periodicity of the generated group was explicitly determined in corollary \ref{crl:root}. Using this periodicity, it was shown that the Burau representation of $B_3$ is unfaithful when evaluated at any primitive root of unity excepting the first three (corollary \ref{crl:B3faith}), and this unfaithfulness was extended to all braid groups $B_n$ with $n \geq 3$ (corollary \ref{crl:Bnfaith}).
\bibliographystyle{amsplain}
He is known as l'abbé Gorret, which in French means Abbot Gorret. This designation is in fact inaccurate, since Gorret was never an abbot: in French the term "abbé" is used in a broader sense, referring to a clergyman to whom the community attributes particular importance.
A native of the Valtournenche, he is known above all for his fame as an alpinist (he was also part of the first Italian expedition to the Matterhorn), as well as for his nonconformist temperament, on account of which he was often assigned to high-mountain parishes that he himself regarded as an exile.
Biography
Amé Gorret was born on 25 October 1836 in Valtournenche, the son of Jean-Antoine Gorret and Marie-Véronique Carrel. Amé is his name in the Valtournain patois, though it is sometimes given in the standardized French form Aimé, which has the same meaning.
He completed his basic schooling in his native village under the guidance of the local priest and later of the vicar. At their suggestion he was sent to study at the seminary in Aosta, where he was ordained on 25 May 1861.
On 1 August of that same year, while climbing to Champorcher, the seat of his first parish, he met King Victor Emmanuel II of Savoy, with whom he later formed a friendship founded chiefly on a shared passion for the mountains and a shared disdain for the formalities of the political life of the time.
Gorret was often transferred abruptly from one parish to another: as early as 1864 he moved from Champorcher to Saint-Pierre, and in 1865 to Cogne.
He was part of the rope party that, on 17 July of that year, together with his fellow villagers Jean-Antoine Carrel, Jean-Baptiste Bich and Jean-Augustin Meynet, made the first Italian ascent of the Matterhorn, preceded by only three days by the first ascent of the mountain, led by the Englishman Edward Whymper.
In 1866 he was transferred to the parish of Valgrisenche. Between 1869 and 1880 he changed post frequently, dividing his time between the running of small mountain parishes, teaching at the seminary, intellectual work and his passion for the mountains. The latter brought him into contact with many alpinists of the day, gathered in the newly founded Club Alpino Italiano.
From 1881 he was in the Dauphiné, until the French government ordered the repatriation of all foreign priests (1884). In the years that followed he was rector at Saint-Jacques-des-Allemands, at the head of the Val d'Ayas.
In 1902 he began to suffer from failing eyesight and was operated on in Turin the following year. In 1905, as his health worsened, he was transferred once more to the priory of Saint-Jacquême in Saint-Pierre, where he died on 4 November 1907.
Trivia
Amé Gorret is known above all as Abbé Gorret, but also as Aimé Gorret in French, Amato Gorret in Italian, Amatus Gorret in Latin, and by other nicknames such as l'Ours de la montagne, l'Ermite de Saint-Jacques and Le Grand Gorret.
Works
Amé Gorret, Ascension du Mont-Cervin, Feuille d'Aoste n. 41, 43, 44, Aoste, 1865
Amé Gorret, Claude-Nicolas Bich, Guide Illustré de la Vallée d'Aoste, 1876.
Amé Gorret, Victor-Emmanuel sur les Alpes, 1879.
Notes
Bibliography
Amé Gorret, Autobiographie et écrits divers, Administration Communale de Valtournenche, 1998.
Alexis Betemps, Du dernier ours au premier ethnographe : l'abbé Aimé Gorret, Le Monde alpin et rhodanien, 2003, vol. 31, n. 1-4, pp. 149-159.
Saverio Favre (ed.), L'Ermite de Saint-Jacques, Amé Gorret (1836-1907), Musumeci Editore, Quart, 2007, ISBN 978-88-7032-814-1.
Alfonso Bernardi (ed.), Écrits de l'abbé Amé Gorret, Bologna: Arti grafiche Tamari, 1966.
Enrico Camanni, Cieli di pietra - La vera storia di Amé Gorret, Vivalda editori, Torino, 1997.
See also
Aosta Valley
History of the Aosta Valley
External links
Page dedicated to the Abbé Gorret on Varasc.it
# Plurky
Yet another Plurk API wrapper. Or something to play with while the Plurk team is busy optimizing the site.
## Installation
Add this line to your application's `Gemfile`:
gem 'plurky'
And then execute:
$ bundle
Or install it yourself as:
$ gem install plurky
## Documentation
* [Plurky][]
* [Plurk API 2.0][]
[Plurky]: http://rdoc.info/gems/plurky
[Plurk API 2.0]: https://www.plurk.com/API
## Examples
```ruby
require 'plurky'
client = Plurky.client
client.get '/APP/Profile/getPublicProfile', :user_id => 34
```
## Configuration
Applications that make requests on behalf of a single Plurk user can pass global configuration options as a block to the `Plurky.configure` method.
```ruby
Plurky.configure do |config|
config.consumer_key = YOUR_CONSUMER_KEY
config.consumer_secret = YOUR_CONSUMER_SECRET
config.oauth_token = YOUR_OAUTH_TOKEN
config.oauth_token_secret = YOUR_OAUTH_TOKEN_SECRET
end
```
Alternately, you can set the following environment variables:
```
PLURK_CONSUMER_KEY
PLURK_CONSUMER_SECRET
PLURK_OAUTH_TOKEN
PLURK_OAUTH_TOKEN_SECRET
```
After configuration, requests can be made like so:
```ruby
Plurky.get '/APP/Timeline/getPlurks'
```
## Implemented APIs
* status
* update
## Obtaining access tokens
Plurky will not support obtaining access tokens.
If you do not intend to provide service to other users, you can obtain an access token from the [test console][].
(You may need to [create a Plurk App][] first to get a consumer key and secret pair.)
[test console]: https://www.plurk.com/OAuth/test
[create a Plurk App]: https://www.plurk.com/PlurkApp/register
## TODO
* Improve test coverage.
* Add APIs.
* Catch errors from Plurk.
* APIs should return consistent data. (e.g. status & update.)
## Credits
Most of the code is copy-pasted from the [twitter][] gem.
[twitter]: https://github.com/sferik/twitter
## Copyright
Copyright (c) 2012 - 2013 Chun-wei Kuo. See [LICENSE][] for details.
[license]: https://github.com/Domon/plurky/blob/master/LICENSE.md
Maurice (Moritz) Loewy (Mariánské Lázně, 15 April 1833 – Paris, 15 October 1907) was a French astronomer.
Born in Mariánské Lázně, now part of the Czech Republic, he moved with his parents to Vienna in 1841. Loewy became an assistant at the Vienna Observatory, working on celestial mechanics. The Austro-Hungarian institution did not allow a Jew to advance in his career without renouncing his religion and converting to Catholicism. The observatory's director, Karl Ludwig von Littrow, corresponding directly with Urbain Le Verrier, director of the Paris Observatory, obtained a position for him there in 1860. After moving to France, Loewy acquired French nationality.
He studied the orbits of asteroids and comets, and worked on the measurement of longitude, improving the precision of the Connaissance des Temps. He also studied optics and the aberration of light.
He was elected a member of the Bureau des Longitudes in 1872 and of the Académie des Sciences in 1873.
Loewy was director of the Paris Observatory from 1896 to 1907, reorganizing the institution and establishing a department of physical astronomy. For a decade he worked with Pierre Puiseux on an atlas of the Moon comprising 10,000 photographs, L'Atlas photographique de la Lune (1910), which remained fundamental to lunar geography for the following half century.
His name is perpetuated by the lunar crater Loewy, and the asteroid 253 Mathilde is possibly named after his wife.
He died in Paris during a government meeting, struck down by cardiac arrest.
Honours
1889 – Gold Medal of the Royal Astronomical Society